News

AI vs Authenticity: How India is Confronting the Rise of Deepfakes

In a recent and pivotal legal development, actors Aishwarya Rai and Abhishek Bachchan filed a compelling suit in the Delhi High Court, challenging the unauthorized digital exploitation of their likenesses. The case builds on earlier landmark actions to safeguard personality rights, particularly against misuse through generative AI, and highlights a formidable global challenge: the growing menace of deepfakes and the erosion of digital authenticity.

The term “deepfake” (a fusion of “deep learning” and “fake”) originated in 2017, when an online user employed a neural network to generate fabricated media. The entertainment industry was the first to feel its impact as celebrity faces were crudely inserted into compromising content, sparking legal and ethical alarm. What began as crude forgeries has since evolved into highly sophisticated, photorealistic manipulations, thanks to advances in artificial intelligence and Generative Adversarial Networks (GANs). A GAN pits two neural networks against each other: one generates synthetic content while the other tries to distinguish it from real data, and each round of this contest makes the forgeries harder to flag. That adversarial feedback loop has propelled deepfakes to near-perfect realism, making them difficult to detect with the naked eye.
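The adversarial dynamic described above can be sketched in a few dozen lines. Below is a minimal, illustrative GAN training loop in PyTorch on toy data; the network sizes, learning rates, and stand-in “real” data are assumptions for demonstration, not any production deepfake model.

```python
# Minimal GAN sketch in PyTorch, illustrating the adversarial loop described
# above: a generator learns to produce synthetic samples while a discriminator
# learns to tell them apart from real data. Toy 1-D vectors stand in for images.
import torch
import torch.nn as nn

latent_dim = 16      # size of the random noise vector fed to the generator
data_dim = 64        # size of each (toy) "real" sample

generator = nn.Sequential(
    nn.Linear(latent_dim, 128), nn.ReLU(),
    nn.Linear(128, data_dim), nn.Tanh(),      # outputs a synthetic sample
)
discriminator = nn.Sequential(
    nn.Linear(data_dim, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1),                        # logit: real vs. fake
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(1000):
    real = torch.randn(32, data_dim)          # placeholder for real training data
    noise = torch.randn(32, latent_dim)
    fake = generator(noise)

    # Discriminator update: push real samples toward label 1, fakes toward 0.
    d_loss = loss_fn(discriminator(real), torch.ones(32, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(32, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator update: try to make the discriminator label fakes as real.
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

Each generator update exploits the discriminator's current weaknesses, and each discriminator update closes them, which is precisely the contest that drives deepfake realism upward.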

While deepfake technology has legitimate applications—such as recreating performances in films or aiding speech therapy in medicine—its darker potential dominates. From political disinformation to financial fraud, the technology poses serious risks to society.

In response, a parallel industry of detection tools has emerged. Tech majors and startups are developing algorithms to spot subtle anomalies in AI-generated content, such as irregular blinking or inconsistent lighting. Notably, Google has collaborated with UC Riverside on UNITE, a detection system, while India has introduced Faceoff, a homegrown multimodal AI platform under the “Make in India” banner. Faceoff’s Adaptive Cognito Engine (ACE) assesses biometric and behavioral cues—including facial micro-expressions, voice sentiment, and eye movement—to deliver a comprehensive “trust score.” Unlike tools that rely on a single parameter, its holistic approach enhances detection accuracy and data security.
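To make the multimodal idea concrete, here is a small, hypothetical sketch of how per-cue scores might be fused into a single trust score. The cue names, weights, and the weighted-average fusion rule are illustrative assumptions only; they do not describe the internals of Faceoff's ACE or Google's UNITE.

```python
# Hypothetical sketch of multimodal score fusion, loosely inspired by the
# "trust score" idea described above. Cue names, weights, and the fusion rule
# are illustrative assumptions, not any vendor's actual method.
from dataclasses import dataclass

@dataclass
class CueScores:
    """Per-cue authenticity scores in [0, 1], where 1.0 means 'looks genuine'."""
    blink_regularity: float       # e.g. from an eye-landmark tracker
    lighting_consistency: float   # e.g. from a shading/shadow analyser
    micro_expression: float       # e.g. from a facial-action classifier
    voice_sentiment_match: float  # e.g. audio emotion vs. facial emotion

def trust_score(cues: CueScores) -> float:
    """Fuse individual cue scores into a single trust score in [0, 1].

    A weighted average is the simplest possible fusion rule; a real system
    would likely learn the weights (or a nonlinear combiner) from labelled data.
    """
    weights = {
        "blink_regularity": 0.2,
        "lighting_consistency": 0.3,
        "micro_expression": 0.3,
        "voice_sentiment_match": 0.2,
    }
    return sum(weights[name] * getattr(cues, name) for name in weights)

# Example: a clip whose lighting looks off but whose other cues seem normal.
clip = CueScores(blink_regularity=0.9, lighting_consistency=0.4,
                 micro_expression=0.85, voice_sentiment_match=0.8)
print(f"trust score: {trust_score(clip):.2f}")   # roughly 0.7 on this toy input
```

The design point the sketch illustrates is the one the article makes: a forger who defeats one cue (say, blinking) still has to defeat every other cue at once, which is why multi-signal fusion tends to be more robust than any single-parameter detector.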

The Delhi High Court's intervention, coupled with technological responses like Faceoff and UNITE, marks a crucial step in building a framework for digital personhood. Together, legal protections and detection innovations form a powerful response to the phantom likenesses that threaten to distort reality. As digital and physical worlds converge, safeguarding one's online persona is no longer the privilege of celebrities but a fundamental right for all, ushering in an era where the authenticity of our digital selves becomes a cornerstone of civil liberties.

Author: Dr Subroto Kumar Panda