PIL before Supreme Court seeks regulatory framework for Artificial Intelligence tools

The petition seeks urgent judicial intervention of the Supreme Court to direct the Union of India to establish a national AI regulatory authority, enforce ethical licensing of AI systems, and ensure timely redressal of infringements.
A PIL has been filed before the Supreme Court seeking directions to the Union of India to frame and notify a comprehensive regulatory and licensing framework for Artificial Intelligence tools, including those capable of generating images, videos, and audio (“deepfakes”), and to ensure prevention and redressal of misuse of such AI tools.
The plea, filed by Aarati Sah, also seeks a direction to Meta Platforms (Instagram and Facebook) and Google LLC, along with other entities owning AI tools, to institute effective, transparent, and time-bound grievance-redressal mechanisms ensuring prompt removal of AI-generated content impersonating real individuals.
The constitution of an Expert Committee comprising government officials, jurists, technologists, and civil-society members to recommend standards for ethical AI deployment has also been sought.
Filed through AOR Anilendra Pandey, the petition states that the unregulated use of such AI systems has led to widespread misuse, infringing the rights to privacy, dignity, and personality of citizens and public figures alike.
"Over the last year, AI technology has become easily accessible in India, and in the last few months, there has been a sharp rise in AI-generated content impersonating celebrities, journalists, and other public personalities. Multiple High Courts have already granted interim protection to victims of deepfakes, reflecting the urgency and genuineness of the problem. However, the absence of a national regulatory framework has resulted in fragmented judicial intervention and leaves citizens vulnerable to exploitation and defamation", the petition submits.
Citing global jurisdictions, including the European Union, the United States, China, and Singapore, which have implemented risk-based AI regulations, content-labelling requirements, and enforcement mechanisms to prevent misuse of synthetic media, the petition argues that the absence of such safeguards in India, coupled with ineffective reporting and grievance mechanisms on platforms such as Meta and Google/YouTube, has allowed AI-generated misinformation to spread unchecked, undermining constitutional rights under Articles 14 and 21.
The Supreme Court has further been told that this unregulated exposure to artificial intelligence has triggered a surge of litigation before High Courts having original civil jurisdiction, particularly the Delhi and Bombay High Courts, wherein public figures have sought protection of their personality rights.
"Each Hon’ble High Court, including the Courts of Delhi and Bombay, has recognized that the misuse of AI-generated images, videos, and audio (“deepfakes”) infringes the right to privacy and reputation, which are fundamental aspects protected under Article 21 of the Constitution of India. However, the judicial interventions to date have been piecemeal and case specific, without a uniform regulatory framework. It is apposite that this Hon’ble Court, in anticipation of a rapidly growing number of similar cases, directs the establishment of a comprehensive national regime that prevents such violations proactively, thereby safeguarding citizens’ fundamental rights and preventing the overburdening of the higher judiciary with repetitive litigation on matters of the same nature", the plea adds.
Relying on the judgment in Justice K.S. Puttaswamy (Retd.) v. Union of India (2017), wherein the Supreme Court recognized the right to privacy as an intrinsic part of Article 21, the petitioner argues that the creation and circulation of AI-generated content that superimposes an individual’s face or voice without consent grossly violates informational and bodily privacy, as well as an individual’s autonomy over their persona.
It has also been submitted that intermediary liability provisions under Section 79 of the Information Technology Act, 2000, are being misused by global digital platforms to evade their statutory responsibilities: despite receiving actual notice through their in-built reporting mechanisms, these platforms routinely fail to remove infringing or objectionable AI-generated content in a timely manner. Instead of taking down such content, they often disable the complainant’s access or visibility to it, thereby shifting the burden onto the aggrieved individual rather than addressing the violation itself, the court has been told.
Case Title: AARATI SAH vs. UNION OF INDIA AND ORS