Government Notifies 3-Hour Takedown Rule for Deepfakes, AI Content

Centre Introduces Mandatory Labels, Faster Removal for Synthetic Media
The Union Government has notified stringent amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, imposing enhanced obligations on online platforms for the regulation of artificial intelligence-generated and synthetic content, including deepfakes.
Under the amended framework, intermediaries such as social media platforms will be required to remove flagged AI-generated or deceptive content within three hours of receiving directions from a court or a competent authority, significantly reducing the earlier compliance window of 36 hours.
The amendments mark one of the most decisive regulatory interventions by the Centre in response to the growing misuse of generative artificial intelligence in the digital ecosystem.
The amendments were notified by the Ministry of Electronics and Information Technology (MeitY) on February 10 and will come into force from February 20, 2026.
The operative effect of the notification is that platforms must act with near-immediacy once content is flagged, particularly where such content is illegal, deceptive, sexually exploitative, non-consensual, or amounts to impersonation.
As per the amended rules, platforms must deploy automated tools and technical safeguards to prevent the hosting and dissemination of unlawful AI-generated content. The regulatory emphasis is on proactive detection, rather than mere reactive takedown.
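The rules do not prescribe any particular tooling, but as a purely illustrative sketch, one common building block of proactive detection is a pre-publication filter that screens uploads against a blocklist of content already adjudged unlawful. The names below (BLOCKED_HASHES, pre_publication_check) are hypothetical; production systems typically rely on perceptual hashing (for example, PhotoDNA-style matching) rather than exact digests.

```python
import hashlib

# Hypothetical blocklist of SHA-256 digests of content already adjudged
# unlawful. Exact digests are a simplified stand-in for the perceptual
# hashes real platforms use.
BLOCKED_HASHES = {
    # SHA-256 of the bytes b"test", used here as a dummy entry.
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def is_blocked(content: bytes) -> bool:
    """Return True if the content's digest matches a known-unlawful entry."""
    return hashlib.sha256(content).hexdigest() in BLOCKED_HASHES

def pre_publication_check(content: bytes) -> str:
    """Proactive check run before hosting, not after a takedown order."""
    return "rejected" if is_blocked(content) else "accepted"

print(pre_publication_check(b"test"))      # rejected (matches dummy entry)
print(pre_publication_check(b"harmless"))  # accepted
```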
The scope of prohibited content expressly includes non-consensual material, child sexual abuse material, content relating to forged or false documents, impersonation and content involving explosives or other serious offences.
Notably, the rules clarify that AI-generated or synthetically altered material will be treated on par with other forms of information when determining illegality under the IT Rules, eliminating any ambiguity around differential standards for AI-based content.
A key feature of the amendment is the formal introduction of a definition for “synthetically generated information.”
The rules define such content as material that is created or altered using artificial intelligence in a manner that appears authentic or realistic, while expressly excluding routine editing, accessibility enhancements, and bona fide educational or design-related modifications.
This distinction seeks to balance regulatory oversight with legitimate creative and functional uses of AI tools.
The notification also introduces mandatory labelling requirements. Platforms that enable the creation or sharing of AI-generated or synthetic content must ensure that such content is clearly and prominently labelled to inform users of its artificial nature. Where technically feasible, intermediaries must embed permanent metadata, identifiers, or other provenance markers within AI-generated content.
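As a rough illustration of what an embedded provenance marker could look like, the sketch below writes labels into a PNG file's metadata using the Pillow library. The field names (ai-generated, generator) are hypothetical; real deployments would more likely use a cryptographically signed standard such as C2PA content credentials rather than bare text chunks.

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# Placeholder image standing in for AI-generated output.
img = Image.new("RGB", (64, 64), color="grey")

# Hypothetical provenance fields; actual schemas (e.g., C2PA) are richer
# and signed so the markers cannot be silently stripped or forged.
meta = PngInfo()
meta.add_text("ai-generated", "true")
meta.add_text("generator", "example-model-v1")
img.save("labelled.png", pnginfo=meta)

# Downstream readers can inspect the embedded markers.
with Image.open("labelled.png") as f:
    print(f.text)  # {'ai-generated': 'true', 'generator': 'example-model-v1'}
```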
The three-hour takedown requirement applies once a lawful order is issued by a court or a competent authority, replacing the earlier 36-hour compliance window in recognition of the heightened risks such content poses.
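In concrete terms, the compliance clock runs from receipt of the order. A minimal, purely illustrative deadline calculation (the helper name takedown_deadline is hypothetical):

```python
from datetime import datetime, timedelta, timezone

IST = timezone(timedelta(hours=5, minutes=30))  # Indian Standard Time

def takedown_deadline(order_received: datetime) -> datetime:
    """Deadline is three hours from receipt of a lawful order."""
    return order_received + timedelta(hours=3)

received = datetime(2026, 2, 20, 10, 0, tzinfo=IST)
print(takedown_deadline(received).isoformat())  # 2026-02-20T13:00:00+05:30
```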
The compressed timeline underscores the Government’s concern about the rapid virality and real-world harm caused by deepfakes and other manipulated media. According to the regulatory intent reflected in the notification, delayed takedowns risk irreversible reputational damage, electoral interference, financial fraud, and public disorder, necessitating swift enforcement.
The amended rules also reiterate intermediaries’ obligation to periodically notify users of prohibited conduct and the consequences of non-compliance.
Platforms are required to update their terms of service, privacy policies, and user agreements accordingly and to issue regular advisories warning against the misuse of AI tools.
Failure to comply with these due diligence obligations may result in intermediaries losing their statutory safe harbour protection under Section 79 of the Information Technology Act, 2000, exposing them to potential civil and criminal liability for third-party content.
Source: Press Trust of India
