India’s New IT Rules Force Platforms to Grow Up in the Age of AI

Update: 2026-02-15 05:41 GMT

With the amended Information Technology Rules coming into force on February 20, India has drawn a firm regulatory line in the sand for social media platforms and AI-driven services. AI-generated content will now be treated on par with other forms of objectionable material and must be taken down within two to three hours of receiving a government or court order. Platforms must also ensure mandatory labelling and traceability of AI-generated content, alongside active obligations to prevent the circulation of unlawful deepfakes.
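To make the compliance timeline concrete, the sketch below shows one way a platform might model an incoming takedown order and compute its response deadline. It is a minimal illustration only: the Rules prescribe the outcome, not any particular data model, and the class, field names, and three-hour constant are assumptions for the example rather than terms taken from the text of the amendments.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# Illustrative only: the Rules specify takedown within two to three hours of a
# government or court order; the upper bound is assumed here for the example.
TAKEDOWN_SLA = timedelta(hours=3)

@dataclass
class TakedownOrder:
    order_id: str
    issued_by: str          # e.g. "court" or "authorised government agency"
    content_url: str
    received_at: datetime   # when the platform received the order

    @property
    def deadline(self) -> datetime:
        """Latest time by which the content must be removed or disabled."""
        return self.received_at + TAKEDOWN_SLA

    def is_overdue(self, now: datetime | None = None) -> bool:
        return (now or datetime.now(timezone.utc)) > self.deadline


order = TakedownOrder(
    order_id="GOV-2026-0142",
    issued_by="court",
    content_url="https://example.com/post/123",
    received_at=datetime(2026, 2, 20, 9, 0, tzinfo=timezone.utc),
)
print(order.deadline)  # 2026-02-20 12:00:00+00:00
print(order.is_overdue(datetime(2026, 2, 20, 11, 30, tzinfo=timezone.utc)))  # False
```

In practice, a deadline tracker of this kind would feed escalation and audit workflows rather than sit in isolation, but even this simple model shows how tight the window is once an order lands.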

Taken together, these changes represent one of the most significant recalibrations of intermediary liability in India’s digital governance framework. The amendments make it clear that the state no longer views platforms as neutral conduits of information. Instead, they are expected to function as active gatekeepers in an online ecosystem increasingly shaped by synthetic media, automation, and virality.

From a policy perspective, India’s approach mirrors a broader global trend. Jurisdictions such as China, the United Kingdom, and the European Union have already imposed obligations on platforms to identify and label AI-generated content. What distinguishes India’s framework, however, is the speed and decisiveness of its compliance expectations. With response timelines running into hours rather than days, platforms are being pushed to fundamentally rethink their internal response systems, product design, and governance structures.

According to Sudhir Mishra, Managing Partner at TrustLegal, the amendments signal a marked shift in how the law conceptualises the role of intermediaries.

“AI-generated content is now being treated on par with other objectionable content, with strict takedown timelines following government or court orders,” he says. “The obligation to label AI-generated content has been clearly placed on platforms, reflecting the government’s tightening control over misinformation and harmful synthetic media.”

Mishra points out that this shift is not merely procedural. “From the point of view of a legal professional, the amendment shows a clear movement towards greater accountability and active regulation of digital platforms. Through mandatory labelling, traceability, and obligations to prevent illegal deepfakes, the law aims to counter the growing threat of misinformation. Intermediaries are being reminded that they cannot continue to operate as passive hosts in the digital space.”

This repositioning of platform responsibility sits at the heart of the amendments. While safe harbour protections under the IT Act remain intact in principle, the threshold for claiming them has been substantially raised. Due diligence is no longer a box-ticking exercise. It is now tied to technological capability, speed of response, and the ability to proactively identify and address AI-driven harms.

Petal Chandhok, Partner at TrustLegal, views the amendments as a decisive policy turn rather than an incremental regulatory update. “The amended rules reflect a clear shift towards greater accountability and active regulation of digital platforms,” she says. 

“By mandating labelling and traceability of AI-generated content and placing affirmative obligations to prevent unlawful deepfakes, the government is responding to the increasing risks posed by synthetic media.”

Chandhok emphasises that the intent behind the amendments is grounded in user protection and public trust. “These changes empower users and reinforce the idea that intermediaries must play a more responsible role in the digital ecosystem. From a regulatory standpoint, enhanced due diligence and verification requirements are steps in the right direction towards transparency and trust.”

At the same time, she flags the practical and normative risks that accompany heightened compliance burdens. “The obligations imposed on platforms are significant, both in terms of technology and institutional capacity. There is a need for careful regulation to ensure that these requirements do not result in excessive censorship or defensive takedowns,” she cautions.

The concern around over-moderation is amplified by the short compliance timelines introduced by the amendments. When platforms are required to act within hours, the margin for contextual evaluation narrows. In such scenarios, automated enforcement and conservative content moderation practices may become the default, potentially affecting legitimate speech.

Another dimension of the amendments that Chandhok highlights is implementation readiness. “The effectiveness of these provisions will ultimately depend on how they are applied in practice. Social media companies and individuals operating platforms will need to invest in training and internal preparedness to meet the new legal standards,” she says. She also underscores the importance of user awareness, noting that grievance redressal mechanisms must be made accessible and visible. “Raising awareness about grievance redressal has already become a critical exercise after the Digital Personal Data Protection framework came into effect, and it will be equally important under the amended IT Rules.”

While TrustLegal’s partners focus on accountability and user protection, Supratim Chakraborty, Partner at Khaitan & Co., examines the regulatory design and structural implications more closely. He describes the 2026 amendments as “a calibrated attempt to address the risks posed by synthetically generated content.”

“By formally defining synthetically generated information, mandating labelling and provenance measures, and tightening response timelines, the government is signalling that platform governance must evolve alongside technological capability,” Chakraborty says. “This is one of the first instances in India where AI-generated content is being directly addressed within a binding regulatory framework.”

Notably, Chakraborty points out that the rules stop short of regulating AI systems themselves. “The amendments do not regulate AI models or systems per se. Instead, they regulate AI outputs at the distribution layer. This is a pragmatic approach in the absence of a standalone AI law, particularly when the most immediate harms arise from the dissemination of synthetic content rather than its creation.”

This distribution-centric framework allows the government to address issues such as deepfakes, impersonation, and digitally amplified misinformation without venturing into complex questions around model training, datasets, or algorithmic governance. It also preserves safe harbour protections for intermediaries that can demonstrate reasonable and timely compliance.

However, Chakraborty notes that the operational implications are far-reaching. “Platforms will need to embed metadata and labelling tools, deploy automated detection mechanisms, and recalibrate grievance and takedown workflows within compressed timelines. The short transition window adds to the compliance challenge.”
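As a rough illustration of what “embedding metadata and labelling tools” could look like in practice, the sketch below tags an uploaded item with a synthetic-content label and a simple provenance record. The field names, the label string, and the declaration flag are assumptions made for the example; they are not drawn from the Rules or from any specific provenance standard, and a real platform would obtain the AI-generated signal from user declarations or automated detection.

```python
import hashlib
import json
from datetime import datetime, timezone

def label_synthetic_content(payload: bytes, declared_ai_generated: bool,
                            generator: str | None = None) -> dict:
    """Attach an illustrative provenance/label record to an uploaded item.

    `declared_ai_generated` stands in for a user declaration or a detector's
    verdict; how a platform actually derives it is out of scope here.
    """
    return {
        "content_sha256": hashlib.sha256(payload).hexdigest(),
        "ai_generated": declared_ai_generated,
        "label": "synthetically generated information" if declared_ai_generated else None,
        "generator": generator,  # e.g. the tool that produced the media, if known
        "labelled_at": datetime.now(timezone.utc).isoformat(),
    }

meta = label_synthetic_content(b"<video bytes>", declared_ai_generated=True,
                               generator="unknown-genai-tool")
print(json.dumps(meta, indent=2))
```

The design choice worth noting is that the label travels with a hash of the content, so downstream surfaces can display or verify it even after the item is reshared.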

He also raises interpretive concerns around the scope of the obligations. “The rules extend to intermediaries that offer a computer resource enabling synthetic content. This could raise questions about applicability for different categories of AI service providers,” he explains. “With principle-based standards such as reasonable and appropriate technical measures, much will depend on how regulators interpret these provisions and how platforms operationalise compliance without slipping into defensive over-moderation.”

What emerges from these perspectives is a shared recognition that the amendments fundamentally alter how platforms must think about governance. Compliance is no longer confined to legal teams or post-facto moderation. It is becoming a core product, engineering, and organisational function.

The amendments also reflect a broader philosophical shift in digital regulation. As AI-generated content becomes easier to produce and harder to distinguish, the cost of inaction grows. India’s new IT Rules make it clear that speed, scale, and automation cannot come at the expense of accountability.

Whether the framework ultimately strengthens trust without chilling speech will depend on enforcement, regulatory interpretation, and the willingness of platforms to invest in responsible design. What is certain is that the age of plausible deniability for AI-driven harm is coming to an end. Platforms operating in India are being asked not just to host content, but to govern it.
