The Union government has notified amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, significantly altering the regulatory framework governing online content in India. Issued under Section 87 of the Information Technology Act, 2000, and effective from February 20, 2026, the amendments introduce a formal definition of “synthetically generated information,” impose mandatory labelling and metadata requirements for certain AI-generated content, and sharply reduce takedown timelines for intermediaries.
Beyond addressing deep fakes and non-consensual synthetic imagery, the notification also restructures executive takedown authority and conditions safe harbour protection more explicitly on active compliance. The three-hour removal window for court orders and authorised government intimations marks a substantial shift from the earlier 36-hour framework. While the amendments respond to documented harms arising from AI misuse, they also expand administrative discretion and increase compliance pressure on platforms—raising important questions about proportionality, due process, and the risk of over-removal.
A close reading of the Gazette text suggests that the impact of these changes will depend not only on their stated objectives, but on how the enhanced takedown powers and compressed timelines are exercised in practice.
Formal definition of “synthetically generated information”
For the first time, the Rules define “synthetically generated information” as audio, visual or audio-visual content that is artificially or algorithmically created, generated, modified or altered using a computer resource in a manner that appears real and depicts a person or event likely to be perceived as authentic.
The definition is focused on deception and perceptual realism. Routine editing, accessibility tools, formatting, transcription and good-faith technical corrections are excluded, provided they do not materially alter the meaning of content.
Compared to the draft released in October 2025, the final rules narrow the scope. As reported by Mint, the government dropped the proposal to watermark 10% of general online content and confined labelling requirements to content that materially misrepresents persons or events.
This narrowing addresses some industry concerns about overbreadth. However, the definition remains perception-based and, depending on interpretation, could capture satire, parody or political commentary.
Mandatory labelling and metadata requirements
Intermediaries that enable the creation or dissemination of synthetic content must:
- Ensure clear labelling of such content;
- Provide audio disclosures where applicable;
- Embed permanent metadata or provenance markers;
- Prevent removal or suppression of such markers.
These requirements are framed as transparency obligations. The objective appears to be traceability and user awareness.
However, two concerns arise:
- Technical feasibility and cross-platform interoperability — not all platforms may be able to uniformly embed and preserve provenance markers, particularly where content travels across services.
- Privacy and surveillance implications — embedding permanent identifiers may allow tracking beyond immediate moderation needs.
The Rules state that metadata must be embedded “to the extent technically feasible,” but no standards are specified. This leaves compliance interpretation to executive discretion.
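The Rules are silent on what such a marker should actually look like. Purely as an illustration of the compliance problem, and not anything the Gazette prescribes, the sketch below shows one minimal approach: a detached JSON manifest recording a content hash, a synthetic-content flag and a keyed integrity tag. The field names, signing key and workflow are hypothetical.

```python
# Illustrative sketch only: the Rules do not prescribe any format or standard
# for provenance markers. This shows one possible approach, a detached JSON
# manifest with a content hash, a synthetic-content flag and an HMAC tag so
# that later tampering with the manifest can be detected. Field names are
# hypothetical.

import hashlib
import hmac
import json

def build_provenance_manifest(media_bytes: bytes, generator: str,
                              signing_key: bytes) -> str:
    """Return a JSON manifest describing a synthetically generated media file."""
    record = {
        "synthetically_generated": True,   # hypothetical disclosure flag
        "generator": generator,            # tool or model that produced the content
        "sha256": hashlib.sha256(media_bytes).hexdigest(),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["hmac_sha256"] = hmac.new(signing_key, payload, hashlib.sha256).hexdigest()
    return json.dumps(record, indent=2)

def verify_manifest(media_bytes: bytes, manifest_json: str,
                    signing_key: bytes) -> bool:
    """Check that the manifest matches the media and has not been altered."""
    record = json.loads(manifest_json)
    tag = record.pop("hmac_sha256", "")
    payload = json.dumps(record, sort_keys=True).encode()
    expected = hmac.new(signing_key, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(tag, expected)
            and record.get("sha256") == hashlib.sha256(media_bytes).hexdigest())

if __name__ == "__main__":
    key = b"platform-secret-key"            # placeholder key for the sketch
    media = b"\x00fake-video-bytes\x00"     # placeholder media content
    manifest = build_provenance_manifest(media, "example-image-model", key)
    print(manifest)
    print("verified:", verify_manifest(media, manifest, key))
```

Even this minimal scheme illustrates the interoperability concern noted above: a detached manifest is trivially separated from the file once it leaves the originating platform, which is exactly the "removal or suppression" problem the Rules seek to prohibit. Provenance standards such as C2PA address this by embedding signed manifests within the media file itself, but the Rules do not reference any particular standard.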
Prohibition and automated safeguards
The Rules require intermediaries to deploy reasonable and appropriate technical measures, including automated safeguards, to prevent the generation or dissemination of unlawful synthetic content; the Gazette does not mandate any specific technological solution. These safeguards must address synthetic content involving:
- Child sexual abuse material;
- Non-consensual intimate imagery;
- False electronic records;
- Impersonation;
- Obscenity;
- Content relating to explosives, arms or ammunition;
- Deceptive portrayal of individuals or events.
The inclusion of non-consensual deep fake pornography is a significant development, given the documented increase in such cases.
However, mandating automated safeguards raises operational and rights-based concerns. Automated detection systems are prone to error, bias and over-blocking, especially in politically sensitive contexts. Without procedural safeguards or transparent appeal mechanisms, erroneous removals may be difficult to contest.
User declaration and verification obligations
Significant Social Media Intermediaries (SSMIs) must:
- Obtain a declaration from users stating whether content is synthetically generated;
- Deploy technical tools to verify such declarations;
- Ensure prominent disclosure if content is confirmed synthetic.
Failure to act may result in loss of safe harbour protection under Section 79 of the IT Act, which is conditional on due diligence. This requirement effectively shifts platforms from reactive moderation to proactive verification. The obligation to verify user declarations may require AI-based detection systems, increasing reliance on automated moderation.
Given the three-hour takedown window (discussed below), platforms may choose conservative enforcement strategies, increasing the likelihood of over-removal.
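The Rules do not say how a declaration is to be "verified". The sketch below is a hypothetical illustration of the dynamic described above: a platform combines the user's declaration with a score from an automated detector (stubbed out here), and the threshold it chooses determines how aggressively borderline content is flagged. The names, thresholds and decision categories are assumptions for illustration only.

```python
# Illustrative sketch only: the Rules do not specify how declarations are to
# be verified. This shows, schematically, how a platform might combine a
# user's declaration with an automated detector score, and why a tight
# compliance window pushes the threshold downward. Names and thresholds are
# hypothetical; the detector itself is not implemented.

from dataclasses import dataclass

@dataclass
class UploadReview:
    user_declared_synthetic: bool   # declaration collected at upload time
    detector_score: float           # 0.0 to 1.0 likelihood from an automated classifier

def moderation_decision(review: UploadReview, threshold: float = 0.5) -> str:
    """Return 'label', 'escalate' or 'allow' for a single upload."""
    if review.user_declared_synthetic:
        return "label"                       # declared synthetic: label and disclose
    if review.detector_score >= threshold:
        return "escalate"                    # undeclared but flagged: human review / label
    return "allow"

# A lower threshold flags more borderline content.
borderline = UploadReview(user_declared_synthetic=False, detector_score=0.4)
print(moderation_decision(borderline, threshold=0.5))  # allow
print(moderation_decision(borderline, threshold=0.3))  # escalate
```

Lowering the threshold is the "conservative enforcement" strategy referred to above: it reduces the risk of missing undeclared synthetic content within a compressed compliance window, at the cost of flagging more lawful material.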
Reduced takedown timelines
The most operationally significant change is the reduction of takedown timelines:
- Government or court orders: 36 hours → 3 hours
- Non-consensual intimate imagery: 24 hours → 2 hours
- Complaint resolution: 72 hours → 36 hours
- Grievance acknowledgement: 15 days → 7 days
The three-hour compliance window applies specifically to removal or disabling of access upon receipt of a court order or a written, reasoned intimation issued by an authorised government officer of the prescribed rank; it does not extend to ordinary user complaints. The reduction from 72 hours to 36 hours applies to specified unlawful content categories under the grievance redressal provisions. The government has argued, as reported by Mint, that platforms have the technical capacity to act within minutes and that government requests form a small proportion of total removals. However, experience from prior litigation suggests that concerns about misuse of takedown powers are not hypothetical.
In litigation before the Karnataka High Court, X (formerly Twitter) argued that government notices lacked adequate reasoning and were procedurally deficient. Although the High Court dismissed X’s plea, the case highlighted recurring issues regarding:
- Insufficiently reasoned takedown notices;
- Lack of transparency;
- Pressure to comply within tight deadlines.
As reported by Scroll, advocacy group Internet Freedom Foundation (IFF) has warned that compressed timelines, combined with expanded executive powers, may increase over-removal and chill lawful expression.
When liability risk is high and time is limited, platforms are likely to remove content first and review later.
Clarification of authorities empowered to issue takedown notices
The amendments specify that takedown directions may be issued:
- By a court of competent jurisdiction;
- By a government official not below the rank of Joint Secretary or Director through a written, reasoned intimation;
- By police officers not below Deputy Inspector General rank.
Notices must specify the legal basis, the statutory provision invoked, and the precise URL or electronic location of the content. A monthly review by an officer of Secretary rank is required to assess necessity and proportionality; however, this review is internal to the executive and does not create an independent or judicial oversight structure. The Rules do not require publication of review outcomes.
On paper, this introduces greater formalisation compared to the earlier reference to “appropriate government or its agency.”
However, two structural concerns remain:
- Executive dominance — court orders are not mandatory; executive officials retain independent takedown authority.
- Limited transparency — the Rules do not require publication of takedown statistics, redacted orders, or independent oversight.
As per Scroll, IFF has criticised the amendments as entrenching opacity and weakening procedural safeguards, particularly since they were notified without fresh public consultation.
Safe Harbour: Narrowed through due diligence
Safe harbour protection under Section 79 remains conditional upon compliance with due diligence obligations. The Rules clarify that removal of unlawful or synthetic content, including through automated means, will not by itself jeopardise immunity. However, immunity may be affected where an intermediary knowingly permits or fails to act upon prohibited content.
The effect is to condition immunity more tightly on active compliance.
In combination with compressed timelines, this may incentivise platforms to err on the side of removal in borderline cases.
The table below summarises the key changes introduced by the amendments.
| Issue Area | Earlier IT Rules, 2021 | Amended IT Rules, 2026 | Nature of Change |
| --- | --- | --- | --- |
| Recognition of Synthetic Content | No formal definition of AI-generated or synthetic content. | Formal definition of “synthetically generated information” covering AI-created/altered audio, visual and audio-visual content that appears real or authentic. | Introduces new legal category targeting deep fakes and AI impersonation. |
| Scope of Synthetic Content Regulation | Deep fakes regulated indirectly through general unlawful content provisions (defamation, obscenity, impersonation etc.). | Synthetic content expressly included within the definition of “information” for unlawful acts. | Clarifies that AI content is fully subject to IT Rules. |
| Exclusions | No AI-specific exclusions. | Explicit exclusions for routine editing, accessibility tools, formatting, academic material, good-faith technical corrections not materially altering content. | Narrows scope to deceptive synthetic content. |
| Mandatory Labelling of Synthetic Content | No specific requirement. | Platforms enabling synthetic content must ensure clear and prominent labelling (visual labels / audio disclosures). | New transparency obligation. |
| Metadata / Provenance Markers | No such requirement. | Mandatory embedding of permanent metadata or provenance markers, including unique identifiers (to extent technically feasible). | Introduces traceability requirement. |
| Removal of Labels by Users | No provision. | Intermediaries prohibited from allowing removal or suppression of synthetic content labels/metadata. | Prevents circumvention. |
| User Declaration (SSMIs) | No such requirement. | Significant Social Media Intermediaries must obtain user declaration whether content is synthetic. | Introduces proactive compliance duty. |
| Verification of Declaration | Not applicable. | Platforms must deploy technical tools to verify user declarations. | Shifts from passive hosting to verification model. |
| Automated Safeguards | General obligation to exercise due diligence. | Intermediaries must deploy reasonable and appropriate technical measures to prevent unlawful synthetic content. No specific technology mandated. | Introduces AI-focused compliance obligation with flexibility in implementation. |
| Categories of Prohibited Synthetic Content | Covered under general unlawful content provisions. | Explicit reference to child sexual abuse material, non-consensual intimate imagery (including deep fakes), false electronic records, impersonation, obscenity, explosives/arms-related content, deceptive portrayals. | Specific targeting of deep fake harms. |
| Takedown Timeline – Government or Court Orders | 36 hours from receipt of court order or government notification. | 3 hours from receipt of a court order or a written, reasoned intimation issued by authorised government officer (JS/Director rank or above; DIG for police). | Significant reduction in compliance window; applies specifically to formal orders/intimations. |
| Takedown Timeline – Non-consensual Intimate Imagery | 24 hours. | 2 hours. | Accelerated victim protection timeline. |
| General Complaint Resolution Timeline | 72 hours in specified cases. | 36 hours for certain unlawful content complaints (where specified in the Rules). Not all user complaints trigger the 3-hour rule. | Reduced grievance resolution timeline; 3-hour window does NOT apply universally to user reports. |
| Grievance Acknowledgement Timeline | 15 days. | 7 days. | Reduced acknowledgement timeline. |
| Authority to Issue Takedown Orders | “Appropriate government or its agency.” | Court of competent jurisdiction; government official not below Joint Secretary/Director; police officer not below DIG rank. | Clarifies rank and authority threshold. |
| Form of Takedown Notice | Not expressly detailed in Rules. | Must be reasoned, in writing, specify legal basis, statutory provision, precise URL/identifier. | Introduces formalisation requirement. |
| Review Mechanism | Limited structured review in Rules. | Monthly internal review by officer not below Secretary rank to assess necessity and proportionality. No requirement of public disclosure or independent oversight. | Adds executive-level review; not judicial or independent. |
| Safe Harbour (Section 79) | Immunity subject to due diligence compliance. | Removal of unlawful or synthetic content (including via automated tools) will not affect safe harbour, provided due diligence obligations are met. Safe harbour may be lost where intermediary knowingly permits or fails to act upon prohibited synthetic content. | Clarifies immunity but conditions it more tightly on active compliance. |
| User Awareness Obligations | Annual communication of policies. | Users must be informed at least once every three months about prohibited content, consequences, privacy and grievance redress. | Increases frequency of disclosure. |
| Criminal Law References | Indian Penal Code referenced. | References updated to Bharatiya Nyaya Sanhita, 2023. | Alignment with new criminal code. |
| Watermarking Proposal (Draft Stage) | Draft proposed watermarking up to 10% of online content. | Final notification removed this requirement; narrowed labelling to deceptive synthetic content. | Significant dilution from draft proposal. |
| Compliance Window for Intermediaries After Notification | Not applicable. | 10-day window before rules come into force (20 February 2026). | Short transition period. |
Risk of overbreadth and chilling effects
The amendments aim to address genuine harms — including deep fake pornography, impersonation scams and misinformation.
However, the regulatory design raises concerns:
- Short compliance windows reduce scope for contextual evaluation.
- Automated safeguards may suppress lawful content, including satire or political critique.
- Executive takedown authority remains broad, with limited independent review.
- Procedural safeguards are internal rather than judicial.
India has previously witnessed allegations of overbroad or insufficiently reasoned takedown orders. In the absence of transparency requirements or appeal mechanisms within the Rules themselves, concerns about misuse persist.
Constitutional implications
Any restriction on the freedom of speech guaranteed by Article 19(1)(a) must satisfy the tests of reasonableness and proportionality under Article 19(2).
The amendments pursue legitimate aims — protection against deception, exploitation and harm. However, proportionality requires that restrictions be narrowly tailored and accompanied by adequate safeguards.
Key questions that may arise in future litigation include:
- Whether a three-hour takedown window is proportionate in all categories of speech;
- Whether executive-issued takedown notices provide sufficient procedural fairness;
- Whether automated moderation requirements lead to systematic over-removal;
- Whether metadata embedding raises privacy concerns.
Conclusion
The amended IT Rules represent a significant expansion of regulatory oversight over synthetic and AI-generated content. They respond to real harms, particularly non-consensual deep fakes and impersonation.
At the same time, the framework strengthens executive takedown powers, shortens compliance timelines, and conditions safe harbour more strictly on active intervention by intermediaries.
The complete rules may be accessed here.