India Moves to Label All AI Content: What the New Rules Mean for Users and Platforms
If you've spent any time on Indian social media lately, you've probably seen something that made you pause and wonder: "Is this real?" A celebrity endorsing a sketchy investment scheme. A politician making inflammatory remarks. A finance minister announcing a too-good-to-be-true crypto partnership. Welcome to the deepfake era, and India just decided it's had enough.
On October 22, 2025, the Ministry of Electronics and Information Technology (MeitY) unveiled draft amendments to the IT (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, requiring platforms to label all AI-generated content with permanent, visible identifiers. If you're creating, sharing, or scrolling through synthetic media, whether deepfakes, AI-altered videos, or algorithmically generated images, these rules will change how you interact with digital content in India.
The proposal isn't subtle. Visual AI content must carry a marker covering at least ten percent of the surface area; audio clips need an identifier audible during the first ten percent of playback. Think of it as a nutrition label for digital media: you'll know exactly what you're consuming.
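To see what those thresholds would mean in practice, here's a quick back-of-the-envelope sketch in Python. The helper names are illustrative, not taken from the draft text.

```python
# Rough arithmetic implied by the draft's proposed 10% thresholds.
# Function names here are illustrative, not from the draft itself.

def min_label_area(width_px: int, height_px: int) -> int:
    """Minimum visible label size: 10% of the content's surface area."""
    return int(width_px * height_px * 0.10)

def audio_disclosure_window(duration_s: float) -> float:
    """The audio identifier must be audible within the first 10% of playback."""
    return duration_s * 0.10

# A 1080x1920 vertical video needs a label of about 207,360 px^2,
# e.g., a full-width banner 192 px tall; a 60-second clip must place
# its identifier within the first 6 seconds.
print(min_label_area(1080, 1920))      # 207360
print(audio_disclosure_window(60.0))   # 6.0
```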
Why Now? The Deepfake Problem India Can't Ignore
India's timing isn't accidental. Deepfake fraud has become one of the country's most urgent cybersecurity threats, with projected losses of ₹70,000 crore by 2025 and a 550% increase in deepfake-related cybercrime cases since 2019.
The victims list reads like a who's who of Indian public life. In June 2025, scammers used deepfaked videos of Infosys co-founder N.R. Narayana Murthy to relieve a 79-year-old Bengaluru resident of ₹35 lakhs through a bogus trading site. Business tycoon Mukesh Ambani had to publicly deny a viral deepfake claiming he launched an AI trading app, and cricket legend Sachin Tendulkar was forced to clarify that a video showing him promoting an online gaming app was fabricated.
Bollywood hasn't been spared. During the 2024 general elections, deepfake videos of actors Aamir Khan and Ranveer Singh went viral, purportedly showing them criticizing Prime Minister Narendra Modi and endorsing the Congress party. Actors Abhishek Bachchan and Aishwarya Rai Bachchan recently petitioned a Delhi court to block AI-generated videos that allegedly infringed on their likeness and intellectual property.
MeitY explicitly cited the growing prevalence of generative AI and associated risks—including misinformation, impersonation, and election-related manipulation—as the reason for regulatory intervention.
What Platforms Must Do: The Technical Burden
Under the draft rules, platforms must label synthetically generated content, embed permanent metadata or unique identifiers, and verify user declarations regarding AI-generated material. "Synthetically generated information" is defined broadly: any content created, modified, or altered using algorithms in a way that makes it appear authentic.
Social media platforms that enable AI content creation must ensure information is prominently labeled or embedded with permanent unique metadata, and these identifiers must not be alterable, suppressible, or removable.
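What might that embedding look like? Here is a minimal sketch using Pillow to attach an identifier as PNG metadata; the key names are invented for illustration. A real compliance system would need something far more robust, such as C2PA-style content credentials or pixel-level watermarks, because plain metadata chunks don't survive a re-encode (more on that later).

```python
# A minimal sketch: tag a generated image with a provenance identifier
# stored in PNG text chunks. Key names ("ai_generated", "provenance_id")
# are illustrative, not mandated by the draft rules.
import uuid

from PIL import Image
from PIL.PngImagePlugin import PngInfo

def tag_as_synthetic(src: str, dst: str) -> str:
    identifier = str(uuid.uuid4())            # one unique ID per asset
    meta = PngInfo()
    meta.add_text("ai_generated", "true")
    meta.add_text("provenance_id", identifier)
    Image.open(src).save(dst, pnginfo=meta)   # write image with metadata attached
    return identifier

def read_tags(path: str) -> dict:
    return Image.open(path).text              # PNG text chunks, if any

# Hypothetical usage:
# identifier = tag_as_synthetic("render.png", "render_tagged.png")
# print(read_tags("render_tagged.png"))
```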
The rules hit hardest at Significant Social Media Intermediaries—platforms with more than five million registered users in India, such as Facebook, YouTube, Instagram, and X. These giants face additional obligations: they must actively seek user declarations about whether uploaded content is AI-generated and deploy automated tools to verify those claims.
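What could those automated checks look like? Here's one plausible shape for the declare-and-verify flow at upload time, sketched in Python. detect_synthetic() is a stand-in for whatever classifier or watermark detector a platform actually deploys, and the thresholds are invented for illustration.

```python
# Sketch of a declare-then-verify pipeline an SSMI might run at upload time.
from dataclasses import dataclass

@dataclass
class Upload:
    content_id: str
    user_declared_ai: bool   # the declaration the draft requires platforms to collect

def detect_synthetic(content_id: str) -> float:
    """Stub: return a 0..1 'likely AI-generated' score from a real detector."""
    return 0.0               # wire up an actual classifier or watermark check here

def review_upload(upload: Upload, threshold: float = 0.9) -> str:
    score = detect_synthetic(upload.content_id)
    if upload.user_declared_ai:
        return "label"       # honor the declaration: visible marker plus metadata
    if score >= threshold:
        return "escalate"    # possible false declaration: route to human review
    return "publish"         # treat as authentic
```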
If a platform knowingly allows unlabeled or falsely declared AI-generated content, it will be deemed to have failed to exercise due diligence under the IT Act. That's more than a compliance headache: it means losing the legal shield known as Section 79 safe harbor protection.
Safe Harbor at Stake: What Platforms Risk
Here's where things get serious. Section 79 of the IT Act grants intermediaries conditional exemption from liability for third-party content, provided they observe due diligence requirements. Fail to comply with the new labeling rules, and platforms could be held directly liable for any harmful AI content on their service.
However, platforms that remove or limit access to AI-generated content in response to valid complaints, or that make reasonable efforts to comply, retain that protection under the draft amendments. It's a carrot-and-stick approach: label properly and act responsibly, or face the music.
For Users: What Changes When You Post
If you're using AI tools—Midjourney for a creative project, ChatGPT to generate images, or any deepfake app—get ready for new friction. Platforms will require users to declare whether their uploaded content is AI-generated, and those declarations must be prominently displayed.
Expect pop-up prompts before you post. Expect visible labels that you can't remove. And if you're caught misrepresenting AI content as real, platforms will have both the technical tools and regulatory pressure to call you out.
India vs. the World: Who's Doing What on AI Labels
India's 10 percent visibility standard represents one of the first explicit attempts globally to prescribe a quantifiable threshold for AI content markers. Compare that to other jurisdictions:
The European Union's AI Act, whose transparency obligations phase in through 2026, mandates that AI-generated content be labeled in machine-readable formats with detectable markers, with fines of up to €15 million or 3% of global annual turnover for non-compliance. China's labeling measures, published in March 2025, require all online services creating or distributing AI-generated content to clearly label it, and ban watermark-removal tools outright. In the United States, California's AI Transparency Act requires generative AI providers with over one million monthly users to disclose AI-generated content starting January 2026.
At the Munich Security Conference in February 2024, major AI players including Google, Meta, Microsoft, OpenAI, and Anthropic committed to the Tech Accord to Combat Deceptive Use of AI in elections, pledging to deploy watermarking technologies. India's proposal aligns with this global momentum but goes further with specific, enforceable visibility thresholds.
The Compliance Clock: What Happens Next
MeitY has invited stakeholder feedback on the draft amendments until November 6, 2025. Tech companies, civil society groups, and the public can submit comments via email to itrules.consultation@meity.gov.in.
Mahesh Makhija, Partner and Technology Consulting Leader at EY India, called the proposed rules "a clear step toward ensuring authenticity in digital content," adding that labeling and non-removable identifiers will help users distinguish real content from synthetic, serving as the foundation for responsible AI adoption.
Once finalized, the rules will require significant technical investment. Platforms may need to build automated labeling systems capable of identifying and marking AI-generated content at the point of creation. For global AI companies operating in India—where cumulative and new AI investment commitments surpassed $20 billion in 2025—compliance costs will be substantial but unavoidable.
What Could Go Wrong: The Challenges Nobody's Talking About
Mandatory labeling sounds great in theory. In practice, it's messy.
First, detection isn't foolproof. Research has shown that it is easy to tamper with or remove watermarks in images, while reliably watermarking text may not even be possible. Sophisticated actors can strip metadata, manipulate pixels, or use generation techniques that evade automated detection.
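As an illustration of how fragile metadata-only labels are, consider what a bare re-save does to the PNG tag from the earlier hypothetical sketch (filenames carried over from that example):

```python
# A plain re-encode silently drops PNG text chunks: the label vanishes
# without any deliberate tampering at all.
from PIL import Image

Image.open("render_tagged.png").save("render_copy.png")   # no pnginfo passed
print(Image.open("render_copy.png").text)                 # {} : provenance gone
```

Pixel-level watermarks survive this particular failure, but even those can be degraded by cropping, compression, or adversarial noise.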
Second, there's the risk of over-labeling. If every minor AI edit (think an Instagram filter powered by machine learning) requires a prominent label, users will experience notification fatigue. The labels become wallpaper, ignored and ineffective.
Third, there's the global interoperability question. The lack of standardization in AI watermarking technologies means that a watermark created by one system may not be readable by another. India's rules will work only if international platforms adopt compatible systems.
Fourth, enforcement. India has 886 million active internet users and counting. Monitoring compliance across thousands of platforms, millions of creators, and billions of pieces of content will strain regulatory capacity. Who verifies the verifiers? Who audits the automated tools? These questions remain unanswered.
The Real Cost: What Indian Users Will Actually Experience
For everyday users, the rules mean clearer signals about what's real and what's synthetic. That's the optimistic take. The pessimistic take? Labels become noise, criminals find workarounds, and users grow more skeptical of everything—labeled or not.
According to the IAMAI-Kantar Internet in India Report 2025, active internet users reached 886 million, an 8% year-on-year rise, and will surpass 900 million by the end of the year. That's a massive, diverse user base with varying levels of digital literacy. For many, what appears on a screen still carries the aura of truth. Labels help, but they're not magic.
Platforms will pass compliance costs down in subtle ways: slower content uploads as automated checks run, more intrusive pop-ups asking for declarations, and possibly restricted access to certain AI features for Indian users if platforms decide compliance is too burdensome.
What This Means for Creators and Businesses
If you're a content creator using AI tools for legitimate purposes—generating backgrounds, enhancing audio, creating illustrations—these rules add friction but also offer protection. A properly labeled AI-generated image can't be weaponized against you as easily. You're covered.
For businesses, particularly those in marketing, media, and entertainment, the rules demand process changes. Every campaign asset touched by AI must be labeled. Every social post using generative tools must carry identifiers. Agencies will need new workflows, compliance checklists, and possibly legal reviews before hitting "publish."
E-commerce platforms, news aggregators, and user-generated content sites face the toughest road. They'll need to balance user experience with regulatory compliance, all while ensuring their automated systems don't mislabel content or create false positives that anger users.
The clock is ticking. Once these rules are finalized—likely by late November or early December 2025—platforms will have limited time to build and deploy compliant systems. Global tech giants have resources; smaller Indian startups may struggle.
MeitY emphasized that the amendments aim to maintain "an open, safe, trusted, and accountable Internet" while balancing free expression and innovation. That balance is the hard part. Over-regulate, and you stifle the very innovation India wants to encourage. Under-regulate, and the deepfake crisis worsens.
India's proposal is bold, specific, and—if enforced—potentially effective. The 10% visibility threshold gives platforms clear targets. The permanent metadata requirement creates technical accountability. The safe harbor consequences provide teeth.
But success depends on execution. Can regulators build the enforcement infrastructure? Can platforms develop reliable detection tools? Can users learn to trust labels in a world where trust itself is under siege?
The next few months will be crucial. Public feedback closes November 6. Watch for revised drafts in December. And if you're on any platform with over five million Indian users, expect to see those AI labels sooner than you think.