
India Cracks Down on Deepfakes: Social Media Told to Label AI Content and Remove Harmful Posts in 3 Hours

The Centre has rolled out stricter digital rules requiring platforms to tag AI-generated content, verify user uploads, and take down harmful material at record speed 

10-02-2026

The Indian government has introduced tighter regulations to curb the spread of AI-generated misinformation and deepfake content, placing new obligations on social media companies to act swiftly and responsibly.

Under updated Information Technology Rules, digital platforms must clearly mark content that has been created or altered using artificial intelligence. Users will also be required to declare whether their uploaded material involves AI tools, while platforms will be responsible for verifying these claims through technical checks.

One of the most significant changes is the sharp reduction in takedown timelines. In specific high-risk cases, platforms must now remove illegal or harmful content within just three hours — a major reduction from the earlier 36-hour window. Other response deadlines have also been shortened to speed up action against misleading or unlawful posts.

The new framework, formally notified on February 10 and set to take effect from February 20, expands oversight to cover what the government terms “synthetically generated information.” This includes AI-created or AI-edited videos, images, and audio that may appear authentic and mislead viewers.

To improve transparency and traceability, platforms will be required to embed technical identifiers or metadata in AI-generated material wherever possible. These markers are intended to help track the origin of synthetic content and prevent tampering or removal of disclosure labels.
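To illustrate the idea behind such markers, here is a minimal sketch of how a platform might attach a tamper-evident disclosure record to a piece of synthetic media. This is purely illustrative: the field names, the use of a SHA-256 content hash, and the sidecar-record approach are assumptions for the example, not requirements taken from the actual rules (which do not prescribe a specific format).

```python
import hashlib

def make_provenance_record(content: bytes, tool: str) -> dict:
    """Hypothetical disclosure record for AI-generated media.

    Binds a disclosure label to the exact bytes of the content via a
    SHA-256 hash, so that editing the media invalidates the record.
    Field names here are illustrative, not drawn from the IT Rules.
    """
    return {
        "ai_generated": True,
        "generator": tool,
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }

def verify(content: bytes, record: dict) -> bool:
    """Check that the media still matches its disclosure record."""
    return hashlib.sha256(content).hexdigest() == record["content_sha256"]

# Usage: any alteration of the media bytes breaks verification,
# which is the property that makes removing or swapping labels detectable.
media = b"example synthetic image bytes"
record = make_provenance_record(media, "example-model")
assert verify(media, record)
assert not verify(media + b"edited", record)
```

Real-world schemes (for instance, manifests embedded directly in image or video files) are considerably more elaborate, but the core design choice is the same: tie the disclosure to a cryptographic fingerprint of the content rather than to a label that can be stripped without trace.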

The rules also place accountability on platforms to ensure that AI-driven content is not misused for criminal purposes such as impersonation, fraud, harassment, child exploitation, or the promotion of illegal activities involving weapons or explosives.

At the same time, the government has clarified that platforms complying with these regulations will continue to receive legal protection under existing safe harbour provisions, even when they use automated systems to detect and remove synthetic content.

Overall, the move reflects growing concern over the misuse of deepfakes and AI-powered media, as authorities push for greater transparency, faster enforcement, and stronger safeguards in India’s digital ecosystem.
