ECI warning on AI deepfakes highlights rising use during polls

Advisory flags urgent need for technology checks, voter awareness

Politics

November 3, 2025

New Delhi

(Image caption: This deepfake video of Ranveer Singh from his Varanasi visit was circulated by a social media user with the X handle @sujataindia1st)

The Election Commission’s advisory warning political parties and digital platforms against AI-generated deepfakes ahead of polls underlines the growing abuse of AI during elections.

Last week, the Election Commission of India (ECI) issued a detailed advisory to all national and state-recognised political parties on the use of Artificial Intelligence (AI)-generated and synthetic content in election campaigns.

The move comes amid growing concern that deepfakes (hyper-realistic manipulated videos, audio and images) are being used to mislead voters and distort democratic discourse, especially on social media during election seasons.

The advisory directs parties to clearly label any AI-generated material in their campaign content. For videos or images, the label must cover at least 10 pc of the visual area. For audio clips, it should appear within the first 10 pc of the duration.

Political parties are also required to maintain detailed internal logs of all AI-generated or digitally enhanced materials, noting the source, timestamp and creator identity.

If any misleading or unauthenticated synthetic content surfaces from official channels, it must be taken down within three hours of discovery or complaint. This three-hour limit mirrors a similar mandate on unverified content issued by the ECI during the 2024 general elections, which saw heightened misuse of generative AI tools.

According to Internet and Mobile Association of India (IAMAI) data, over 820 million Indians were online in 2024, and nearly 500 million used social media actively. This reach made India particularly vulnerable during the 2024 general elections, when several manipulated videos resembling political leaders circulated widely across platforms like WhatsApp, X and Instagram.

“AI-generated deepfakes and synthetic content constitute a substantial and growing threat to the fairness of elections in India, with evidence from the 2024 general election showing that over 75 pc of Indian voters encountered political deepfakes. This eroded voter trust in digital election content and undermined the level playing field for candidates,” Neela Ganguly, a political analyst based in Chennai, tells Media India Group.

The ECI’s advisory warns that deepfakes can “impersonate or misrepresent political candidates, create false associations, amplify religious or caste tensions and circulate divisive narratives,” all of which directly challenge the constitutional principle of free and fair elections.

The ECI currently acts under the authority of Article 324 of the Constitution and Section 126 of the Representation of the People Act, 1951, to ensure fair campaigning and regulate content during election periods. However, detection of AI-generated material presents a technological hurdle.

“Hyper-realistic AI-generated audio and video can be nearly indistinguishable from genuine recordings. Even advanced detection tools show gaps in reliability and speed. Regulators often must depend on self-reporting by political parties or alerts from fact-checkers, rather than proactive enforcement,” Ganguly adds.

The rapid spread of local-language content compounds the challenge. As per Ministry of Electronics and Information Technology (MeitY) data, India had more than 850 million smartphone users in 2025, many engaging through short-form video apps where synthetic media can go viral within minutes.

Strategies under discussion include partnerships with IIT research labs and AI verification startups to build a unified deepfake detection repository that can be deployed during elections. The advisory also emphasises periodic digital audits of party accounts and independent monitoring cells to flag questionable content before it gains traction.

For social media intermediaries, the advisory echoes existing obligations under the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. Platforms are now required to implement automated systems capable of identifying and labelling AI-generated or synthetically modified posts uploaded by users.

Failure to comply may lead to suspension of their “safe harbour” protections, the legal immunity granted for third-party content. Platforms must also display conspicuous AI disclaimers on suspect content and cooperate with the ECI or law enforcement during takedown or user verification requests.

Political parties have been given similar responsibilities. They must ensure that all campaign material, whether print, broadcast, or digital, clearly declares AI involvement. The onus is on digital teams and campaign managers to verify content before dissemination.

Any external agency or consultant producing such material must be bound by written compliance contracts.

“Both political parties and social media companies play pivotal roles in curbing the misuse of AI-generated content during elections. Transparency, accountability, and a rapid response mechanism are essential to protect voter trust,” says the political analyst.

Public awareness and digital literacy

Regulation alone may not be enough. The ECI’s advisory also calls for voter education campaigns to build awareness of deepfakes and digital manipulation. This includes collaboration with the Press Information Bureau’s fact-check unit, Doordarshan and community radio networks to explain how voters can verify information sources and identify suspicious content.

According to the ECI’s field survey with the Centre for Media Studies (CMS), only 38 pc of rural voters in 2024 could correctly identify a doctored video when shown one, compared with 61 pc in urban areas. These findings underscore the digital literacy gap that synthetic media exploits.

“Education campaigns to raise voter awareness about deepfakes and digital literacy are essential complements to regulatory action. Rapid advancement in generative AI tools means such content will only get harder to detect. Continuous monitoring, updated laws and citizen awareness need to evolve together,” says Ganguly.

Experts note that the ECI’s advisory, though non-binding, is a crucial policy step before the upcoming state elections in Bihar, Assam, Kerala, Tamil Nadu and West Bengal, where digital campaigning is expected to intensify.

MeitY’s own data shows that generative AI use in India grew by 28 pc between 2023 and 2025, outpacing global averages, largely due to local-language tools and increased AI adoption by political consultancies.

The Commission has indicated that future directives could involve specific penalties or even restrictions on repeat offenders, possibly drawing from sections of the Indian Penal Code (IPC) related to impersonation and misinformation.

But whether mere warnings can restrain parties desperate to win elections remains to be seen, and the proof of the pudding may also lie in how quickly the ECI responds to complaints about the misuse of AI. Its recent track record in addressing complaints has been less than impressive.
