An AI-generated clip shows a beaming Narendra Modi, dressed in a stylish blazer and pants, dancing on a stage to a Bollywood tune as the audience applauds. Resharing the video on X, the Indian prime minister said, “such creativity in peak poll season is truly a delight.”
In another video, Mamata Banerjee, an opponent of Modi’s, dances in a saree-like outfit, set to audio from a speech in which she denounced those who left her party to join his. State police have opened an inquiry, saying the footage could “affect law and order.”
The contrasting responses to the two AI-generated videos illustrate the technology’s growing use and abuse, and the concerns it raises for security and regulatory agencies as the world’s most populous country holds a massive general election.
Even computer-literate viewers can occasionally be duped by simple AI videos, which render hand movements and shadows almost perfectly and are highly convincing. The stakes are higher in a nation where many of its 1.4 billion citizens lack technological know-how and where misinformation can readily incite sectarian strife, particularly during election season.
A World Economic Forum survey released in January found that misinformation is expected to pose a greater risk to India over the next two years than infectious diseases or illicit economic activity.
“India is already at a great risk of misinformation – with AI in picture, it can spread at the speed of 100X,” said New Delhi-based consultant Sagar Vishnoi, who is advising some political parties on AI use in India’s election.
“Elderly people, often not a tech savvy group, increasingly fall for fake narratives aided by AI videos. This could have serious consequences like triggering hatred against a community, caste or religion.”
The six-week general election, which concludes on June 1, 2024, is the first in India in which artificial intelligence (AI) has been deployed. Early instances were benign, limited to certain politicians using the technology to customize campaign audio and video.
However, significant instances of misuse made news in April: deepfakes of Bollywood actors disparaging Modi, and fabricated footage of two of Modi’s closest aides that led to nine people being taken into custody.
DIFFICULT TO COUNTER
Last week, the Election Commission of India forbade political parties from using AI to disseminate false information, pointing to seven provisions of existing statutes and information technology regulations that carry jail terms of up to three years for offenses such as forgery, promoting enmity and spreading rumors.
A top national security official in New Delhi said authorities are worried that fake news could spark violence. Such content can be easily produced with AI tools, especially around elections, and can be difficult to refute, the official said.
“We don’t have a (adequate monitoring) capacity…the ever evolving AI environment is difficult to keep track of,” the official added.
“We aren’t able to fully monitor social media, forget about controlling content,” said a senior election official. Both officials declined to be identified because they were not authorized to speak to the media.
AI and deepfakes are being used increasingly in elections around the world, including in the US, Pakistan and Indonesia. The most recent videos circulating in India illustrate the difficulties authorities face.
For years, a body under India’s IT ministry has had the authority to order content blocked for public order violations, either on its own initiative or in response to complaints. During this election, the nation’s poll watchdog and police have dispatched hundreds of personnel to identify and remove objectionable content.
While Modi responded lightheartedly to his AI dancing video, saying, “I also enjoyed seeing myself dance,” police in Kolkata, in West Bengal state, opened an investigation into X user SoldierSaffron7 for distributing the Banerjee video.
On X, Dulal Saha Roy, a Kolkata cybercrime officer, sent the user a typewritten notice demanding that they delete the video or “be liable for strict penal action.”
“I am not deleting that, no matter what happens,” the user told Reuters via X direct messaging, declining to give their real name or phone number for fear of being arrested. “They can’t trace (me).”
Election officers told Reuters that all authorities can do is order social media sites to remove content, and then scramble when the sites respond that the posts do not violate their internal policies.
VIGGLE VIDEOS
The dancing videos of Modi and Banerjee, which have drawn 30 million and 1.1 million views on X respectively, were produced with Viggle, a free website. Given a photo and a few simple prompts explained in a tutorial, the site generates videos of the photo’s subject dancing or performing other lifelike actions within minutes.
Banerjee’s office and Viggle co-founder Hang Chu did not respond to inquiries from Reuters.
Beyond the two dancing videos, a 25-second Viggle clip circulating online shows Banerjee in front of a burning hospital, which she then blows up with a remote. It is an AI-altered scene from The Dark Knight (2008) in which the Joker, Batman’s enemy, spreads devastation. The post has drawn 420,000 views.
In an email warning to the user, reviewed by Reuters, X said the West Bengal police believe the video violates Indian IT regulations, but that it had taken no action because it “strongly believes in defending and respecting the voice of our users”.
“They are powerless over me. I didn’t take that (notice) seriously,” the user told Reuters via X direct chat.