
Updated April 3rd, 2024 at 20:12 IST

AI threat during elections: Voice morphing, deepfakes and more

AI-driven deception, distortion and viral deepfakes are expected to fuel election interference and disinformation campaigns

Reported by: Business Desk

Artificial Intelligence (AI) is likely to cause electoral harm, with algorithms generating disinformation campaigns through fake news and propaganda at scale, according to experts.

Emerging technologies like deepfakes pose significant threats, including election interference and the spread of misinformation, as per a recent Zscaler report.

These threats are already playing out, with AI having been implicated in misleading tactics during US elections.

Recently, the technology was used to generate robocalls impersonating US President Joe Biden to discourage voter turnout.

These campaigns can target specific demographics or regions with tailored messages designed to manipulate public opinion, according to AI expert and TechWhisperer co-founder Jaspreet Bindra.

Microtargeting

AI algorithms can analyse vast amounts of data to create highly specific profiles of individual voters, which can in turn be used to target people with personalised messages, advertisements, and misinformation tailored to their vulnerabilities or biases, Bindra explained.

“This is what happened with Cambridge Analytica and the 2016 elections in the US,” he added.
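To give a sense of the mechanics Bindra describes, the sketch below shows, in purely illustrative terms, how off-the-shelf clustering could split a set of voter records into segments that might each be addressed with a differently tailored message. Everything here is an assumption for the example: the attribute names and data are invented, and KMeans stands in for whatever proprietary profiling a real campaign might use.

```python
# Hypothetical illustration only: segmenting synthetic "voter" records into
# clusters that could each receive a differently tailored message.
# All column names and values are invented for this sketch.
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)
voters = pd.DataFrame({
    "age": rng.integers(18, 80, size=500),
    "news_hours_per_week": rng.uniform(0, 20, size=500),
    "social_media_engagement": rng.uniform(0, 1, size=500),
    "issue_concern_economy": rng.uniform(0, 1, size=500),
    "issue_concern_security": rng.uniform(0, 1, size=500),
})

# Standardise features so no single attribute dominates the distance metric.
features = StandardScaler().fit_transform(voters)

# Group voters into a handful of segments; each cluster centre describes the
# "average" member, which is what a tailored message would be built around.
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(features)
voters["segment"] = kmeans.labels_

# Summarise each segment's average profile.
print(voters.groupby("segment").mean().round(2))
```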

These threats are not confined to any one geography, as the use of AI in election manipulation may not be limited to domestic actors, Zscaler suggested.

“State-sponsored entities could also exploit AI to create confusion and undermine trust in the electoral process,” it added. 

Notably, US intelligence agencies have warned that Russia and China will likely leverage AI as part of attempts to influence US elections, as per reports to the Senate Intelligence Committee.

Deepfake technology

The misuse of artificial intelligence to generate deepfakes, or seemingly real speech and visuals of people ranging from celebrities to politicians, has been an emerging concern.

Actress Rashmika Mandanna became a victim of deepfakes on social media last year, and in the West, manipulated images of popular singer Taylor Swift did the rounds, prompting microblogging site X to restrict searches related to her name.

In January this year, a disconcerting trend emerged of deceased politicians being brought back to life through deepfakes for electoral campaigning.


DMK Lok Sabha candidate TR Baalu received an AI-generated video of former Tamil Nadu Chief Minister and late actor-politician M Karunanidhi, who passed away in 2018.

In the video, Karunanidhi’s deepfake was seen promoting Baalu’s autobiography and went on to praise the leadership of his son, Tamil Nadu Chief Minister MK Stalin.

This was the third attempt at generating an AI deepfake of the late politician for public events in a span of 6 months.

Senthil Nayagam, founder of Muonium, the AI media tech firm behind the Karunanidhi deepfake, used publicly available data of the late actor-politician to train a speech model and recreate the leader’s 1990s likeness, when he was much younger, as per an Al Jazeera report.

Karunanidhi gave his last public interview in 2016, after which he became frail and his voice grew hoarse.

“This is perhaps the greatest threat, where AI-generated deepfake videos and audio can be used to create convincing but entirely fabricated content, such as speeches or interviews, that can be used to discredit candidates or spread false information,” Bindra noted, in the context of the Joe Biden deepfake.

Social media manipulation

AI is capable of manipulating algorithms to amplify divisive content, promote certain viewpoints, or suppress opposing voices in order to influence public perception, Bindra said.

This will be further propelled by AI-powered bot networks that create the illusion of grassroots support and spread misinformation en masse, making it difficult to distinguish genuine human activity from automated manipulation, he added.

Notably, 20 companies signed the Tech Accord to Combat Deceptive Use of AI at the Munich Security Conference on 16 February, including Google, IBM, Amazon, Microsoft, Facebook and Instagram’s parent company Meta, OpenAI, and X (formerly Twitter).


Published April 3rd, 2024 at 20:11 IST
