As artificial intelligence (AI) has become more mainstream, concern has grown about how it might influence elections. Potential targets include election processes, election offices, election officials and election vendors. [1]
Generative AI capabilities allow the creation of misleading content, including text-to-video, deepfake videos, text-to-image, AI-altered images, text-to-speech, voice cloning, and text-to-text. In the context of an election, a deepfake video of a candidate may propagate information that the candidate does not endorse. [3] Chatbots could spread misinformation about polling locations, times or voting methods. Unlike earlier disinformation efforts, these techniques require little technical skill and can spread rapidly. [4]
During the 2023 Argentine primary elections, Javier Milei's team distributed AI-generated images, including a fabricated image of his rival Sergio Massa that drew 3 million views. [5] The team also created an unofficial Instagram account entitled "AI for the Homeland". [5] Sergio Massa's team also distributed AI-generated images and videos. [6] [7]
In the run-up to the 2024 Bangladeshi general election, deepfake videos of female opposition politicians appeared: Rumin Farhana was depicted in a bikini and Nipun Ray in a swimming pool. [8]
In the 2024 French legislative election, deepfake videos circulated on social media: i) Videos purported to show the family of Marine Le Pen: young women, supposedly Le Pen's nieces, were seen skiing, dancing and at the beach "while making fun of France’s racial minorities". However, the family members do not exist. The videos drew over 2 million views on social media. [9] ii) A deepfake of a France24 broadcast appeared to report that the Ukrainian leadership had "tried to lure French president Emmanuel Macron to Ukraine to assassinate him and then blame his death on Russia". [10]
In the months before the December 2024 Ghanaian general election, a network of at least 171 fake accounts was used to spam social media. [11] A group identified as "@TheTPatriots" used the posts to promote the New Patriotic Party, although it is not known whether the two are connected. [11] All of the network's posts were "highly likely" to have been generated by ChatGPT, and the network appears to be the "first secretly partisan network using AI to influence elections in Ghana". [11] The opposition National Democratic Congress was also criticized, with its leader John Mahama being called a drunkard. [11]
In the 2024 Indian general election, politicians used deepfakes in their campaign materials, including deepfakes of politicians who had died before the election. Muthuvel Karunanidhi's party posted content with his likeness even though he had died in 2018. [12] [13] [14] The All-India Anna Dravidian Progressive Federation posted a video containing an audio clip of Jayaram Jayalalithaa even though she had died in 2016. [15] [16] The Deepfakes Analysis Unit (DAU), an open source platform created in March 2024, lets the public share misleading content and assess whether it is AI-generated. [17]
AI was also used to translate political speeches in real time. [13] This translating ability was widely used to reach more voters. [13] [14]
In the last weeks of the 2024 Irish general election, a spoof election poster appeared in Dublin featuring "an AI-generated candidate with three arms". [18] The candidate is named Aidan Irwin, but no one stood in the election under that name. A slogan on the poster reads "put matters into artificial intelligence’s hands". [18] The otherwise convincing poster shows a man who "has six fingers on one hand, three arms, and a distorted thumb". [18]
In May 2023, ahead of the New Zealand general election that October, the New Zealand National Party published a "series of AI-generated political advertisements" on its Instagram account. [19] After confirming that the images were faked, a party spokesperson said it was "an innovative way to drive our social media". [19]
AI was used by the imprisoned ex-Prime Minister Imran Khan and his media team in the 2024 Pakistani general election: [20] i) AI-generated audio of his voice was added to a video clip and broadcast at a virtual rally. [20] ii) An op-ed in The Economist attributed to Khan was alleged to have been written by AI, an allegation his team denied. [20] The article was liked and shared by thousands of social media users.
In the 2024 South African general election, there were several uses of AI-generated content: [21] i) A deepfake video of Joe Biden emerged on social media showing him saying that the U.S. would place sanctions on South Africa and declare it an enemy state if the African National Congress (ANC) won. [21] ii) A deepfake video posted to social media showed Donald Trump endorsing the uMkhonto weSizwe party; it was viewed more than 158,000 times. [21] iii) Less than three months before the election, a deepfake video showed U.S. rapper Eminem endorsing the Economic Freedom Fighters party while criticizing the ANC; it was viewed more than 173,000 times on social media. [21]
In the 2022 South Korean presidential election, the committee of presidential candidate Yoon Suk Yeol released an AI avatar, "AI Yoon Seok-yeol", to campaign in places the candidate could not go. Rival candidate Lee Jae-myung introduced a chatbot that provided information about his pledges. [22]
Deepfakes were used to spread misinformation before the 2024 South Korean legislative election, with one source reporting 129 deepfake violations of election laws within a two-week period. [23]
Seoul hosted the 2024 Summit for Democracy, a virtual gathering of world leaders initiated by US President Joe Biden in 2021. [24] The focus of the summit was on digital threats to democracy including artificial intelligence and deepfakes. [25]
AI-generated content was used during the 2024 Taiwanese presidential election. Among the media were: i) A deepfake video purporting to show Chinese president Xi Jinping supporting the presidential election. The video was "widely circulated" on social media, often "accompanied by claims that Xi supported candidates from one of the two opposition parties". [26] ii) A deepfake video showed U.S. congressman Rob Wittman appearing to support Taiwan's Democratic Progressive Party, saying that the U.S. would increase its military support, accelerating "all arms sales to Taiwan." It was shown on various social media platforms. [27]
The Centre for Emerging Technology and Security published a report on the threat of AI to the 2024 UK general election. The report found that AI's impact on the election was limited but that it may damage the broader democratic system. [28]
In the run-up to the 2024 UK general election, AI-generated videos spread extensively on social media, including: i) A deepfake video showing then-PM Rishi Sunak claiming that he would "require 18-year-olds to be sent to active war zones in Gaza and Ukraine as part of their national service". The video had more than 400,000 views. [29] ii) A deepfake video showing Keir Starmer "swearing repeatedly at a staffer". Comments from the original poster included calling Starmer a "disgusting bully". The social media site hosting the video refused to delete it despite requests. [30]
Entrepreneur Steve Endacott from the south of England created "AI Steve", [31] an AI avatar that served as the face of his campaign for Member of Parliament. [32]
Officials from the ODNI and FBI have stated that Russia, Iran, and China used generative AI tools to create fake and divisive text, photos, video, and audio content to foster anti-Americanism and conduct covert influence campaigns. [33] The use of artificial intelligence was described as an accelerant rather than a revolutionary change in influence efforts. [34] Regulation of AI with regard to elections was unlikely to be resolved during most of the 2024 United States general election season. [35] [36]
The campaign of the 2024 Republican nominee, [37] Donald Trump, used deepfake videos of political opponents in campaign ads and fake images showing Trump with Black supporters. [35] [38] In 2023, while Joe Biden was still running for re-election, his presidential campaign prepared a task force to respond to AI-generated images and videos. [39]
A Democratic consultant working for Dean Phillips also admitted to using AI to generate a robocall which used Joe Biden's voice to discourage voter participation. [40]
Generative AI increased the efficiency with which political candidates were able to raise money by analyzing donor data and identifying possible donors and target audiences. [41]
The Commission on Elections (COMELEC) issued guidelines on the use of AI, to be implemented starting with the 2025 Philippine general election, including the parallel Bangsamoro Parliament election. The guidelines mandate candidates to disclose the use of AI in their campaign materials and prohibit using the technology to spread misinformation against rivals. [42] This is the first time the COMELEC has released guidelines on campaigning through social media. [43]
US states have attempted to regulate AI use in elections and campaigns with varying degrees of success. [44] The National Conference of State Legislatures has compiled a state-by-state list of 2024 legislation regarding AI use, some of it carrying both civil and criminal penalties. [45] Oregon Senate Bill 1571 requires that campaign communications in Oregon disclose the use of AI. [46] [47] [48] California has enacted legislation making it illegal to use deepfakes to discredit political opponents within sixty days of an election. [49] [50]
Midjourney, an AI image generator, has begun blocking users from creating fake images of the 2024 US presidential candidates. [51] Research from the Center for Countering Digital Hate found that image generators such as Midjourney, ChatGPT Plus, DreamStudio, and Microsoft's Image Creator produced images constituting election disinformation in 41% of the test prompts tried. [51] OpenAI implemented policies to counter election misinformation, such as adding digital credentials indicating image origin and a classifier to detect whether images were AI-generated. [52]
AI has begun to be used in election interference by foreign governments. [53] [54] [55] Governments thought to be using AI to interfere in external elections include Russia, Iran and China. [53] According to U.S. intelligence officials, Russia was the most prolific nation targeting the 2024 presidential election, with its influence operations "spreading synthetic images, video, audio and text online". [53] Iran has reportedly generated fake social media posts and stories targeting voters "across the political spectrum on polarizing issues during the presidential election". [53] The Chinese government has used "broader influence operations" that aim to shape its global image and "amplify divisive topics in the U.S. such as drug use, immigration, and abortion". [53] For example, Spamouflage has increasingly used generative AI for influence operations. [56]
Outside the US, a deepfake video of Moldova’s pro-Western president Maia Sandu showed her "throwing her support behind a political party friendly to Russia". [54] Officials in Moldova "believe the Russian government is behind the activity". [54] In Slovakia, faked audio clips of the liberal party leader discussed "vote rigging and raising the price of beer". [54] The Chinese government has used AI to stir concerns about US interference in Taiwan. [54] A fake clip on social media showed the vice chairman of the U.S. House Armed Services Committee promising "stronger U.S. military support for Taiwan if the incumbent party’s candidates were elected in January". [54]
As the use of AI and its associated tools in political campaigning and messaging increases, many ethical concerns have been raised. [57] Campaigns have used AI in a number of ways, including speech writing, fundraising, voter behaviour prediction, fake robocalls and the generation of fake news. [57] There are currently no US federal rules governing the use of AI in campaigning, so its use can undermine public trust. [57] Yet according to one expert: "A lot of the questions we're asking about AI are the same questions we've asked about rhetoric and persuasion for thousands of years." [57]
As insight into how AI is used grows, concerns have broadened beyond the generation of misinformation or fake news. [58] Its use by politicians and political parties for "purposes that are not overtly malicious" can also raise ethical worries. [58] For instance, the use of 'softfakes' has become more common: images, videos or audio clips edited, often by campaign teams, "to make a political candidate seem more appealing". [58] An example can be found in Indonesia's presidential election, where the winning candidate created and promoted cartoonish avatars to rebrand himself. [58]
How citizens obtain information is increasingly shaped by AI, especially through online platforms and social media. [59] These platforms form part of complex and opaque systems that can have a "significant impact on freedom of expression", with the spread of AI in campaigns also placing huge pressure on "voters’ mental security". [59] As AI use in political campaigning becomes more common, and combined with globalization, campaign content becomes more 'universalized', making territorial boundaries matter less. [59] Where AI interferes with people's reasoning processes, "dangerous behaviours" can emerge that disrupt important levels of society and nation states. [59]