As artificial intelligence (AI) has become more mainstream, there is growing concern about how this will influence elections. Potential targets of AI include election processes, election offices, election officials and election vendors. [1]
Generative AI capabilities allow the creation of misleading content. Examples include text-to-video, deepfake videos, text-to-image, AI-altered images, text-to-speech, voice cloning, and text-to-text. In the context of an election, a deepfake video of a candidate may propagate information that the candidate does not endorse. [3] Chatbots could spread misinformation about polling locations, times or voting methods. Unlike earlier disinformation efforts, these techniques require little technical skill and can spread rapidly. [4]
During the 2023 Argentine primary elections, Javier Milei's team distributed AI-generated images, including a fabricated image of his rival Sergio Massa that drew 3 million views. [5] The team also created an unofficial Instagram account entitled "AI for the Homeland". [5] Sergio Massa's team also distributed AI-generated images and videos. [6] [7]
In the run-up to the 2024 Bangladeshi general election, deepfake videos of female opposition politicians appeared: Rumin Farhana was depicted in a bikini and Nipun Ray in a swimming pool. [8]
In the 2024 French legislative election, deepfake videos appeared, including: i) Videos purporting to show the family of Marine Le Pen, in which young women, supposedly Le Pen's nieces, are seen skiing, dancing and at the beach "while making fun of France’s racial minorities". The family members do not exist; the videos drew over 2 million views on social media. [9] ii) A deepfake of a France24 broadcast, seen on social media, that appeared to report that the Ukrainian leadership had "tried to lure French president Emmanuel Macron to Ukraine to assassinate him and then blame his death on Russia". [10]
During the months before the December 2024 Ghanaian general election, a network of at least 171 fake accounts was used to spam social media. [11] The posts, attributed to a group identified as "@TheTPatriots", promoted the New Patriotic Party, although it is not known whether the two are connected. [11] All of the network's posts were "highly likely" to have been generated by ChatGPT, and the network appears to be the "first secretly partisan network using AI to influence elections in Ghana". [11] The opposition National Democratic Congress was also attacked, with its leader John Mahama being called a drunkard. [11]
In the 2024 Indian general election, politicians used deepfakes in their campaign materials, including deepfakes of politicians who had died before the election. Muthuvel Karunanidhi's party posted videos with his likeness even though he had died in 2018. [12] [13] [14] A video posted by the All-India Anna Dravidian Progressive Federation included an audio clip of Jayaram Jayalalithaa even though she had died in 2016. [15] [16] The Deepfakes Analysis Unit (DAU), an open-source platform created in March 2024, allows the public to share misleading content and assess whether it was AI-generated. [17]
AI was also used to translate political speeches in real time, an ability that was widely used to reach more voters. [13] [14]
In the final weeks of the 2024 Irish general election, a spoof election poster appeared in Dublin featuring "an AI-generated candidate with three arms". [18] The candidate is named Aidan Irwin, but no one of that name stood in the election. A slogan on the poster reads "put matters into artificial intelligence’s hands". [18] The otherwise convincing poster shows a man who "has six fingers on one hand, three arms, and a distorted thumb". [18]
In May 2023, ahead of the October 2023 New Zealand general election, the New Zealand National Party published a "series of AI-generated political advertisements" on its Instagram account. [19] After confirming that the images were faked, a party spokesperson said that it was "an innovative way to drive our social media". [19]
AI was used by the imprisoned former prime minister Imran Khan and his media team in the 2024 Pakistani general election: [20] i) AI-generated audio of his voice was added to a video clip and broadcast at a virtual rally. [20] ii) Khan claimed that an op-ed published under his name in The Economist had been written by AI, a claim his team later denied. [20] The article was liked and shared on social media by thousands of users.
In the 2024 South African general election, there were several uses of AI-generated content: [21] i) A deepfake video of Joe Biden emerged on social media showing him saying that "the U.S. would place sanctions on SA and declare it an enemy state if the African National Congress (ANC) won". [21] ii) In another deepfake video, Donald Trump was shown endorsing the uMkhonto weSizwe party; posted to social media, it was viewed more than 158,000 times. [21] iii) Less than three months before the elections, a deepfake video showed U.S. rapper Eminem endorsing the Economic Freedom Fighters party while criticizing the ANC. It was viewed on social media more than 173,000 times. [21]
In the 2022 South Korean presidential election, the campaign committee of presidential candidate Yoon Suk Yeol released an AI avatar, 'AI Yoon Seok-yeol', that would campaign in places the candidate could not go. Rival candidate Lee Jae-myung introduced a chatbot that provided information about his pledges. [22]
Deepfakes were used to spread misinformation before the 2024 South Korean legislative election, with one source reporting 129 deepfake violations of election laws within a two-week period. [23]
Seoul hosted the 2024 Summit for Democracy, a virtual gathering of world leaders initiated by US President Joe Biden in 2021. [24] The focus of the summit was on digital threats to democracy including artificial intelligence and deepfakes. [25]
AI-generated content was used during the 2024 Taiwanese presidential election. Among the media were: i) A deepfake video of Chinese president Xi Jinping that showed him appearing to support the presidential elections. The video was "widely circulated" on social media, often "accompanied by claims that Xi supported candidates from one of the two opposition parties". [26] ii) A deepfake video in which U.S. congressman Rob Wittman appeared to support Taiwan's Democratic Progressive Party, saying that the U.S. would increase its military support, accelerating "all arms sales to Taiwan". It was shared on various social media platforms. [27]
The Centre for Emerging Technology and Security published a report on the threat of AI to the 2024 UK general election. The report found that the impact of AI was limited but that it could damage the democratic system. [28]
In the run-up to the 2024 UK general election, AI-generated videos spread extensively on social media, including: i) A deepfake video showing then prime minister Rishi Sunak claiming that he would "require 18-year-olds to be sent to active war zones in Gaza and Ukraine as part of their national service". The video had more than 400,000 views. [29] ii) A deepfake video showing Labour leader Keir Starmer "swearing repeatedly at a staffer". Comments from the original poster included calling Starmer a "disgusting bully". The social media site hosting the video refused to delete it despite requests. [30]
Entrepreneur Steve Endacott, from the south of England, created "AI Steve", [31] an AI avatar that served as the face of his campaign for member of parliament. [32]
Officials from the ODNI and FBI have stated that Russia, Iran, and China used generative artificial intelligence tools to create fake and divisive text, photos, video, and audio content to foster anti-Americanism and engage in covert influence campaigns. [33] The use of artificial intelligence was described as an accelerant rather than a revolutionary change to influence efforts. [34] Federal regulation of AI with regard to elections remained unresolved throughout most of the 2024 United States general election season. [35] [36]
The campaign of the 2024 Republican nominee, [37] Donald Trump, used deepfake videos of political opponents in campaign ads and fake images showing Trump with black supporters. [35] [38] In 2023, while Joe Biden was still running for re-election, his presidential campaign prepared a task force to respond to AI images and videos. [39]
A Democratic consultant working for Dean Phillips also admitted to using AI to generate a robocall that mimicked Joe Biden's voice to discourage voter participation. [40]
Generative AI increased the efficiency with which political candidates were able to raise money by analyzing donor data and identifying possible donors and target audiences. [41]
The Commission on Elections (COMELEC) issued guidelines on the use of AI, to be implemented starting with the 2025 Philippine general election, including the parallel Bangsamoro Parliament election. The guidelines mandate candidates to disclose the use of AI in their campaign materials and prohibit using the technology to spread misinformation against rivals. [42] This is the first time the COMELEC has released guidelines on campaigning through social media. [43]
US states have attempted regulation of AI use in elections and campaigns with varying degrees of success. [44] The National Conference of State Legislatures has compiled a list of legislation regarding AI use by state as of 2024, some carrying both civil and criminal penalties. [45] Oregon Senate Bill 1571 requires that campaign communications in Oregon disclose the use of AI. [46] [47] [48] California has enacted legislation that makes using deepfakes to discredit political opponents illegal within sixty days of an election. [49] [50]
Midjourney, an AI image generator, has started blocking users from creating fake images of the 2024 US presidential candidates. [51] Research from the Center for Countering Digital Hate found that image generators such as Midjourney, ChatGPT Plus, DreamStudio, and Microsoft's Image Creator produced images constituting election disinformation for 41% of the text prompts they tested. [51] OpenAI implemented policies to counter election misinformation, such as attaching digital credentials that indicate image origin and deploying a classifier to detect whether images were AI-generated. [52]
AI has begun to be used in election interference by foreign governments. [53] [54] [55] Governments thought to be using AI to interfere in external elections include Russia, Iran and China. [53] Russia was thought to be the most prolific nation targeting the 2024 presidential election, with its influence operations "spreading synthetic images, video, audio and text online", according to U.S. intelligence officials. [53] Iran has reportedly generated fake social media posts and stories targeting audiences "across the political spectrum on polarizing issues during the presidential election". [53] The Chinese government has used "broader influence operations" that aim to shape its global image and "amplify divisive topics in the U.S. such as drug use, immigration, and abortion". [53] For example, Spamouflage has increasingly used generative AI for influence operations. [56]
Outside of the US elections, a deepfake video of Moldova’s pro-Western president Maia Sandu shows her "throwing her support behind a political party friendly to Russia." [54] Officials in Moldova "believe the Russian government is behind the activity". [54] Slovakia's liberal party leader had audio clips faked which discussed "vote rigging and raising the price of beer". [54] The Chinese government has used AI to stir concerns about US interference in Taiwan. [54] A fake video on social media showed the vice chairman of the U.S. House Armed Services Committee promising "stronger U.S. military support for Taiwan if the incumbent party’s candidates were elected in January". [54]
As the use of AI and its associated tools in political campaigning and messaging increases, many ethical concerns have been raised. [57] Campaigns have used AI in a number of ways, including speech writing, fundraising, voter behaviour prediction, fake robocalls and the generation of fake news. [57] There are currently no US federal rules governing the use of AI in campaigning, and its use can therefore undermine public trust. [57] Yet according to one expert: "A lot of the questions we're asking about AI are the same questions we've asked about rhetoric and persuasion for thousands of years." [57]
As insight into how AI is used has grown, concerns have broadened beyond the generation of misinformation or fake news. [58] Its use by politicians and political parties for "purposes that are not overtly malicious" can also raise ethical worries. [58] For instance, the use of 'softfakes' has become more common. [58] These can be images, videos or audio clips that have been edited, often by campaign teams, "to make a political candidate seem more appealing." [58] An example can be found in Indonesia's presidential election, where the winning candidate created and promoted cartoonish avatars of himself as a rebranding exercise. [58]
How citizens obtain information has been increasingly shaped by AI, especially through online platforms and social media. [59] These platforms are part of complex and opaque systems that can have a "significant impact on freedom of expression", with the generalisation of AI in campaigns also placing huge pressure on "voters’ mental security". [59] As AI use in political campaigning becomes common, and combined with globalization, more 'universalized' content can be deployed so that territorial boundaries matter less. [59] Where AI interferes with people's reasoning processes, "dangerous behaviours" can emerge that disrupt important levels of society and nation states. [59]
Disinformation is misleading content deliberately spread to deceive people or to secure economic or political gain, and which may cause public harm. It is an orchestrated adversarial activity in which actors employ strategic deceptions and media manipulation tactics to advance political, military, or commercial goals. Disinformation is implemented through attacks that "weaponize multiple rhetorical strategies and forms of knowing—including not only falsehoods but also truths, half-truths, and value judgements—to exploit and amplify culture wars and other identity-driven controversies."
Misinformation is incorrect or misleading information. Misinformation and disinformation are not interchangeable terms: misinformation can exist with or without specific malicious intent, whereas disinformation is distinct in that the information is deliberately deceptive and propagated. Misinformation can include inaccurate, incomplete, misleading, or false information as well as selective or half-truths. In January 2024, the World Economic Forum identified misinformation and disinformation, propagated by both internal and external interests to "widen societal and political divides", as the most severe global risks within the next two years.
A media prank is a type of media event, perpetrated by staged speeches, activities, or press releases, designed to trick legitimate journalists into publishing erroneous or misleading articles. The term may also refer to such stories if planted by fake journalists, as well as the false story thereby published. A media prank is a form of culture jamming generally done as performance art or a practical joke for purposes of a humorous critique of mass media.
Deepfakes are images, videos, or audio which are edited or generated using artificial intelligence tools, and which may depict real or non-existent people. They are a type of synthetic media and a modern form of media prank.
Artificial intelligence art is visual artwork created or enhanced through the use of artificial intelligence (AI) programs.
Fake nude photography is the creation of nude photographs designed to appear as genuine nudes of an individual. The motivations for the creation of these modified photographs include curiosity, sexual gratification, the stigmatization or embarrassment of the subject, and commercial gain, such as through the sale of the photographs via pornographic websites. Fakes can be created using image editing software or through machine learning. Fake images created using the latter method are called deepfakes.
Social media was used extensively in the 2020 United States presidential election. Both incumbent president Donald Trump and Democratic Party nominee Joe Biden's campaigns employed digital-first advertising strategies, prioritizing digital advertising over print advertising in the wake of the pandemic. Trump had previously utilized his Twitter account to reach his voters and make announcements, both during and after the 2016 election. The Democratic Party nominee Joe Biden also made use of social media networks to express his views and opinions on important events such as the Trump administration's response to the COVID-19 pandemic, the protests following the murder of George Floyd, and the controversial appointment of Amy Coney Barrett to the Supreme Court.
Synthetic media is a catch-all term for the artificial production, manipulation, and modification of data and media by automated means, especially through the use of artificial intelligence algorithms, such as for the purpose of misleading people or changing an original meaning. Synthetic media as a field has grown rapidly since the creation of generative adversarial networks, primarily through the rise of deepfakes as well as music synthesis, text generation, human image synthesis, speech synthesis, and more. Though experts use the term "synthetic media," individual methods such as deepfakes and text synthesis are sometimes not referred to as such by the media but instead by their respective terminology. Significant attention arose towards the field of synthetic media starting in 2017, when Motherboard reported on the emergence of AI-altered pornographic videos that insert the faces of famous actresses. Potential hazards of synthetic media include the spread of misinformation, further loss of trust in institutions such as media and government, the mass automation of creative and journalistic jobs, and a retreat into AI-generated fantasy worlds. Synthetic media is an applied form of artificial imagination.
Deepfake pornography, or simply fake pornography, is a type of synthetic pornography that is created via altering already-existing photographs or video by applying deepfake technology to the images of the participants. The use of deepfake pornography has sparked controversy because it involves the making and sharing of realistic videos featuring non-consenting individuals, typically female celebrities, and is sometimes used for revenge porn. Efforts are being made to combat these ethical concerns through legislation and technology-based solutions.
Audio deepfake technology, also referred to as voice cloning or deepfake audio, is an application of artificial intelligence designed to generate speech that convincingly mimics specific individuals, often synthesizing phrases or sentences they have never spoken. Initially developed with the intent to enhance various aspects of human life, it has practical applications such as generating audiobooks and assisting individuals who have lost their voices due to medical conditions. Additionally, it has commercial uses, including the creation of personalized digital assistants, natural-sounding text-to-speech systems, and advanced speech translation services.
Generative artificial intelligence is a subset of artificial intelligence that uses generative models to produce text, images, videos, or other forms of data. These models learn the underlying patterns and structures of their training data and use them to produce new data based on the input, which often comes in the form of natural language prompts.
TrumporBiden2024 was a Twitch channel that featured artificial intelligence (AI) versions of then former U.S. president Donald Trump and then president Joe Biden engaging in an endless, profane, comedic political debate. After launching, the channel gained the attention of news media. The channel was presented as satire.
In late January 2024, sexually explicit AI-generated deepfake images of American musician Taylor Swift proliferated on the social media platforms 4chan and X. Several artificial images of Swift of a sexual or violent nature were quickly spread, with one post reported to have been seen over 47 million times before its eventual removal. The images led Microsoft to enhance Microsoft Designer's text-to-image model to prevent future abuse. Moreover, these images prompted responses from anti-sexual assault advocacy groups, US politicians, Swifties, and Microsoft CEO Satya Nadella, among others, and it has been suggested that Swift's influence could result in new legislation regarding the creation of deepfake pornography.
Graphika is an American social network analysis company known for tracking online disinformation. It was established in 2013.
Spamouflage, Dragonbridge, Spamouflage Dragon, Storm 1376, or Taizi Flood is an online propaganda and disinformation operation that uses a network of social media accounts to make posts in favor of the Chinese government and harass dissidents and journalists overseas since 2017. Beginning in the early 2020s, Spamouflage accounts also began making posts about American and Taiwanese politics. It is widely believed that the Chinese government, particularly the Ministry of Public Security, is behind the network. Spamouflage has increasingly used generative artificial intelligence for influence operations. The campaign has largely failed to receive views from real users, although it has attracted some organic engagement using new tactics.
The Russian government has interfered in the 2024 United States elections through disinformation and propaganda campaigns aimed at damaging Joe Biden, Kamala Harris and other Democrats while boosting the candidacy of Donald Trump and other candidates who support isolationism and undercutting support for Ukraine aid and NATO. Russia's efforts represent the most active threat of foreign interference in the 2024 United States elections and follow Russia's previous pattern of spreading disinformation through fake social media accounts and right-wing YouTube channels in order to divide American society and foster anti-Americanism. On September 4, 2024, the U.S. Department of Justice indicted members of Tenet Media for having received $9.7 million as part of a covert Russian influence operation to co-opt American right-wing influencers to espouse pro-Russian content and conspiracy theories. Many of the followers of the related influencers were encouraged to steal ballots, intimidate voters, and remove or destroy ballot drop boxes in the weeks leading up to the election.
The Chinese government has interfered in the 2024 United States elections through propaganda and disinformation campaigns, primarily linked to its Spamouflage influence operation. The efforts come amidst larger foreign interference in the 2024 United States elections.
Several nations have interfered in the 2024 United States elections. U.S. intelligence agencies have identified China, Iran, and Russia as the most pressing concerns, with Russia being the most active threat.
Artificial intelligence (AI) has been developed rapidly in recent years, and has been used by groups in the 2024 United States presidential election, as well as foreign groups such as China, Russia and Iran. There have also been efforts to control the use of generative artificial intelligence, such as those in California.
Algorithmic party platforms are a recent development in political campaigning where artificial intelligence (AI) and machine learning are used to shape and adjust party messaging dynamically. Unlike traditional platforms that are drafted well before an election, these platforms adapt based on real-time data such as polling results, voter sentiment, and trends on social media. This allows campaigns to remain responsive to emerging issues throughout the election cycle.