As artificial intelligence (AI) has become more mainstream, there is growing concern about how it will influence elections. Potential targets of AI include election processes, election offices, election officials and election vendors.[1]
Generative AI capabilities allow the creation of misleading content, including text-to-video, deepfake videos, text-to-image, AI-altered images, text-to-speech, voice cloning and text-to-text. In the context of an election, a deepfake video of a candidate may propagate information that the candidate does not endorse.[2] Chatbots could spread misinformation about election locations, times or voting methods. Unlike the work of past malicious actors, these techniques require little technical skill and can spread rapidly.[3]
During the 2023 Argentine primary elections, Javier Milei's team distributed AI-generated images, including a fabricated image of his rival Sergio Massa that drew 3 million views.[4] The team also created an unofficial Instagram account entitled "AI for the Homeland".[4] Sergio Massa's team also distributed AI-generated images and videos.[5][6]
In the run-up to the 2024 elections in Bangladesh, a Muslim-majority country with conservative social norms, deepfake videos of female opposition politicians appeared.[7] Rumin Farhana was pictured in a bikini, while Nipun Ray was shown in a swimming pool.[7]
In the 2024 Indian general election, politicians used deepfakes in their campaign materials, including deepfakes of politicians who had died before the election. Muthuvel Karunanidhi's party posted videos featuring his likeness even though he had died in 2018.[8][9][10] A video posted by the All-India Anna Dravidian Progressive Federation party included an audio clip of Jayaram Jayalalithaa even though she had died in 2016.[11][12] The Deepfakes Analysis Unit (DAU) is an open platform created in March 2024 that allows the public to submit misleading content and have it assessed for AI generation.[13]
AI was also used to translate political speeches in real time, a capability widely used to reach more voters.[9][10]
In May 2023, ahead of New Zealand's October 2023 general election, the New Zealand National Party published a "series of AI-generated political advertisements" on its Instagram account.[14] After confirming that the images were faked, a party spokesperson said they were "an innovative way to drive our social media".[14]
AI was used by the imprisoned ex-Prime Minister Imran Khan and his media team in the 2024 Pakistani elections.[15] AI-generated audio of his voice was added to a video clip and broadcast at a virtual rally.[15] Khan claimed that an op-ed published under his name in The Economist had been written by AI, a claim his team later denied.[15] The article was liked and shared on social media by thousands of users.
In the 2024 South African general elections, there were several uses of AI content.[16] A deepfake video of Joe Biden emerged on social media, showing him saying that the U.S. would place sanctions on South Africa and declare it an enemy state if the African National Congress (ANC) won.[16] In another deepfake video, posted to social media and viewed more than 158,000 times, Donald Trump was shown endorsing the uMkhonto weSizwe party.[16] Less than three months before the elections, a deepfake video showing U.S. rapper Eminem endorsing the Economic Freedom Fighters party while criticizing the ANC was viewed more than 173,000 times.[16]
In the 2022 South Korean presidential election, a committee for presidential candidate Yoon Suk Yeol released an AI avatar, "AI Yoon Suk-yeol", to campaign in places the candidate could not go. Rival presidential candidate Lee Jae-myung introduced a chatbot that provided information about his pledges.[17]
Deepfakes were used to spread misinformation before South Korea's 2024 parliamentary elections, with one source reporting 129 deepfake violations of election laws within a two-week period.[18] Seoul hosted the 2024 Summit for Democracy, a virtual gathering of world leaders initiated by US President Joe Biden in 2021.[19] The summit focused on digital threats to democracy, including artificial intelligence and deepfakes.[20]
Steve Endacott created "AI Steve", an AI avatar that served as the face of his campaign for Member of Parliament.[21] The Centre for Emerging Technology and Security published a report on the threat of AI to the 2024 UK general election, which found that AI's impact was limited but could damage the democratic system.[22]
Regulation of AI with regard to elections was unlikely to be resolved for most of the 2024 United States general election season.[23][24] The campaign of the 2024 Republican nominee,[25] Donald Trump, used deepfake videos of political opponents in campaign ads and fake images showing Trump with Black supporters.[23][26] In 2023, while Joe Biden was still running for re-election, his presidential campaign prepared a task force to respond to AI images and videos.[27]
A Democratic consultant working for Dean Phillips also admitted to using AI to generate a robocall that used Joe Biden's voice to discourage voter participation.[28]
Generative AI increased the efficiency with which political candidates were able to raise money by analyzing donor data and identifying possible donors and target audiences.[29][30]
The Commission on Elections (COMELEC) issued guidelines on the use of AI, to be implemented starting with the 2025 Philippine general election, including the parallel Bangsamoro Parliament election. The guidelines mandate candidates to disclose the use of AI in their campaign materials and prohibit using the technology to spread misinformation against rivals.[31] This is the first time the COMELEC has released guidelines on campaigning through social media.[32]
US states have attempted to regulate AI use in elections and campaigns with varying degrees of success.[33] The National Conference of State Legislatures has compiled a list of state legislation regarding AI use as of 2024, some of it carrying both civil and criminal penalties.[34] Oregon Senate Bill 1571 requires that campaign communications in Oregon disclose the use of AI.[35][36][37] California has enacted legislation making it illegal to use deepfakes to discredit political opponents within sixty days of an election.[38][39]
Midjourney, an AI image generator, has started blocking users from creating fake images of the 2024 US presidential candidates.[40] Research from the Center for Countering Digital Hate found that image generators such as Midjourney, ChatGPT Plus, DreamStudio and Microsoft's Image Creator produced images constituting election disinformation in 41% of the test prompts they tried.[40] OpenAI implemented policies to counter election misinformation, such as adding digital credentials that record image provenance and deploying a classifier to detect whether images were AI-generated.[41]
As the use of AI and its associated tools in political campaigning and messaging increases, many ethical concerns have been raised.[42] Campaigns have used AI in a number of ways, including speech writing, fundraising, voter behaviour prediction, fake robocalls and the generation of fake news.[42] There are currently no US federal rules governing the use of AI in campaigning, and its use can undermine public trust.[42] Yet according to one expert: "A lot of the questions we're asking about AI are the same questions we've asked about rhetoric and persuasion for thousands of years."[42]
As insight into how AI is used has grown, concerns have broadened beyond the generation of misinformation or fake news.[43] Its use by politicians and political parties for "purposes that are not overtly malicious" can also raise ethical worries.[43] For instance, the use of 'softfakes' has become more common: images, videos or audio clips, often edited by campaign teams, "to make a political candidate seem more appealing".[43] An example can be found in Indonesia's presidential election, where the winning candidate created and promoted cartoonish avatars to rebrand himself.[43]
How citizens obtain information has been increasingly affected by AI, especially through online platforms and social media.[44] These platforms are part of complex and opaque systems that can have a "significant impact on freedom of expression", and the spread of AI in campaigns also puts great pressure on "voters' mental security".[44] As AI use in political campaigning becomes common, together with globalization, more 'universalized' content can be deployed, so that territorial boundaries matter less.[44] When AI interferes with people's reasoning processes, "dangerous behaviours" can emerge that disrupt society and nation states.[44]
In the future, AI is likely to further revolutionise campaigning through, for example, speech analysis and policy development.[30] Campaign assistants could interact with voters by answering questions about policies, leading to greater accessibility.[30] The cyber security of political campaigns could be enhanced by protecting data from malicious hackers.[30] As the technology advances, AI could create more innovative tools for campaigning, which might also bring "challenges in terms of ethical use and data privacy".[30] Over time, AI-powered political campaigns are likely to become more "data-driven, efficient, and tailored to the evolving dynamics of voter behavior and preferences".[30]
Voice acting is the art of performing a character or providing information to an audience with one's voice. Performers are often called voice actors/actresses in addition to other names. Examples of voice work include animated, off-stage, off-screen, or non-visible characters in various works such as films, dubbed foreign films, anime, television shows, video games, cartoons, documentaries, commercials, audiobooks, radio dramas and comedies, amusement rides, theater productions, puppet shows, and audio games.
Fact-checking is the process of verifying the factual accuracy of questioned reporting and statements. Fact-checking can be conducted before or after the text or content is published or otherwise disseminated. Internal fact-checking is such checking done in-house by the publisher to prevent inaccurate content from being published; when the text is analyzed by a third party, the process is called external fact-checking.
Fake news or information disorder is false or misleading information claiming the aesthetics and legitimacy of news. Fake news often has the aim of damaging the reputation of a person or entity, or making money through advertising revenue. Although false news has always been spread throughout history, the term fake news was first used in the 1890s when sensational reports in newspapers were common. Nevertheless, the term does not have a fixed definition and has been applied broadly to any type of false information presented as news. It has also been used by high-profile people to apply to any news unfavorable to them. Further, disinformation involves spreading false information with harmful intent and is sometimes generated and propagated by hostile foreign actors, particularly during elections. In some definitions, fake news includes satirical articles misinterpreted as genuine, and articles that employ sensationalist or clickbait headlines that are not supported in the text. Because of this diversity of types of false news, researchers are beginning to favour information disorder as a more neutral and informative term.
Deepfakes are images, videos, or audio which are edited or generated using artificial intelligence tools, and which may depict real or non-existent people. They are a type of synthetic media.
Artificial intelligence art is a visual artwork created through the use of an artificial intelligence (AI) program.
Social media was used extensively in the 2020 United States presidential election. Both incumbent president Donald Trump and Democratic Party nominee Joe Biden's campaigns employed digital-first advertising strategies, prioritizing digital advertising over print advertising in the wake of the pandemic. Trump had previously utilized his Twitter account to reach his voters and make announcements, both during and after the 2016 election. The Democratic Party nominee Joe Biden also made use of social media networks to express his views and opinions on important events such as the Trump administration's response to the COVID-19 pandemic, the protests following the murder of George Floyd, and the controversial appointment of Amy Coney Barrett to the Supreme Court.
Synthetic media is a catch-all term for the artificial production, manipulation, and modification of data and media by automated means, especially through the use of artificial intelligence algorithms, such as for the purpose of misleading people or changing an original meaning. Synthetic media as a field has grown rapidly since the creation of generative adversarial networks, primarily through the rise of deepfakes as well as music synthesis, text generation, human image synthesis, speech synthesis, and more. Though experts use the term "synthetic media," individual methods such as deepfakes and text synthesis are sometimes not referred to as such by the media but instead by their respective terminology. Significant attention arose towards the field of synthetic media starting in 2017 when Motherboard reported on the emergence of AI-altered pornographic videos that inserted the faces of famous actresses. Potential hazards of synthetic media include the spread of misinformation, further loss of trust in institutions such as media and government, the mass automation of creative and journalistic jobs and a retreat into AI-generated fantasy worlds. Synthetic media is an applied form of artificial imagination.
Deepfake pornography, or simply fake pornography, is a type of synthetic pornography that is created via altering already-existing photographs or video by applying deepfake technology to the images of the participants. The use of deepfake porn has sparked controversy because it involves the making and sharing of realistic videos featuring non-consenting individuals, typically female celebrities, and is sometimes used for revenge porn. Efforts are being made to combat these ethical concerns through legislation and technology-based solutions.
Audio deepfake technology, also referred to as voice cloning or deepfake audio, is an application of artificial intelligence designed to generate speech that convincingly mimics specific individuals, often synthesizing phrases or sentences they have never spoken. Initially developed with the intent to enhance various aspects of human life, it has practical applications such as generating audiobooks and assisting individuals who have lost their voices due to medical conditions. Additionally, it has commercial uses, including the creation of personalized digital assistants, natural-sounding text-to-speech systems, and advanced speech translation services.
Midjourney is a generative artificial intelligence program and service created and hosted by the San Francisco-based independent research lab Midjourney, Inc. Midjourney generates images from natural language descriptions, called prompts, similar to OpenAI's DALL-E and Stability AI's Stable Diffusion. It is one of the technologies of the AI boom.
Synthesia is a synthetic media generation company that develops software used to create AI generated video content. It is based in London, England.
Generative artificial intelligence is a subset of artificial intelligence that uses generative models to produce text, images, videos, or other forms of data. These models often generate output in response to specific prompts. Generative AI systems learn the underlying patterns and structures of their training data, enabling them to create new data.
The 2024 New Hampshire Democratic presidential primary was held on January 23, 2024, as part of the Democratic Party primaries for the 2024 presidential election. The January New Hampshire primary was not sanctioned by the Democratic National Committee (DNC). The DNC-approved 2024 calendar placed the South Carolina primary first, but New Hampshire state law mandates them to hold the first primary in the country, and a "bipartisan group of state politicians", including the chairs of the Democratic and the Republican parties, announced that the state would preserve this status. Thus, the DNC initially stripped all 33 of the state's delegates that would have been allocated to the Democratic National Convention. The delegates will be allowed to be seated at the convention following the holding of a party-backed firehouse primary on April 27.
TrumporBiden2024 was a Twitch channel that featured artificial intelligence (AI) versions of former U.S. president Donald Trump and current president Joe Biden engaging in an endless, profane, comedic political debate. After launching, the channel gained the attention of news media. The channel was presented as satire.
Spamouflage, Dragonbridge, Spamouflage Dragon, Storm 1376, or Taizi Flood is an online propaganda and disinformation operation that uses a network of social media accounts to make posts in favor of the Chinese government and harass dissidents and journalists overseas since 2017. Beginning in the early 2020s, Spamouflage accounts also began making posts about American and Taiwanese politics. It is widely believed that the Chinese government, particularly the Ministry of Public Security, is behind the network. Spamouflage has increasingly used generative artificial intelligence for influence operations. The campaign has largely failed to receive views from real users, although it has attracted some organic engagement using new tactics.
The Iranian government has interfered in the 2024 United States elections through social media efforts and hacking operations. Iranian interference has come amidst larger foreign interference in the 2024 United States elections. The efforts were identified as an effort to tip the race against former president Donald Trump through propaganda and disinformation campaigns. However, Iranian efforts have also targeted Joe Biden and Kamala Harris with similar attacks, which The New York Times stated suggested "a wider goal of sowing internal discord and discrediting the democratic system in the United States more broadly in the eyes of the world."
The Russian government has interfered in the 2024 United States elections through disinformation and propaganda campaigns aimed at damaging Joe Biden, Kamala Harris and other Democrats while boosting the candidacy of Donald Trump and other candidates who support isolationism and undercutting support for Ukraine aid and NATO. Russia's efforts represent the most active threat of foreign interference in the 2024 United States elections and follow Russia's previous pattern of spreading disinformation through fake social media accounts and right-wing YouTube channels in order to divide American society and foster anti-Americanism. On September 4, 2024, the U.S. Department of Justice indicted members of Tenet Media for having received $9.7 million as part of a covert Russian influence operation to co-opt American right-wing influencers to espouse pro-Russian content and conspiracy theories. Many of the followers of the related influencers were encouraged to steal ballots, intimidate voters, and remove or destroy ballot drop boxes in the weeks leading up to the election.
China has interfered in the 2024 United States elections through propaganda and disinformation campaigns, primarily linked to its Spamouflage influence operation. The efforts come amidst larger foreign interference in the 2024 United States elections.
Several nations have interfered in the 2024 United States elections. U.S. intelligence agencies have identified China, Iran, and Russia as the most pressing concerns, with Russia being the most active threat.
Artificial intelligence (AI) has been developed rapidly in recent years, and has been used by groups in the 2024 United States presidential election, as well as foreign groups such as China, Russia and Iran. There have also been efforts to control the use of generative artificial intelligence, such as those in California.