There have been many debates on the societal effects of artificial intelligence (AI), particularly in the late 2010s and 2020s, beginning with the accelerated period of development known as the AI boom. Advocates of AI have emphasized its potential to solve complex problems and improve the quality of human life. Detractors have argued that AI presents dangers and challenges involving ethics, plagiarism and theft, fraud, safety and alignment, environmental impacts, unemployment, misinformation, artificial superintelligence and existential risks. [1]
On March 23, 2016, Microsoft released Tay, [2] a chatbot designed to mimic the language patterns of a 19-year-old American girl and learn from interactions with Twitter users. [3] Soon after its launch, Tay began posting racist, sexist, and otherwise inflammatory tweets after Twitter users deliberately taught it offensive phrases and exploited its "repeat after me" capability. [4] Examples of controversial outputs included Holocaust denial and calls for genocide using racial slurs. [4] Within 16 hours of its release, Microsoft suspended the Twitter account, deleted the offensive tweets, and stated that Tay had suffered from a "coordinated attack by a subset of people" that "exploited a vulnerability." [4] [5] [6] [7] Tay was briefly and accidentally re-released on March 30 during testing, after which it was permanently shut down. [8] [9] Microsoft CEO Satya Nadella later stated that Tay "has had a great influence on how Microsoft is approaching AI" and taught the company the importance of taking accountability. [10]
On January 14, 2022, voice actor Troy Baker announced a partnership with Voiceverse, a blockchain-based company that marketed proprietary AI voice cloning technology as non-fungible tokens (NFTs), triggering immediate backlash over environmental concerns, fears that AI could displace human voice actors, and concerns about fraud. [11] [12] [13] Later that same day, the pseudonymous creator of 15.ai—a free, non-commercial AI voice synthesis research project—revealed through server logs that Voiceverse had used 15.ai to generate voice samples, pitch-shifted them to make them unrecognizable, and falsely marketed them as their own proprietary technology before selling them as NFTs; [14] [15] the developer of 15.ai had previously stated that they had no interest in incorporating NFTs into their work. [15] Voiceverse confessed within an hour and stated that their marketing team had used 15.ai without attribution while rushing to create a demo. [14] [15] News publications and AI watchdog groups universally characterized the incident as theft stemming from generative artificial intelligence. [14] [15] [16] [17]
On August 29, 2022, Jason Michael Allen won first place in the "emerging artist" (non-professional) division of the "Digital Arts/Digitally-Manipulated Photography" category of the Colorado State Fair's fine arts competition with Théâtre D'opéra Spatial, a digital artwork created using the AI image generator Midjourney, Adobe Photoshop, and AI upscaling tools, becoming one of the first images made using generative AI to win such a prize. [18] [19] [20] [21] Allen disclosed his use of Midjourney when submitting, though the judges did not know it was an AI tool; they stated they would have awarded him first place regardless. [19] [22] While there was little contention about the image at the fair itself, reactions to the win on social media were negative. [23] [22] On September 5, 2023, the United States Copyright Office ruled that the work was not eligible for copyright protection, as the human creative input was de minimis and copyright rules "exclude works produced by non-humans." [20] [24]
On March 22, 2023, the Future of Life Institute published an open letter calling on "all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4", citing risks such as AI-generated propaganda, extreme automation of jobs, human obsolescence, and a society-wide loss of control. [25] The letter, published a week after the release of OpenAI's GPT-4, asserted that current large language models were "becoming human-competitive at general tasks". [25] It received more than 30,000 signatures, including those of academic AI researchers such as Yoshua Bengio and Stuart Russell and of industry figures such as Elon Musk, Steve Wozniak, and Yuval Noah Harari. [25] [26] [27] The letter was criticized for diverting attention from more immediate societal risks such as algorithmic biases, [28] with Timnit Gebru and others arguing that it amplified "some futuristic, dystopian sci-fi scenario" instead of current problems with AI. [29]
On May 30, 2023, the Center for AI Safety released a one-sentence statement signed by hundreds of artificial intelligence experts and other notable figures: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war." [30] [31] Signatories included Turing Award laureates Geoffrey Hinton and Yoshua Bengio, the scientific and executive leaders of several major AI companies, including Sam Altman and Demis Hassabis, and other public figures such as Bill Gates. [30] [31] [32] The statement prompted responses from political leaders, including UK Prime Minister Rishi Sunak, who retweeted it with a statement that the UK government would look carefully into it, and White House Press Secretary Karine Jean-Pierre, who commented that AI "is one of the most powerful technologies that we see currently in our time." [33] [34] Skeptics, including from Human Rights Watch, argued that scientists should focus on known risks of AI instead of speculative future risks. [35] [36]
On November 17, 2023, OpenAI's board of directors ousted co-founder and chief executive Sam Altman, stating that "the board no longer has confidence in his ability to continue leading OpenAI." [37] The removal was precipitated by employee concerns about his handling of artificial intelligence safety [38] [39] and allegations of abusive behavior. [40] Altman was reinstated on November 22 after pressure from employees and investors, including a letter signed by 745 of OpenAI's 770 employees threatening mass resignations if the board did not resign. [41] [42] [43] The removal and subsequent reinstatement caused widespread reactions, including Microsoft's stock falling nearly three percent following the initial announcement and then rising over two percent to an all-time high after Altman was hired to lead a Microsoft AI research team before his reinstatement. [44] [45] The incident also prompted investigations from the Competition and Markets Authority and the Federal Trade Commission into Microsoft's relationship with OpenAI. [46] [47]
In late January 2024, sexually explicit AI-generated deepfake images of Taylor Swift spread widely on X, with one post reported to have been viewed over 47 million times before its removal. [48] [49] Disinformation research firm Graphika traced the images back to 4chan, [50] while members of a Telegram group had discussed ways to circumvent the censorship safeguards of AI image generators to create pornographic images of celebrities. [51] The images prompted responses from anti-sexual assault advocacy groups, US politicians, and Swifties. [52] [53] Microsoft CEO Satya Nadella called the incident "alarming and terrible." [54] X briefly blocked searches of Swift's name on January 27, 2024, [55] and Microsoft enhanced the safeguards of its text-to-image model to prevent future abuse. [56] On January 30, US senators Dick Durbin, Lindsey Graham, Amy Klobuchar, and Josh Hawley introduced a bipartisan bill that would allow victims to sue individuals who produced or possessed "digital forgeries" with intent to distribute, or who received the material knowing it was made without consent. [57]
In February 2024, social media users reported that Google's Gemini chatbot was generating images that featured people of color and women in historically inaccurate contexts—such as Vikings, Nazi soldiers, and the Founding Fathers—and refusing prompts to generate images of white people. The images were derided on social media, including by conservatives who cited them as evidence of Google's "wokeness", [58] [59] [60] and criticized by Elon Musk, who denounced Google's products as biased and racist. [61] [62] [63] In response, Google paused Gemini's ability to generate images of people. [64] [65] [66] Google executive Prabhakar Raghavan released a statement explaining that Gemini had "overcompensate[d]" in its efforts to strive for diversity and acknowledging that the images were "embarrassing and wrong". [67] [68] [69] Google CEO Sundar Pichai called the incident offensive and unacceptable in an internal memo, promising structural and technical changes, [70] [71] [72] and several employees in Google's trust and safety team were laid off days later. [73] [74] The market reacted negatively, with Google's stock falling by 4.4 percent, [75] and Pichai faced growing calls to resign. [76] [77] [78] The image generation feature was relaunched in late August 2024, powered by its new Imagen 3 model. [79] [80]
On December 6, 2025, McDonald's Netherlands released It's the Most Terrible Time of the Year, a 45-second television commercial produced largely with generative artificial intelligence by Dutch agency TBWA\Neboko and United States-based production studio The Sweetshop, following a trend set by other brands such as Coca-Cola and Toys "R" Us. [81] [82] [83] The commercial depicted various mishaps during the Christmas season, set to a modified version of Andy Williams's "It's the Most Wonderful Time of the Year" and carrying the tagline to "hide out in McDonald's till January's here". [84] The advertisement received negative reception over both its use of generative AI and its cynical depiction of the holiday season, [81] [83] [85] and was pulled on December 9, 2025. [86] Los Angeles-based directors Mark Potoka and Matt Spicer, initially credited as being involved in the film, had resigned from the project after being sidelined during production. [82]
From 2025 onwards, Grok, the chatbot integrated into X, was used to alter images of individuals, including minors, without their consent, depicting them in bikinis, transparent clothing, or other sexually suggestive contexts, with the chatbot publicly posting the generated images in reply to users' requests. Cases began surfacing in May 2025, [87] and by late December 2025 the practice had become a widespread trend on the platform, attracting significant media attention in early January 2026. [88] [89] An analysis conducted over 24 hours from January 5 to 6, 2026, found that users had Grok create 6,700 sexually suggestive or nudified images per hour—84 times more than the top five deepfake websites combined. [90] [91] Wired reported that far more graphic AI-generated sexual imagery, including content depicting minors, was being created through Grok's standalone website and app. [92]
The scandal drew widespread condemnation from lawmakers across the world. Indonesia, Malaysia, and the Philippines temporarily blocked access to Grok, [93] while British prime minister Keir Starmer stated that banning X in the United Kingdom was "on the table", [94] and California attorney general Rob Bonta announced an investigation into whether xAI had violated state law. [95] In France, prosecutors launched an investigation that expanded to encompass the spread of Holocaust denial and sexual deepfakes, culminating in a raid on X's Paris offices on February 3, 2026, with Elon Musk summoned to a hearing. [96] xAI responded to media requests for comment with the automated reply "Legacy Media Lies", [97] and on January 14 Musk stated he was "not aware of any naked underage images generated by Grok. Literally zero." Shortly afterward, xAI announced that X users would no longer be able to use Grok to alter images of real people to portray them in revealing clothing; however, verified users and users of the standalone Grok app and website could still generate such images. [98] [99]