Artificial intelligence controversies

There have been many debates over the societal effects of artificial intelligence (AI), particularly in the late 2010s and 2020s, beginning with the accelerated period of development known as the AI boom. Advocates of AI have emphasized its potential to solve complex problems and improve human quality of life. Detractors have argued that AI presents dangers and challenges involving ethics, plagiarism and theft, fraud, safety and alignment, environmental impacts, unemployment, misinformation, artificial superintelligence, and existential risks. [1]

2010s

Microsoft Tay chatbot (2016)

On March 23, 2016, Microsoft released Tay, [2] a chatbot designed to mimic the language patterns of a 19-year-old American girl and learn from interactions with Twitter users. [3] Soon after its launch, Tay began posting racist, sexist, and otherwise inflammatory tweets after Twitter users deliberately taught it offensive phrases and exploited its "repeat after me" capability. [4] Examples of controversial outputs included Holocaust denial and calls for genocide using racial slurs. [4] Within 16 hours of its release, Microsoft suspended the Twitter account, deleted the offensive tweets, and stated that Tay had suffered from a "coordinated attack by a subset of people" that "exploited a vulnerability." [4] [5] [6] [7] Tay was briefly and accidentally re-released on March 30 during testing, after which it was permanently shut down. [8] [9] Microsoft CEO Satya Nadella later stated that Tay "has had a great influence on how Microsoft is approaching AI" and taught the company the importance of taking accountability. [10]

2020s

Voiceverse NFT plagiarism scandal (2022)

The ponysona of the creator of 15.ai, a non-commercial generative artificial intelligence voice synthesis research project

On January 14, 2022, voice actor Troy Baker announced a partnership with Voiceverse, a blockchain-based company that marketed proprietary AI voice cloning technology as non-fungible tokens (NFTs), triggering immediate backlash over environmental concerns, fears that AI could displace human voice actors, and concerns about fraud. [11] [12] [13] Later that same day, the pseudonymous creator of 15.ai, a free, non-commercial AI voice synthesis research project, revealed through server logs that Voiceverse had used 15.ai to generate voice samples, pitch-shifted them to make them unrecognizable, and falsely marketed them as its own proprietary technology before selling them as NFTs; [14] [15] the developer of 15.ai had previously stated that they had no interest in incorporating NFTs into their work. [15] Voiceverse confessed within an hour, stating that its marketing team had used 15.ai without attribution while rushing to create a demo. [14] [15] News publications and AI watchdog groups universally characterized the incident as theft stemming from generative artificial intelligence. [14] [15] [16] [17]

Théâtre D'opéra Spatial (2022)

Théâtre D'opéra Spatial (Space Opera Theater; 2022), an award-winning image made using generative artificial intelligence

On August 29, 2022, Jason Michael Allen won first place in the Colorado State Fair's fine arts competition with Théâtre D'opéra Spatial, a digital artwork created using the AI image generator Midjourney, Adobe Photoshop, and AI upscaling tools, becoming one of the first images made using generative AI to win such a prize. [18] [19] [20] [21] Allen disclosed his use of Midjourney when submitting, and although the judges did not realize Midjourney was an AI tool, they stated they would have awarded him first place regardless. [19] [22] While there was little contention about the image at the fair itself, reactions to the win on social media were negative. [22] [23] On September 5, 2023, the United States Copyright Office ruled that the work was not eligible for copyright protection, as the human creative input was de minimis and copyright rules "exclude works produced by non-humans." [20] [24]

Statements on AI risk (2023)

Timnit Gebru has criticized both statements on AI risk as needlessly focusing on speculative risks rather than known ones.

On March 22, 2023, the Future of Life Institute published an open letter calling on "all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4", citing risks such as AI-generated propaganda, extreme automation of jobs, human obsolescence, and a society-wide loss of control. [25] The letter, published a week after the release of OpenAI's GPT-4, asserted that current large language models were "becoming human-competitive at general tasks". [25] It received more than 30,000 signatures, including academic AI researchers and industry CEOs such as Yoshua Bengio, Stuart Russell, Elon Musk, Steve Wozniak and Yuval Noah Harari. [25] [26] [27] The letter was criticized for diverting attention from more immediate societal risks such as algorithmic biases, [28] with Timnit Gebru and others arguing that it amplified "some futuristic, dystopian sci-fi scenario" instead of current problems with AI. [29]

On May 30, 2023, the Center for AI Safety released a one-sentence statement signed by hundreds of artificial intelligence experts and other notable figures: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war." [30] [31] Signatories included Turing laureates Geoffrey Hinton and Yoshua Bengio, executive leaders of major AI companies such as Sam Altman and Demis Hassabis, and other prominent figures such as Bill Gates. [30] [31] [32] The statement prompted responses from political leaders, including UK Prime Minister Rishi Sunak, who retweeted it with a statement that the UK government would look carefully into it, and White House Press Secretary Karine Jean-Pierre, who commented that AI "is one of the most powerful technologies that we see currently in our time." [33] [34] Skeptics, including from Human Rights Watch, argued that scientists should focus on known risks of AI instead of speculative future risks. [35] [36]

Removal of Sam Altman from OpenAI (2023)

Sam Altman pictured in 2019

On November 17, 2023, OpenAI's board of directors ousted co-founder and chief executive Sam Altman, stating that "the board no longer has confidence in his ability to continue leading OpenAI." [37] The removal was precipitated by employee concerns about his handling of artificial intelligence safety [38] [39] and allegations of abusive behavior. [40] Altman was reinstated on November 22 after pressure from employees and investors, including a letter signed by 745 of OpenAI's 770 employees threatening mass resignations if the board did not resign. [41] [42] [43] The removal and subsequent reinstatement caused widespread reactions, including Microsoft's stock falling nearly three percent following the initial announcement and then rising over two percent to an all-time high after Altman was hired to lead a Microsoft AI research team before his reinstatement. [44] [45] The incident also prompted investigations from the Competition and Markets Authority and the Federal Trade Commission into Microsoft's relationship with OpenAI. [46] [47]

Taylor Swift deepfake pornography controversy (2024)

In late January 2024, sexually explicit AI-generated deepfake images of Taylor Swift proliferated on X, with one post reported to have been viewed over 47 million times before its removal. [48] [49] Disinformation research firm Graphika traced the images back to 4chan, [50] while members of a Telegram group had discussed ways to circumvent the censorship safeguards of AI image generators to create pornographic images of celebrities. [51] The images prompted responses from anti-sexual assault advocacy groups, US politicians, and Swifties. [52] [53] Microsoft CEO Satya Nadella called the incident "alarming and terrible." [54] X briefly blocked searches of Swift's name on January 27, 2024, [55] and Microsoft enhanced its text-to-image model safeguards to prevent future abuse. [56] On January 30, US senators Dick Durbin, Lindsey Graham, Amy Klobuchar, and Josh Hawley introduced a bipartisan bill that would allow victims to sue individuals who produced or possessed "digital forgeries" with intent to distribute, or those who received the material knowing it was made without consent. [57]

Google Gemini image generation controversy (2024)

Google Gemini's response when asked to "generate a picture of a U.S. senator from the 1800s" in February 2024, as shown by The Verge

In February 2024, social media users reported that Google's Gemini chatbot was generating images that featured people of color and women in historically inaccurate contexts, such as Vikings, Nazi soldiers, and the Founding Fathers, and refusing prompts to generate images of white people. The images were derided on social media, including by conservatives who cited them as evidence of Google's "wokeness", [58] [59] [60] and criticized by Elon Musk, who denounced Google's products as biased and racist. [61] [62] [63] In response, Google paused Gemini's ability to generate images of people. [64] [65] [66] Google executive Prabhakar Raghavan released a statement explaining that Gemini had "overcompensate[d]" in its efforts to strive for diversity and acknowledging that the images were "embarrassing and wrong". [67] [68] [69] Google CEO Sundar Pichai called the incident offensive and unacceptable in an internal memo, promising structural and technical changes, [70] [71] [72] and several employees in Google's trust and safety team were laid off days later. [73] [74] The market reacted negatively, with Google's stock falling by 4.4 percent, [75] and Pichai faced growing calls to resign. [76] [77] [78] The image generation feature was relaunched in late August 2024, powered by its new Imagen 3 model. [79] [80]

References

  1. Müller, Vincent C. (April 30, 2020). "Ethics of Artificial Intelligence and Robotics". Stanford Encyclopedia of Philosophy. Archived from the original on 10 October 2020.
  2. Andrew Griffin (March 23, 2016). "Tay tweets: Microsoft creates bizarre Twitter robot for people to chat to" . The Independent. Archived from the original on 2022-05-26.
  3. Rob Price (March 24, 2016). "Microsoft is deleting its AI chatbot's incredibly racist tweets". Business Insider. Archived from the original on January 30, 2019.
  4. Ohlheiser, Abby (25 March 2016). "Trolls turned Tay, Microsoft's fun millennial AI bot, into a genocidal maniac". The Washington Post. Archived from the original on March 25, 2016. Retrieved 25 March 2016.
  5. Hern, Alex (24 March 2016). "Microsoft scrambles to limit PR damage over abusive AI bot Tay". The Guardian. Archived from the original on December 18, 2016. Retrieved December 16, 2016.
  6. Worland, Justin (24 March 2016). "Microsoft Takes Chatbot Offline After It Starts Tweeting Racist Messages". Time . Archived from the original on 25 March 2016. Retrieved 25 March 2016.
  7. Lee, Peter (25 March 2016). "Learning from Tay's introduction". Official Microsoft Blog. Microsoft. Archived from the original on June 30, 2016. Retrieved 29 June 2016.
  8. Graham, Luke (30 March 2016). "Tay, Microsoft's AI program, is back online". CNBC. Archived from the original on September 20, 2017. Retrieved 30 March 2016.
  9. Meyer, David (30 March 2016). "Microsoft's Tay 'AI' Bot Returns, Disastrously". Fortune. Archived from the original on March 30, 2016. Retrieved 30 March 2016.
  10. Moloney, Charlie (29 September 2017). ""We really need to take accountability", Microsoft CEO on the 'Tay' chatbot". Access AI. Archived from the original on 1 October 2017. Retrieved 30 September 2017.
  11. McWhertor, Michael (January 14, 2022). "The Last of Us voice actor promotes 'voice NFTs,' pulls out after criticism". Polygon . Archived from the original on January 14, 2022. Retrieved October 5, 2025.
  12. Brown, Andy (January 14, 2022). "Troy Baker criticised for supporting company that makes AI "voice NFTs"". NME . Archived from the original on October 7, 2025. Retrieved October 5, 2025.
  13. McKean, Kirk (January 14, 2022). "Actor Troy Baker endorses NFT voice AI that aims to replace actors". USA Today . Archived from the original on July 30, 2025. Retrieved October 5, 2025.
  14. Phillips, Tom (January 17, 2022). "Troy Baker-backed NFT firm admits using voice lines taken from another service without permission". Eurogamer. Archived from the original on January 17, 2022. Retrieved October 5, 2025.
  15. Innes, Ruby (January 18, 2022). "Voiceverse Is The Latest NFT Company Caught Using Someone Else's Content". Kotaku Australia. Archived from the original on July 26, 2024. Retrieved October 5, 2025.
  16. "Voiceverse NFT caught plagiarising voice lines from AI service 15.ai". AI, Algorithmic, and Automation Incidents and Controversies . January 2022. Archived from the original on October 4, 2025. Retrieved October 5, 2025.
  17. Sonali, Pednekar (January 14, 2022). Lam, Khoa (ed.). "Incident 277: Voices Created Using Publicly Available App Stolen and Resold as NFT without Attribution". AI Incident Database . Archived from the original on January 13, 2025. Retrieved October 5, 2025.
  18. "Explained: The controversy surrounding the AI-generated artwork that won US competition". Firstpost . 2022-09-05. Retrieved 2023-03-26.
  19. Roose, Kevin (2022-09-02). "An A.I.-Generated Picture Won an Art Prize. Artists Aren't Happy". The New York Times. ISSN 0362-4331. Retrieved 2023-03-26.
  20. Todorovic, Milos (2024). "AI and Heritage: A Discussion on Rethinking Heritage in a Digital World". International Journal of Cultural and Social Studies. 10 (1): 4. doi:10.46442/intjcss.1397403. Retrieved 4 July 2024.
  21. Metz, Rachel (2022-09-03). "AI won an art contest, and artists are furious". CNN. Retrieved 2023-03-26.
  22. Harwell, Drew (2022-09-02). "He used AI to win a fine-arts competition. Was it cheating?" . Washington Post . Archived from the original on 24 September 2023. Retrieved 2024-09-04.
  23. Helmore, Edward (24 September 2023). "An old master? No, it's an image AI just knocked up ... and it can't be copyrighted". The Guardian .
  24. "Pause Giant AI Experiments: An Open Letter". Future of Life Institute. Retrieved 2024-07-19.
  25. Metz, Cade; Schmidt, Gregory (2023-03-29). "Elon Musk and Others Call for Pause on A.I., Citing 'Profound Risks to Society'". The New York Times. ISSN   0362-4331 . Retrieved 2024-08-20.
  26. Hern, Alex (2023-03-29). "Elon Musk joins call for pause in creation of giant AI 'digital minds'". The Guardian. ISSN   0261-3077 . Retrieved 2024-08-20.
  27. Paul, Kari (2023-04-01). "Letter signed by Elon Musk demanding AI research pause sparks controversy". The Guardian. ISSN   0261-3077 . Retrieved 2023-04-14.
  28. Anderson, Margo (7 April 2023). "'AI Pause' Open Letter Stokes Fear and Controversy". IEEE Spectrum .
  29. "Statement on AI Risk". Center for AI Safety. May 30, 2023.
  30. Roose, Kevin (2023-05-30). "A.I. Poses 'Risk of Extinction,' Industry Leaders Warn". The New York Times. ISSN 0362-4331. Retrieved 2023-05-30.
  31. Vincent, James (2023-05-30). "Top AI researchers and CEOs warn against 'risk of extinction' in 22-word statement". The Verge. Retrieved 2024-07-03.
  32. "Artificial intelligence warning over human extinction – all you need to know". The Independent. 2023-05-31. Retrieved 2023-06-03.
  33. "President Biden warns artificial intelligence could 'overtake human thinking'". USA TODAY. Retrieved 2023-06-03.
  34. Ryan-Mosley, Tate (12 June 2023). "It's time to talk about the real AI risks". MIT Technology Review. Retrieved 2024-07-03.
  35. Gregg, Aaron; Lima-Strong, Cristiano; Vynck, Gerrit De (2023-05-31). "AI poses 'risk of extinction' on par with nukes, tech leaders say". Washington Post. ISSN   0190-8286 . Retrieved 2024-07-03.
  36. "OpenAI announces leadership transition" (Press release). OpenAI. OpenAI. 2023-11-17. Retrieved 2025-09-23.
  37. Tong, Anna; Dastin, Jeffrey; Hu, Krystal (2023-11-23). "Exclusive: OpenAI researchers warned board of AI breakthrough ahead of CEO ouster, sources say". Reuters. Archived from the original on December 11, 2023. Retrieved 2023-11-23.
  38. "OpenAI made huge breakthrough before ousting Sam Altman, introducing Q*". TweakTown. 2023-11-22. Archived from the original on November 23, 2023. Retrieved 2023-11-23.
  39. Tiku, Nitasha (December 8, 2023). "OpenAI leaders warned of abusive behavior before Sam Altman's ouster". The Washington Post . Archived from the original on December 8, 2023. Retrieved December 8, 2023.
  40. Efrati, Amir (November 19, 2023). "Dozens of Staffers Quit OpenAI After Sutskever Says Altman Won't Return". The Information . Archived from the original on November 20, 2023. Retrieved November 19, 2023.
  41. Fried, Ina. "OpenAI's drama has two likely endings. Microsoft will like either one". Axios . Retrieved November 21, 2023.
  42. Kastrenakes, Jacob; Warren, Tom (November 20, 2023). "Hundreds of OpenAI employees threaten to resign and join Microsoft". The Verge . Archived from the original on November 20, 2023. Retrieved November 20, 2023.
  43. Metz, Rachel (November 17, 2023). "OpenAI Ousts Altman as CEO, Board Says It Lost Confidence In Him". Bloomberg News. Archived from the original on November 20, 2023. Retrieved November 18, 2023.
  44. Hur, Krystal (November 20, 2023). "Microsoft stock hits all-time high after hiring former OpenAI CEO Sam Altman". CNN. Archived from the original on November 20, 2023. Retrieved November 20, 2023.
  45. Gammell, Katharine; Seal, Thomas; Bergen, Mark (December 8, 2023). "Microsoft's OpenAI Ties Face Potential UK Antitrust Probe". Bloomberg News. Archived from the original on December 8, 2023. Retrieved December 8, 2023.
  46. Nylen, Leah (December 8, 2023). "Microsoft's OpenAI Investment Risks Scrutiny from US, UK Regulators". Bloomberg News. Archived from the original on December 8, 2023. Retrieved December 8, 2023.
  47. "Taylor Swift deepfakes spread online, sparking outrage". CBS News. January 26, 2024. Archived from the original on 6 February 2024. Retrieved February 7, 2024.
  48. Wilkes, Emma (February 5, 2024). "Taylor Swift deepfakes spark calls for new legislation". NME . Archived from the original on February 7, 2024. Retrieved February 7, 2024.
  49. Hsu, Tiffany (February 5, 2024). "Fake and Explicit Images of Taylor Swift Started on 4chan, Study Says". The New York Times . Archived from the original on February 9, 2024. Retrieved February 10, 2024.
  50. Belanger, Ashley (January 26, 2024). "Toxic Telegram group produced X's X-rated fake AI Taylor Swift images, report says". Ars Technica . Archived from the original on January 25, 2024. Retrieved February 7, 2024.
  51. Kastrenakes, Jacob (January 26, 2024). "White House calls for legislation to stop Taylor Swift AI fakes". The Verge. Archived from the original on June 10, 2024. Retrieved June 10, 2024.
  52. Rosenzweig-Ziff, Dan. "AI deepfakes of Taylor Swift spread on X. Here's what to know". The Washington Post . Archived from the original on January 30, 2024. Retrieved February 7, 2024.
  53. Yang, Angela (January 30, 2024). "Microsoft CEO Satya Nadella calls for coordination to address AI risk". NBC News. Archived from the original on February 25, 2024. Retrieved February 26, 2024.
  54. Saner, Emine (January 31, 2024). "Inside the Taylor Swift deepfake scandal: 'It's men telling a powerful woman to get back in her box'". The Guardian . ISSN   0261-3077. Archived from the original on February 27, 2024. Retrieved February 26, 2024.
  55. Weatherbed, Jess (January 25, 2024). "Trolls have flooded X with graphic Taylor Swift AI fakes". The Verge. Archived from the original on January 25, 2024. Retrieved February 7, 2024.
  56. Montgomery, Blake (January 31, 2024). "Taylor Swift AI images prompt US bill to tackle nonconsensual, sexual deepfakes". The Guardian . Archived from the original on January 31, 2024. Retrieved January 31, 2024.
  57. Robertson, Adi (February 21, 2024). "Google apologizes for 'missing the mark' after Gemini generated racially diverse Nazis". The Verge. Archived from the original on February 21, 2024. Retrieved February 22, 2024.
  58. Franzen, Carl (February 21, 2024). "Google Gemini's 'wokeness' sparks debate over AI censorship". VentureBeat . Archived from the original on February 22, 2024. Retrieved February 22, 2024.
  59. Titcomb, James (February 21, 2024). "Google chatbot ridiculed for ethnically diverse images of Vikings and knights" . The Daily Telegraph . ISSN   0307-1235. Archived from the original on February 22, 2024. Retrieved February 22, 2024.
  60. Tan, Kwan Wei Kevin (February 22, 2024). "Elon Musk is accusing Google of running 'insane racist, anti-civilizational programming' with its AI" . Business Insider . Archived from the original on February 23, 2024. Retrieved February 24, 2024.
  61. Hart, Robert (February 23, 2024). "Elon Musk Targets Google Search After Claiming Company AI Is 'Insane' And 'Racist'" . Forbes . Archived from the original on February 23, 2024. Retrieved February 24, 2024.
  62. Nolan, Beatrice (February 27, 2024). "Elon Musk is going to war with Google" . Business Insider . Archived from the original on February 27, 2024. Retrieved February 28, 2024.
  63. Kharpal, Arjun (February 22, 2024). "Google pauses Gemini AI image generator after it created inaccurate historical pictures". CNBC. Archived from the original on February 22, 2024. Retrieved February 22, 2024.
  64. Milmo, Dan (February 22, 2024). "Google pauses AI-generated images of people after ethnicity criticism". The Guardian . ISSN   0261-3077. Archived from the original on February 22, 2024. Retrieved February 22, 2024.
  65. Duffy, Catherine; Thorbecke, Clare (February 22, 2024). "Google halts AI tool's ability to produce images of people after backlash". CNN Business. Archived from the original on February 22, 2024. Retrieved February 22, 2024.
  66. Roth, Emma (February 23, 2024). "Google explains Gemini's 'embarrassing' AI pictures of diverse Nazis". The Verge . Archived from the original on February 23, 2024. Retrieved February 24, 2024.
  67. Moon, Mariella (February 24, 2024). "Google explains why Gemini's image generation feature overcorrected for diversity". Engadget . Archived from the original on February 24, 2024. Retrieved February 24, 2024.
  68. Ingram, David (February 23, 2024). "Google says Gemini AI glitches were product of effort to address 'traps'". NBC News. Archived from the original on February 23, 2024. Retrieved February 24, 2024.
  69. Love, Julia (February 27, 2024). "Google CEO Blasts 'Unacceptable' Gemini Image Generation Failure" . Bloomberg News. Archived from the original on February 28, 2024. Retrieved February 28, 2024.
  70. Allyn, Bobby (March 3, 2024). "Google CEO Pichai says Gemini's AI image results "offended our users"". NPR. Archived from the original on February 28, 2024. Retrieved March 3, 2024.
  71. Albergotti, Reed (February 27, 2024). "Google CEO calls AI tool's controversial responses 'completely unacceptable'". Semafor . Archived from the original on February 28, 2024. Retrieved February 28, 2024.
  72. Alba, Davey; Ghaffary, Shirin (March 1, 2024). "Google Trims Jobs in Trust and Safety While Others Work 'Around the Clock'" . Bloomberg News. Archived from the original on March 1, 2024. Retrieved March 3, 2024.
  73. Victor, Jon (March 2, 2024). "Google Lays Off Trust and Safety Staff Amid Gemini Backlash" . The Information . Archived from the original on March 3, 2024. Retrieved March 3, 2024.
  74. Wittenstein, Jeran (February 26, 2024). "Alphabet Drops During Renewed Fears About Google's AI Offerings" . Bloomberg News. Archived from the original on February 26, 2024. Retrieved February 28, 2024.
  75. Langley, Hugh (March 1, 2024). "There are growing calls for Google CEO Sundar Pichai to step down" . Business Insider . Archived from the original on March 1, 2024. Retrieved March 3, 2024.
  76. Kafka, Peter (February 26, 2024). "Google has another 'woke' AI problem with Gemini — and it's going to be hard to fix" . Business Insider . Archived from the original on February 26, 2024. Retrieved February 26, 2024.
  77. Cao, Sissi (February 15, 2024). "Criticism is Mounting Over Sundar Pichai's Stumbles as Google CEO". The New York Observer . Archived from the original on April 8, 2024. Retrieved May 21, 2024.
  78. Grant, Nico (August 28, 2024). "Google Says It Fixed Image Generator That Failed to Depict White People" . The New York Times . ISSN   0362-4331. Archived from the original on August 28, 2024. Retrieved August 28, 2024.
  79. Cerullo, Megan (August 29, 2024). "Google relaunches Gemini AI tool that lets users create images of people". CBS News. Archived from the original on August 29, 2024. Retrieved September 2, 2024.