Taylor Swift deepfake pornography controversy

In late January 2024, sexually explicit AI-generated deepfake images of American musician Taylor Swift proliferated on the social media platforms 4chan and X (formerly Twitter). Several artificial images of Swift of a sexual or violent nature spread quickly, [1] with one post reported to have been viewed over 47 million times before its eventual removal. [2] The images led Microsoft to enhance Microsoft Designer's text-to-image model to prevent future abuse. [3] The images also prompted responses from anti-sexual assault advocacy groups, US politicians, Swifties, and Microsoft CEO Satya Nadella, among others, and it has been suggested that Swift's influence could result in new legislation regarding the creation of deepfake pornography. [4]

Background

American musician Taylor Swift has reportedly been the target of misogyny and slut-shaming throughout her career. [5] [6] American technology corporation Microsoft offers AI image creators called Microsoft Designer and Bing Image Creator, which employ censorship safeguards to prevent users from generating unsafe or objectionable content. Members of a Telegram group discussed ways to circumvent these censors to create pornographic images of celebrities. [7] Graphika, a disinformation research firm, traced the creation of the images back to a 4chan community. [8] [9]
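The reporting that users traded tips for bypassing these filters hints at how such safeguards typically work and where they fall short. As a rough illustrative sketch only (Microsoft's actual safeguards are not public, and every name and term below is hypothetical), a prompt-level moderation gate might combine an unsafe-term blocklist with a list of protected real people and refuse any request that pairs the two:

    # Conceptual sketch of a prompt-moderation gate for a text-to-image service.
    # Illustrative only; it does not describe Microsoft Designer's real safeguards.
    from dataclasses import dataclass

    BLOCKED_TERMS = {"nude", "explicit", "sexual"}   # hypothetical unsafe-term blocklist
    PROTECTED_NAMES = {"example celebrity"}          # placeholder for a curated list of real people

    @dataclass
    class ModerationResult:
        allowed: bool
        reason: str = ""

    def moderate_prompt(prompt: str) -> ModerationResult:
        """Reject prompts that request unsafe content, especially involving a real person."""
        text = prompt.lower()
        names_hit = any(name in text for name in PROTECTED_NAMES)
        unsafe_hit = any(term in text for term in BLOCKED_TERMS)
        if unsafe_hit and names_hit:
            return ModerationResult(False, "sexual content involving a real person")
        if unsafe_hit:
            return ModerationResult(False, "sexually explicit request")
        return ModerationResult(True)

    if __name__ == "__main__":
        print(moderate_prompt("a watercolor painting of a lighthouse"))   # allowed
        print(moderate_prompt("explicit photo of example celebrity"))     # blocked

Keyword checks of this kind are easy to defeat with misspellings or coded language, which is consistent with reports that users shared workarounds; production systems therefore typically layer learned classifiers over both the prompt and the generated image before returning results.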

Reactions

For some, the deepfake images of Swift immediately became a source of controversy and outrage. Other internet users found them humorous and absurd, such as an image that appeared to show Swift about to engage in sexual intercourse with Oscar the Grouch. The images drew condemnation from the Rape, Abuse & Incest National Network and SAG-AFTRA. The latter group, which had been following issues regarding AI-generated media prior to Swift's involvement, considered the images "upsetting, harmful and deeply concerning." [10] Microsoft CEO Satya Nadella, whose company's products were believed to have been used to make the images, described the controversy as "alarming and terrible", further stating his belief that "we all benefit when the online world is a safe world." [11] [12] The content also sparked debates about race relations, with some questioning whether it was racist to take offense at deepfaked images depicting Swift as ready to engage in sexual acts with the entire Kansas City Chiefs roster, most of whom are African American.

Taylor Swift

A source close to Swift told the Daily Mail that she was considering legal action, saying, "Whether or not legal action will be taken is being decided, but there is one thing that is clear: These fake AI-generated images are abusive, offensive, exploitative, and done without Taylor's consent and/or knowledge." [13] [14]

Politicians

White House press secretary Karine Jean-Pierre expressed concern over the fake images, deeming them "alarming", and emphasized the obligation of social media platforms to curb the dissemination of such misinformation. [15] Several American politicians called for legislation against AI-generated pornography. [16] Later in the month, a bipartisan bill was introduced by US senators Dick Durbin, Lindsey Graham, Amy Klobuchar, and Josh Hawley. The bill would allow victims to sue individuals who produced or possessed "digital forgeries" with intent to distribute them, or who received the material knowing it was made without consent. [17] In February 2024, the European Union struck a deal on a similar bill that would criminalize deepfake pornography, as well as online harassment and revenge porn, by mid-2027. [18]

Social media platforms

X responded to the sharing of these images on its platform by stating that it would suspend accounts that participated in their spread. Despite this, the photos continued to be reshared by accounts on X and spread to other platforms, including Instagram and Reddit. [19] X enforces a "synthetic and manipulated media policy", whose efficacy has been criticized. [20] [21] The platform briefly blocked searches of Swift's name on January 27, 2024, [22] reinstating them two days later. [23]

Swifties

Fans of Taylor Swift, known as Swifties, responded to the circulation of these images by pushing the hashtag #ProtectTaylorSwift to trend on X. They also flooded other hashtags related to the images with more positive images and videos of her live performances. [24]

Cultural significance

Deepfake pornography has remained highly controversial and has affected figures ranging from other celebrities to ordinary people, most of whom are women. [25] Journalists have opined that the involvement of a public figure as prominent as Swift in the dissemination of AI-generated pornography could bring public awareness and political reform to the issue. [26]

With the rampant increase of deepfake pornography, questions regarding consent and privacy have emerged. Countless individuals who have been directly affected by the nonconsensual use of deepfake pornography, many of them women, are left questioning what actions, if any, can be taken to prevent this exploitation or at least to have the nonconsensual content removed from public platforms. [27] While many states have laws and punishments in place regarding the creation or solicitation of revenge porn, only four (California, New York, Georgia, and Virginia) have laws concerning nonconsensual deepfakes. [27] Deepfakes currently exist in both a legal and an ethical gray area when it comes to issues of consent.

Women

Women are disproportionately targeted as victims of the creation and public distribution of deepfake pornography. [28] This phenomenon puts women and female artists at risk of experiencing online violence at much higher rates than in a pre-AI society. [29] Women are three times more likely to be victims of cyber violence than men and two times more likely to be victims of severe cyber abuse, which includes AI-generated revenge porn. [29] It has been reported that 96% of the deepfakes that have been created are non-consensual sexual deepfakes, and that 99% of those feature women. [30]

Related Research Articles

Rape pornography is a subgenre of pornography involving the description or depiction of rape. Such pornography either involves simulated rape, wherein sexually consenting adults feign rape, or it involves actual rape. Victims of actual rape may be coerced to feign consent such that the pornography produced deceptively appears as simulated rape or non-rape pornography. The depiction of rape in non-pornographic media is not considered rape pornography. Simulated scenes of rape and other forms of sexual violence have appeared in mainstream cinema, including rape and revenge films, almost since its advent.

Human image synthesis

Human image synthesis is technology that can be applied to make believable and even photorealistic renditions of human likenesses, moving or still. It has effectively existed since the early 2000s. Many films using computer-generated imagery have featured synthetic images of human-like characters digitally composited onto real or other simulated film material. Towards the end of the 2010s, deep learning artificial intelligence was applied to synthesize images and video that look like humans, without the need for human assistance once the training phase has been completed, whereas the old-school 7D route required massive amounts of human work.

Pornography

Pornography is sexual subject material such as a picture, video, text, or audio that is intended for sexual arousal. Made for consumption by adults, pornographic depictions have evolved from cave paintings, some forty millennia ago, to modern virtual reality presentations. A general distinction of adults-only sexual content is made, classifying it as pornography or erotica.

Legal frameworks around fictional pornography depicting minors vary depending on country and nature of the material involved. Laws against production, distribution, and consumption of child pornography generally separate images into three categories: real, pseudo, and virtual. Pseudo-photographic child pornography is produced by digitally manipulating non-sexual images of real minors to make pornographic material. Virtual child pornography depicts purely fictional characters. "Fictional pornography depicting minors," as covered in this article, includes these latter two categories, whose legalities vary by jurisdiction, and often differ with each other and with the legality of real child pornography.

Sexting is sending, receiving, or forwarding sexually explicit messages, photographs, or videos, primarily between mobile phones. It may also include the use of a computer or any digital device. The term was first popularized early in the 21st century and is a portmanteau of sex and texting, where the latter is meant in the wide sense of sending a text possibly with images. Sexting is not an isolated phenomenon but one of many different types of sexual interaction in digital contexts that is related to sexual arousal.

Child pornography is erotic material that depicts persons under the designated age of majority. The precise characteristics of what constitutes child pornography varies by criminal jurisdiction.

Rule 34 is an internet meme which claims that some form of pornography exists concerning every possible topic. The concept is commonly depicted as fan art of normally non-erotic subjects engaging in sexual activity. It can also include writings, animations, images, GIFs and any other form of media to which the internet provides opportunities for proliferation and redistribution.

Revenge porn is the distribution of sexually explicit images or videos of individuals without their consent, with the punitive intention to create public humiliation or character assassination out of revenge against the victim. The material may have been made by an ex-partner from an intimate relationship with the knowledge and consent of the subject at the time, or it may have been made without their knowledge. The subject may have experienced sexual violence during the recording of the material, in some cases facilitated by psychoactive chemicals such as date rape drugs which also cause a reduced sense of pain and involvement in the sexual act, dissociative effects and amnesia.

xHamster, stylized as XHAMSTER, is a pornographic video sharing and streaming website, based in Limassol, Cyprus. xHamster serves user-submitted pornographic videos, webcam models, pornographic photographs, and erotic literature, and incorporates social networking features. xHamster was founded in 2007. As of August 2024, it is the 33rd-most-visited website in the world, and the third-most-visited adult website, after Pornhub and XVideos.

The Intimate Privacy Protection Act (IPPA) is a proposed amendment to Title 18 of the United States Code that would make it a crime to distribute nonconsensual pornography. The bill would "provide that it is unlawful to knowingly distribute a private, visual depiction of a person’s intimate parts or of a person engaging in sexually explicit conduct, with reckless disregard for the person's lack of consent to the distribution." The bill was introduced by Representative Jackie Speier in 2016.

Celeb Jihad is a website known for sharing leaked private videos and photos as well as faked ones of celebrities as a form of jihad satire. The Daily Beast describes it as a "satirical celebrity gossip website."

Deepfake

Deepfakes are images, videos, or audio which are edited or generated using artificial intelligence tools, and which may depict real or non-existent people. They are a type of synthetic media and a modern form of media prank.

Cyberflashing

Cyberflashing involves sending obscene pictures to strangers online, often done through Bluetooth or AirDrop transfers between devices.

Fake nude photography is the creation of nude photographs designed to appear as genuine nudes of an individual. The motivations for the creation of these modified photographs include curiosity, sexual gratification, the stigmatization or embarrassment of the subject, and commercial gain, such as through the sale of the photographs via pornographic websites. Fakes can be created using image editing software or through machine learning. Fake images created using the latter method are called deepfakes.

Synthetic media is a catch-all term for the artificial production, manipulation, and modification of data and media by automated means, especially through the use of artificial intelligence algorithms, such as for the purpose of misleading people or changing an original meaning. Synthetic media as a field has grown rapidly since the creation of generative adversarial networks, primarily through the rise of deepfakes as well as music synthesis, text generation, human image synthesis, speech synthesis, and more. Though experts use the term "synthetic media", individual methods such as deepfakes and text synthesis are sometimes not referred to as such by the media but instead by their respective terminology. Significant attention arose towards the field of synthetic media starting in 2017, when Motherboard reported on the emergence of AI-altered pornographic videos that inserted the faces of famous actresses. Potential hazards of synthetic media include the spread of misinformation, further loss of trust in institutions such as media and government, the mass automation of creative and journalistic jobs, and a retreat into AI-generated fantasy worlds. Synthetic media is an applied form of artificial imagination.

Deepfake pornography, or simply fake pornography, is a type of synthetic pornography that is created via altering already-existing photographs or video by applying deepfake technology to the images of the participants. The use of deepfake pornography has sparked controversy because it involves the making and sharing of realistic videos featuring non-consenting individuals, typically female celebrities, and is sometimes used for revenge porn. Efforts are being made to combat these ethical concerns through legislation and technology-based solutions.

Generative artificial intelligence

Generative artificial intelligence is a subset of artificial intelligence that uses generative models to produce text, images, videos, or other forms of data. These models learn the underlying patterns and structures of their training data and use them to produce new data based on the input, which often comes in the form of natural language prompts.

AI boom

The AI boom is an ongoing period of rapid progress in the field of artificial intelligence (AI) that started in the late 2010s before gaining international prominence in the early 2020s. Examples include protein folding prediction led by Google DeepMind as well as large language models and generative AI applications developed by OpenAI. This period is sometimes referred to as an AI spring, to contrast it with previous AI winters.

Graphika is an American social network analysis company known for tracking online disinformation. It was established in 2013.

Generative AI pornography, or simply AI pornography, refers to digitally created explicit content produced through generative artificial intelligence (AI) technologies. Unlike traditional pornography, which involves real actors and cameras, this content is synthesized entirely by AI algorithms. These algorithms, including generative adversarial networks (GANs) and text-to-image models, generate lifelike images, videos, or animations from textual descriptions or datasets.

References

  1. "Taylor Swift deepfakes spread online, sparking outrage". CBS News. January 26, 2024. Archived from the original on 6 February 2024. Retrieved February 7, 2024.
  2. Wilkes, Emma (February 5, 2024). "Taylor Swift deepfakes spark calls for new legislation". NME . Archived from the original on February 7, 2024. Retrieved February 7, 2024.
  3. Weatherbed, Jess (January 25, 2024). "Trolls have flooded X with graphic Taylor Swift AI fakes". The Verge. Archived from the original on January 25, 2024. Retrieved February 7, 2024.
  4. "Taylor Swift AI Deepfake Spurs Congressional Legislative Action". natlawreview.com. Retrieved 2024-11-25.
  5. Wahi, Sukriti (March 3, 2021). "Every Time Taylor Swift Perfectly Shut Down A Sexist Interview Question". Elle. Archived from the original on April 22, 2022. Retrieved November 4, 2021.
  6. Davis, Allison P. (June 28, 2018). "The Taylor Swift Slut-Shaming Continues". The Cut . Archived from the original on May 24, 2022. Retrieved November 4, 2021.
  7. Belanger, Ashley (January 26, 2024). "Toxic Telegram group produced X's X-rated fake AI Taylor Swift images, report says". Ars Technica . Archived from the original on January 25, 2024. Retrieved February 7, 2024.
  8. Hsu, Tiffany (February 5, 2024). "Fake and Explicit Images of Taylor Swift Started on 4chan, Study Says". The New York Times . Archived from the original on February 9, 2024. Retrieved February 10, 2024.
  9. Belanger, Ashley (February 5, 2024). "4chan daily challenge sparked deluge of explicit AI Taylor Swift images". Ars Technica. Archived from the original on February 9, 2024. Retrieved February 9, 2024.
  10. "SAG-AFTRA Slams Explicit Taylor Swift AI Images: 'Upsetting, Harmful' and 'Must Be Made Illegal'". Variety . January 27, 2024. Archived from the original on February 7, 2024. Retrieved February 7, 2024.
  11. Yang, Angela (January 30, 2024). "Microsoft CEO Satya Nadella calls for coordination to address AI risk". NBC News. Archived from the original on February 25, 2024. Retrieved February 26, 2024.
  12. Pilley, Max (January 27, 2024). "Microsoft CEO: Taylor Swift AI deepfakes are "alarming and terrible"". NME. Archived from the original on 7 February 2024. Retrieved February 7, 2024.
  13. Zhang, Cat (2024-01-26). "The Swiftie Fight to Protect Taylor Swift From AI". The Cut . Archived from the original on 2024-01-30. Retrieved 2024-03-06.
  14. Specter, Emma (2024-01-26). "If Anyone Can Stop the Coming AI Hellscape, It's Taylor Swift". Vogue . Archived from the original on 2024-02-06. Retrieved 2024-03-06.
  15. Kastrenakes, Jacob (January 26, 2024). "White House calls for legislation to stop Taylor Swift AI fakes". The Verge. Archived from the original on June 10, 2024. Retrieved June 10, 2024.
  16. Beaumont-Thomas, Ben (January 26, 2024). "Taylor Swift deepfake pornography sparks renewed calls for US legislation". The Guardian . Archived from the original on January 29, 2024. Retrieved February 7, 2024.
  17. Montgomery, Blake (January 31, 2024). "Taylor Swift AI images prompt US bill to tackle nonconsensual, sexual deepfakes". The Guardian . Archived from the original on January 31, 2024. Retrieved January 31, 2024.
  18. Goujard, Clothilde (February 6, 2024). "Taylor Swift deepfakes nudge EU to get real about AI". Politico. Archived from the original on February 7, 2024. Retrieved February 7, 2024.
  19. Stokel-Walker, Chris (January 25, 2024). "The explicit AI-created images of Taylor Swift flooding the internet highlight a major problem with generative AI". Fast Company . Archived from the original on January 26, 2024. Retrieved January 26, 2024.
  20. Hadavas, Chloe (March 11, 2020). "This Is What's Wrong With Twitter's New "Manipulated Media" Label". Slate . ISSN   1091-2339. Archived from the original on February 26, 2024. Retrieved February 26, 2024.
  21. Ghaffary, Shirin (February 4, 2020). "Twitter is finally fighting back against deepfakes and other deceptive media". Vox . Archived from the original on February 26, 2024. Retrieved February 26, 2024.
  22. Saner, Emine (January 31, 2024). "Inside the Taylor Swift deepfake scandal: 'It's men telling a powerful woman to get back in her box'". The Guardian . ISSN   0261-3077. Archived from the original on February 27, 2024. Retrieved February 26, 2024.
  23. "X pauses some Taylor Swift searches as deepfake explicit images spread". The Financial Post. 2024-01-29. Retrieved 2025-01-16.{{cite news}}: CS1 maint: url-status (link)
  24. Rosenzweig-Ziff, Dan. "AI deepfakes of Taylor Swift spread on X. Here's what to know". The Washington Post . Archived from the original on January 30, 2024. Retrieved February 7, 2024.
  25. "Found through Google, bought with Visa and Mastercard: Inside the deepfake porn economy". NBC News . NBC News. March 27, 2023. Archived from the original on November 29, 2023. Retrieved November 30, 2023.
  26. Volkering, Sam (6 February 2024). "The Taylor Swift Deepfake Scandal Will Change AI as We Know It". Brownstone Research. Archived from the original on February 7, 2024. Retrieved February 7, 2024.
  27. 1 2 Donegan, Moira (2023-03-13). "Demand for deepfake pornography is exploding. We aren't ready for this assault on consent". The Guardian. ISSN   0261-3077 . Retrieved 2024-11-18.
  28. "'Dehumanising': How have AI deepfakes been used to target women?". euronews. 2023-12-11. Retrieved 2024-11-17.
  29. 1 2 Laffier, Jennifer; Rehman, Aalyia (2023-06-24). "Deepfakes and Harm to Women". Journal of Digital Life and Learning. 3 (1): 1–21. doi:10.51357/jdll.v3i1.218. ISSN   2564-3185.
  30. Mahdawi, Arwa (2023-04-01). "Nonconsensual deepfake porn is an emergency that is ruining lives". The Guardian. ISSN   0261-3077 . Retrieved 2024-11-17.