In late January 2024, sexually explicit AI-generated deepfake images of American musician Taylor Swift proliferated on the social media platforms 4chan and X (formerly Twitter). Several artificial images of Swift of a sexual or violent nature spread quickly, [1] with one post reported to have been viewed over 47 million times before its eventual removal. [2] The images led Microsoft to strengthen the safeguards of Microsoft Designer's text-to-image model to prevent future abuse. [3] They also prompted responses from anti-sexual assault advocacy groups, US politicians, Swifties, and Microsoft CEO Satya Nadella, among others, and it has been suggested that Swift's influence could result in new legislation regarding the creation of deepfake pornography. [4]
Swift has reportedly been the target of misogyny and slut-shaming throughout her career. [5] [6] The American technology corporation Microsoft offers AI image generators called Microsoft Designer and Bing Image Creator, which employ censorship safeguards to prevent users from generating unsafe or objectionable content. Members of a Telegram group discussed ways to circumvent these safeguards in order to create pornographic images of celebrities. [7] Graphika, a disinformation research firm, traced the creation of the images back to a 4chan community. [8] [9]
For many, the deepfake images of Swift immediately became a source of controversy and outrage, while other internet users found them humorous and absurd, such as an image that appeared to show Swift about to engage in sexual intercourse with Oscar the Grouch. The images drew condemnation from the Rape, Abuse & Incest National Network and SAG-AFTRA. The latter group, which had been following issues surrounding AI-generated media before Swift's involvement, considered the images "upsetting, harmful and deeply concerning." [10] Microsoft CEO Satya Nadella, whose company's products were believed to have been used to make the images, described the situation as "alarming and terrible", adding his belief that "we all benefit when the online world is a safe world." [11] [12] The content also sparked debates about race relations, with some questioning whether it was racist to take offense at deepfaked images depicting Swift as ready to engage in sexual acts with the entire Kansas City Chiefs roster, most of whom are African American.
A source close to Swift told the Daily Mail that she was considering legal action, saying, "Whether or not legal action will be taken is being decided, but there is one thing that is clear: These fake AI-generated images are abusive, offensive, exploitative, and done without Taylor's consent and/or knowledge." [13] [14]
White House press secretary Karine Jean-Pierre expressed concern over the fake images, calling them "alarming", and emphasized the obligation of social media platforms to curb the dissemination of misinformation. [15] Several US politicians called for legislation against AI-generated pornography. [16] Later in the month, a bipartisan bill was introduced by US senators Dick Durbin, Lindsey Graham, Amy Klobuchar and Josh Hawley. The bill would allow victims to sue individuals who produced or possessed "digital forgeries" with intent to distribute, or who received the material knowing it was made without consent. [17] In February 2024, the European Union reached an agreement on a similar measure that would criminalize deepfake pornography, as well as online harassment and revenge porn, by mid-2027. [18]
X responded to the sharing of the images on its platform by stating that it would suspend accounts that participated in their spread. Despite this, the photos continued to be reshared among X accounts and spread to other platforms, including Instagram and Reddit. [19] X enforces a "synthetic and manipulated media policy", whose effectiveness has been criticized. [20] [21] The platform briefly blocked searches of Swift's name on January 27, 2024, [22] reinstating them two days later. [23]
Fans of Taylor Swift, known as Swifties, responded to the circulation of the images by promoting the hashtag #ProtectTaylorSwift until it trended on X, and by flooding other hashtags associated with the images with positive images and videos of her live performances. [24]
Deepfake pornography has remained highly controversial and has affected people ranging from other celebrities to ordinary individuals, most of whom are women. [25] Journalists have opined that the involvement of a public figure as prominent as Swift in the spread of AI-generated pornography could bring public awareness and political reform to the issue. [26]
The rapid rise of deepfake pornography has raised questions about consent and privacy. Countless individuals, both women and men, who have been directly affected by nonconsensual deepfake pornography are left questioning what actions, if any, can be taken to prevent this exploitation or at least to have the nonconsensual content removed from public platforms. [27] While many US states have laws and penalties addressing the creation or solicitation of revenge porn, only four (California, New York, Georgia, and Virginia) have laws concerning nonconsensual deepfakes. [27] Deepfakes therefore currently occupy both a legal and an ethical gray area with respect to consent.
Women are disproportionately targeted as victims of the creation and public distribution of deepfake pornography. [28] This phenomenon exposes women, including female artists, to online violence at much higher rates than before the advent of generative AI. [29] Women are three times more likely than men to be victims of cyber violence and twice as likely to be victims of severe cyber abuse, which includes AI-generated revenge porn. [29] Overall, it has been reported that 96% of deepfakes created are nonconsensual sexual deepfakes, and that 99% of those feature women. [30]