Deepfake pornography

Deepfake pornography, or simply fake pornography, is a type of synthetic pornography created by altering existing pornographic material with deepfake technology, mapping another person's face onto the face of a performer. Deepfake pornography has sparked controversy because it involves the making and sharing of realistic videos featuring non-consenting individuals, typically female celebrities, and is sometimes used for revenge porn. Efforts to combat these ethical concerns include legislation and technology-based solutions.

History

The term "deepfake" was coined in 2017 on a Reddit forum where users shared altered pornographic videos created using machine learning algorithms. The word is a blend of "deep learning", referring to the technique used to create the videos, and "fake", indicating that the videos are not real. [1]

Deepfake porn was originally created on a small, individual scale using a combination of machine learning algorithms, computer vision techniques, and AI software. The process began with gathering a large amount of source material (both images and videos) of a person's face, then training a generative adversarial network (GAN), a type of deep learning model, to produce a fake video that convincingly swaps that face onto the body of a porn performer. The production process has evolved significantly since 2018, however, with the advent of several public apps that have largely automated it. [2]
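The pipeline described above can be sketched in a highly simplified form. The code below is illustrative only: the function names are hypothetical, and the "model" is a trivial stand-in for what would, in a real system, be a GAN or autoencoder trained on thousands of face crops.

```python
# Hypothetical sketch of the face-swap pipeline: gather source material,
# train a model on the target face, then swap that face onto new frames.
# Real systems replace each stand-in below with heavy deep-learning steps.
import numpy as np

def gather_source_material(n_frames=100, size=64):
    # Stand-in for collecting face crops from images/videos of the target.
    rng = np.random.default_rng(0)
    return rng.random((n_frames, size, size, 3))

def train_face_model(faces):
    # Stand-in for GAN training: a real pipeline alternates generator and
    # discriminator updates until swapped frames look convincing.
    return {"mean_face": faces.mean(axis=0)}

def swap_face(model, target_frame):
    # Stand-in for inference: blend the learned face onto the target frame.
    return 0.5 * model["mean_face"] + 0.5 * target_frame

faces = gather_source_material()
model = train_face_model(faces)
out = swap_face(model, np.zeros((64, 64, 3)))
print(out.shape)  # (64, 64, 3)
```

The point of the sketch is the three-stage structure (collect, train, swap), which is what the newer public apps have automated end to end.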

DeepNude

In June 2019, a downloadable Windows and Linux application called DeepNude was released, which used a GAN to remove clothing from images of women. The app had both a free and a paid version, the latter costing $50. [3] On June 27, the creators removed the application and refunded consumers, although various copies, both free and paid, continue to circulate. [4] An open-source version of the program called "open-deepnude" was deleted from GitHub. [5] The open-source version had the advantage that it could be trained on a larger dataset of nude images, increasing the accuracy of the resulting output. [6]

Deepfake Telegram Bot

In July 2019, a deepfake bot service was launched on the messaging app Telegram that uses AI technology to create nude images of women. The service is free and has a user-friendly interface: users submit photos and receive manipulated nude images within minutes. The service is connected to seven Telegram channels, including the main channel that hosts the bot, as well as technical support and image-sharing channels. While the total number of users is unknown, the main channel has over 45,000 members. As of July 2020, it is estimated that approximately 24,000 manipulated images had been shared across the image-sharing channels. [7]

Notable cases

Deepfake technology has been used to create non-consensual pornographic images and videos of famous women. One of the earliest examples occurred in 2017, when a deepfake pornographic video of Gal Gadot was created by a Reddit user and quickly spread online. Since then, there have been numerous instances of similar deepfake content targeting other female celebrities, such as Emma Watson, Natalie Portman, and Scarlett Johansson. [8] Johansson spoke publicly on the issue in December 2018, condemning the practice but declining to pursue legal action because she views the harassment as inevitable. [9]

Rana Ayyub

In 2018, Rana Ayyub, an Indian investigative journalist, was the target of an online hate campaign stemming from her condemnation of the Indian government, specifically her speaking out against the rape of an eight-year-old Kashmiri girl. Ayyub was bombarded with rape and death threats, and a doctored pornographic video of her was circulated online. [10] In a Huffington Post article, Ayyub discussed the long-lasting psychological and social effects this experience has had on her. She explained that she continued to struggle with her mental health, and that the images and videos resurfaced whenever she took on a high-profile case. [11]

Atrioc controversy

In 2023, Twitch streamer Atrioc stirred controversy when he accidentally revealed deepfake pornographic material featuring female Twitch streamers while live-streaming. He has since admitted to paying for AI-generated pornography, and apologized to the women and to his fans. [12] [13]

Taylor Swift

In January 2024, AI-generated sexually explicit images of American singer Taylor Swift were posted on X (formerly Twitter), and spread to other platforms such as Facebook, Reddit and Instagram. [14] [15] [16] One tweet with the images was viewed over 45 million times before being removed. [17] [15] A report from 404 Media found that the images appeared to have originated from a Telegram group, whose members used tools such as Microsoft Designer to generate the images, using misspellings and keyword hacks to work around Designer's content filters. [18] [19] After the material was posted, Swift's fans posted concert footage and images to bury the deepfake images, and reported the accounts posting the deepfakes. [20] Searches for Swift's name were temporarily disabled on X, returning an error message instead. [21] Graphika, a disinformation research firm, traced the creation of the images back to a 4chan community. [22] [23]

A source close to Swift told the Daily Mail that she would be considering legal action, saying, "Whether or not legal action will be taken is being decided, but there is one thing that is clear: These fake AI-generated images are abusive, offensive, exploitative, and done without Taylor's consent and/or knowledge." [20] [24]

The controversy drew condemnation from White House Press Secretary Karine Jean-Pierre, [25] Microsoft CEO Satya Nadella, [26] the Rape, Abuse & Incest National Network, [27] and SAG-AFTRA. [28] Several US politicians called for federal legislation against deepfake pornography. [29] Later in the month, US senators Dick Durbin, Lindsey Graham, Amy Klobuchar and Josh Hawley introduced a bipartisan bill that would allow victims to sue individuals who produced or possessed "digital forgeries" with intent to distribute, or those who received the material knowing it was made non-consensually. [30]

Ethical considerations

Deepfake CSAM

Deepfake technology has made the creation of child sexual abuse material (CSAM), often also referred to as child pornography, faster, safer for perpetrators, and easier than ever before. Deepfakes can be used to produce new CSAM from already existing material or to create CSAM from children who have not been subjected to sexual abuse. Even so, deepfake CSAM can have real and direct implications for children, including defamation, grooming, extortion, and bullying. [31]

Most deepfake pornography is made using the faces of people who did not consent to their image being used sexually. In 2023, Sensity, an identity-verification company, found that "96% of deepfakes are sexually explicit and feature women who didn't consent to the creation of the content." [32] Deepfake pornography is often used to humiliate and harass, primarily targeting women, in ways similar to revenge porn.

Combating deepfake pornography

Technical approach

Deepfake detection has become an increasingly important area of research in recent years as the spread of fake videos and images has become more prevalent. One promising approach to detecting deepfakes is the use of convolutional neural networks (CNNs), which have shown high accuracy in distinguishing between real and fake images. One CNN-based algorithm developed specifically for deepfake detection is DeepRhythm, which has demonstrated a reported accuracy score of 0.98 (i.e., it successfully detects deepfake images 98% of the time). The algorithm uses a pre-trained CNN to extract features from facial regions of interest, then applies a novel attention mechanism to identify discrepancies between the original and manipulated images. While increasingly sophisticated deepfake technology presents ongoing challenges to detection efforts, the high accuracy of algorithms like DeepRhythm offers a promising tool for identifying and mitigating the spread of harmful deepfakes. [33]
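As a toy illustration of the idea behind such detectors (not the DeepRhythm model itself), the sketch below exploits a property some published detectors use: GAN upsampling can leave periodic artifacts visible in an image's frequency spectrum. The function names and the fixed threshold are hypothetical; a real system learns the decision boundary with a trained CNN.

```python
# Illustrative frequency-domain check for synthetic-image artifacts.
# Not a production detector: real systems learn features with a CNN.
import numpy as np

def high_freq_energy(gray_image):
    # Fraction of spectral energy outside a central low-frequency band.
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(gray_image)))
    h, w = spectrum.shape
    cy, cx = h // 2, w // 2
    r = min(h, w) // 8
    low = spectrum[cy - r:cy + r, cx - r:cx + r].sum()
    return 1.0 - low / spectrum.sum()

def looks_synthetic(gray_image, threshold=0.5):
    # Hypothetical decision rule; the threshold is arbitrary here.
    return high_freq_energy(gray_image) > threshold

rng = np.random.default_rng(1)
img = rng.random((64, 64))
score = high_freq_energy(img)
```

A learned detector replaces the hand-picked statistic and threshold with features and a decision boundary fitted to labeled real/fake data, which is what gives CNN approaches their reported accuracy.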

Aside from detection models, there are also video-authenticating tools available to the public. In 2019, Deepware launched the first publicly available detection tool, which allowed users to easily scan and detect deepfake videos. Similarly, in 2020 Microsoft released a free, user-friendly video authenticator. Users upload a suspected video or paste a link and receive a confidence score assessing the likelihood that the video has been manipulated.

Legal approaches

As of 2023, there is a lack of legislation that specifically addresses deepfake pornography. Instead, the harm caused by its creation and distribution is being addressed by the courts through existing criminal and civil laws.

Victims of deepfake pornography often have claims for revenge porn, tort claims, and harassment. [34] The legal consequences for revenge porn vary from state to state and country to country. [35] [36] For instance, in Canada, the penalty for publishing non-consensual intimate images is up to 5 years in prison, [37] whereas in Malta it is a fine of up to €5,000. [38]

The "Deepfake Accountability Act" was introduced to the United States Congress in 2019 but died in 2020. [39] It aimed to make the production and distribution of digitally altered visual media that was not disclosed as such a criminal offense. The bill specified that anyone producing sexual, non-consensual altered media with the intent of humiliating or otherwise harming the participants could be fined, imprisoned for up to 5 years, or both. [36] A newer version of the bill, introduced in 2021, would have required any "advanced technological false personation records" to contain a watermark and an audiovisual disclosure identifying and explaining any altered audio and visual elements. It likewise provided that anyone who fails to disclose this information with intent to harass or humiliate a person with an "advanced technological false personation record" containing sexual content "shall be fined under this title, imprisoned for not more than 5 years, or both." That bill also died, in 2023. [40]

Controlling the distribution

While the legal landscape remains undeveloped, victims of deepfake pornography have several tools available to contain and remove content: securing removal through a court order, intellectual property tools such as the DMCA takedown, reporting terms-and-conditions violations to the hosting platform, and asking search engines to remove the content from results. [41]

Several major online platforms have taken steps to ban deepfake pornography. As of 2018, Gfycat, Reddit, Twitter, Discord, and Pornhub have all prohibited the uploading and sharing of deepfake pornographic content on their platforms. [42] [43] In September of that same year, Google also added "involuntary synthetic pornographic imagery" to its ban list, allowing individuals to request the removal of such content from search results. [44] However, while Pornhub has taken a stance against non-consensual content, searching for "deepfake" on its website still yields results, and it continues to run ads for deepfake websites and content. [45]


References

  1. Gaur, Loveleen; Arora, Gursimar Kaur (2022-07-27), DeepFakes, New York: CRC Press, pp. 91–98, doi:10.1201/9781003231493-7, ISBN   978-1-003-23149-3, archived from the original on 2024-03-06, retrieved 2023-04-20
  2. Azmoodeh, Amin, and Ali Dehghantanha. "Deep Fake Detection, Deterrence and Response: Challenges and Opportunities." arXiv.org, 2022.
  3. Cole, Samantha; Maiberg, Emanuel; Koebler, Jason (26 June 2019). "This Horrifying App Undresses a Photo of Any Woman with a Single Click". Vice. Archived from the original on 2 July 2019. Retrieved 2 July 2019.
  4. Vincent, James (3 July 2019). "DeepNude AI copies easily accessible online". The Verge. Archived from the original on 8 February 2021. Retrieved 11 August 2023.
  5. Cox, Joseph (July 9, 2019). "GitHub Removed Open Source Versions of DeepNude". Vice Media. Archived from the original on September 24, 2020. Retrieved December 15, 2019.
  6. Redmon, Jennifer (July 7, 2019). "DeepNude- the AI that 'Undresses' Women- is Back. What Now?". Cisco. Archived from the original on March 1, 2023. Retrieved March 11, 2023.
  7. Hao, Karen (2020-10-20). "A deepfake bot is being used to "undress" underage girls". MIT Technology Review. Archived from the original on 2023-04-20. Retrieved 2023-04-20.
  8. Roettgers, Janko (2018-02-21). "Porn Producers Offer to Help Hollywood Take Down Deepfake Videos". Variety. Archived from the original on 2019-06-10. Retrieved 2023-04-20.
  9. Harwell, Drew (2018-12-31). "Scarlett Johansson on fake AI-generated sex videos: 'Nothing can stop someone from cutting and pasting my image'". The Washington Post. ISSN   0190-8286. Archived from the original on 2019-06-13. Retrieved 2023-04-20.
  10. Maddocks, Sophie (2020-06-04). "'A Deepfake Porn Plot Intended to Silence Me': exploring continuities between pornographic and 'political' deep fakes". Porn Studies. 7 (4): 415–423. doi:10.1080/23268743.2020.1757499. ISSN   2326-8743. S2CID   219910130. Archived from the original on 2024-03-06. Retrieved 2023-04-20.
  11. Ayyub, Rana (2018-11-21). "I Was The Victim Of A Deepfake Porn Plot Intended To Silence Me". HuffPost UK. Archived from the original on 2023-04-20. Retrieved 2023-04-20.
  12. Middleton, Amber (2023-02-10). "A Twitch streamer was caught watching deepfake porn of women gamers. Sexual images made without consent can be traumatic and abusive, experts say — and women are the biggest victims". Insider. Archived from the original on 2024-03-06. Retrieved 2023-04-20.
  13. Patterson, Calum (2023-01-30). "Twitch streamer Atrioc gives tearful apology after paying for deepfakes of female streamers". Dexerto. Archived from the original on 2023-05-09. Retrieved 2023-06-14.
  14. Stokel-Walker, Chris (January 25, 2024). "The explicit AI-created images of Taylor Swift flooding the internet highlight a major problem with generative AI". Fast Company . Archived from the original on January 26, 2024. Retrieved January 26, 2024.
  15. Belanger, Ashley (2024-01-25). "X can't stop spread of explicit, fake AI Taylor Swift images". Ars Technica. Archived from the original on 2024-01-25. Retrieved 2024-01-25.
  16. Kelly, Samantha Murphy (2024-01-25). "Explicit, AI-generated Taylor Swift images spread quickly on social media | CNN Business". CNN . Archived from the original on 2024-01-25. Retrieved 2024-01-25.
  17. Weatherbed, Jess (2024-01-25). "Trolls have flooded X with graphic Taylor Swift AI fakes". The Verge . Archived from the original on 2024-01-25. Retrieved 2024-01-25.
  18. Maiberg, Emanuel; Cole, Samantha (2024-01-25). "AI-Generated Taylor Swift Porn Went Viral on Twitter. Here's How It Got There". 404 Media. Archived from the original on 2024-01-25. Retrieved 2024-01-25.
  19. Belanger, Ashley (2024-01-29). "Drastic moves by X, Microsoft may not stop spread of fake Taylor Swift porn". Ars Technica. Archived from the original on 2024-01-29. Retrieved 2024-01-30.
  20. Zhang, Cat (2024-01-26). "The Swiftie Fight to Protect Taylor Swift From AI". The Cut. Archived from the original on 2024-01-30. Retrieved 2024-03-06.
  21. Spangler, Todd (2024-01-27). "X/Twitter Blocks Searches for 'Taylor Swift' as a 'Temporary Action to Prioritize Safety' After Deluge of Explicit AI Fakes". Variety . Archived from the original on 2024-01-28. Retrieved 2024-01-29.
  22. Hsu, Tiffany (February 5, 2024). "Fake and Explicit Images of Taylor Swift Started on 4chan, Study Says". The New York Times . Archived from the original on February 9, 2024. Retrieved February 10, 2024.
  23. Belanger, Ashley (2024-02-05). "4chan daily challenge sparked deluge of explicit AI Taylor Swift images". Ars Technica. Archived from the original on 2024-02-09. Retrieved 2024-02-09.
  24. Specter, Emma (2024-01-26). "If Anyone Can Stop the Coming AI Hellscape, It's Taylor Swift". Vogue . Archived from the original on 2024-02-06. Retrieved 2024-03-06.
  25. "Taylor Swift searches blocked on X after fake explicit images of pop singer spread". The Guardian . Reuters. 2024-01-29. Archived from the original on 2024-01-29. Retrieved 2024-01-29.
  26. Spangler, Todd (2024-01-26). "Taylor Swift Explicit AI-Generated Deepfakes Are 'Alarming and Terrible,' Microsoft CEO Says: 'We Have to Act'". Variety . Archived from the original on 2024-01-28. Retrieved 2024-01-29.
  27. Travers, Karen; Saliba, Emmanuelle (2024-01-27). "Fake explicit Taylor Swift images: White House is 'alarmed'". ABC News . Archived from the original on 2024-01-28. Retrieved 2024-01-29.
  28. Millman, Ethan (2024-01-26). "AI-Generated Explicit Taylor Swift Images 'Must Be Made Illegal,' Says SAG-AFTRA". Rolling Stone . Archived from the original on 2024-01-29. Retrieved 2024-01-29.
  29. Beaumont-Thomas, Ben (2024-01-27). "Taylor Swift deepfake pornography sparks renewed calls for US legislation". The Guardian . Archived from the original on 2024-01-29. Retrieved 2024-01-29.
  30. Montgomery, Blake (January 31, 2024). "Taylor Swift AI images prompt US bill to tackle nonconsensual, sexual deepfakes". The Guardian . Archived from the original on January 31, 2024. Retrieved January 31, 2024.
  31. Kirchengast, T (2020). "Deepfakes and image manipulation: criminalisation and control". Information & Communications Technology Law. 29 (3): 308–323. doi:10.1080/13600834.2020.1794615. S2CID   221058610.
  32. "Found through Google, bought with Visa and Mastercard: Inside the deepfake porn economy". NBC News. 2023-03-27. Archived from the original on 2023-11-29. Retrieved 2023-11-30.
  33. Gaur, Loveleen; Arora, Gursimar Kaur (2022-07-27), DeepFakes, New York: CRC Press, pp. 91–98, doi:10.1201/9781003231493-7, ISBN   978-1-003-23149-3, archived from the original on 2024-01-26, retrieved 2023-04-20
  34. "Nudify Me: The Legal Implications of AI-Generated Revenge Porn". JD Supra. Retrieved 2024-03-14.
  35. "Nudify Me: The Legal Implications of AI-Generated Revenge Porn". JD Supra. Retrieved 2024-03-14.
  36. Kirchengast, Tyrone (2020-07-16). "Deepfakes and image manipulation: criminalisation and control". Information & Communications Technology Law. 29 (3): 308–323. doi:10.1080/13600834.2020.1794615. ISSN 1360-0834. S2CID 221058610. Archived from the original on 2024-01-26. Retrieved 2023-04-20.
  37. Branch, Legislative Services (2023-01-16). "Consolidated federal laws of Canada, Criminal Code". laws-lois.justice.gc.ca. Archived from the original on 2023-06-03. Retrieved 2023-04-20.
  38. Mania, Karolina (2022). "Legal Protection of Revenge and Deepfake Porn Victims in the European Union: Findings From a Comparative Legal Study". Trauma, Violence, & Abuse. doi:10.1177/15248380221143772. PMID   36565267. S2CID   255117036. Archived from the original on 2024-01-26. Retrieved 2023-04-20.
  39. "Defending Each and Every Person from False Appearances by Keeping Exploitation Subject to Accountability Act of 2019 (2019 - H.R. 3230)". GovTrack.us. Archived from the original on 2023-12-03. Retrieved 2023-11-27.
  40. "DEEP FAKES Accountability Act (2021 - H.R. 2395)". GovTrack.us. Archived from the original on 2023-12-03. Retrieved 2023-11-27.
  41. "Un-Nudify Me: Removal Options for Deepfake Pornography Victims". JD Supra. Retrieved 2024-03-14.
  42. Kharpal, Arjun. "Reddit, Pornhub ban videos that use A.I. to superimpose a person's face over an X-rated actor". CNBC. Archived from the original on 2019-04-10. Retrieved 2023-04-20.
  43. Cole, Samantha (2018-01-31). "AI-Generated Fake Porn Makers Have Been Kicked Off Their Favorite Host". Vice. Archived from the original on 2023-04-20. Retrieved 2023-04-20.
  44. Harwell, Drew (2018-12-30). "Fake-porn videos are being weaponized to harass and humiliate women: 'Everybody is a potential target'". The Washington Post. ISSN   0190-8286. Archived from the original on 2019-06-14. Retrieved 2023-04-20.
  45. Cole, Samantha (2018-02-06). "Pornhub Is Banning AI-Generated Fake Porn Videos, Says They're Nonconsensual". Vice. Archived from the original on 2019-11-01. Retrieved 2019-11-09.