Deepfake pornography, or simply fake pornography, is a type of synthetic pornography created by altering existing photographs or videos with deepfake technology applied to the images of the participants. The use of deepfake pornography has sparked controversy because it involves the making and sharing of realistic videos featuring non-consenting individuals, typically female celebrities, and is sometimes used for revenge porn. Efforts are being made to combat these ethical concerns through legislation and technology-based solutions.
The term "deepfake" was coined in 2017 on a Reddit forum where users shared altered pornographic videos created using machine learning algorithms. It is a combination of the word "deep learning", which refers to the program used to create the videos, and "fake" meaning the videos are not real. [1]
Deepfake pornography was originally created on a small, individual scale using a combination of machine learning algorithms, computer vision techniques, and AI software. The process began by gathering a large amount of source material (including both images and videos) of a person's face, and then training a deep learning model, such as a generative adversarial network (GAN), to produce a fake video that convincingly swaps the face from the source material onto the body of a pornographic performer. However, the production process has evolved significantly since 2018, with the advent of several public apps that have largely automated it. [2]
Deepfake pornography is sometimes confused with fake nude photography, but the two are largely distinct. Fake nude photography typically starts from non-sexual images and merely makes it appear that the people in them are nude.
Deepfake technology has been used to create non-consensual and pornographic images and videos of famous women. One of the earliest examples occurred in 2017 when a deepfake pornographic video of Gal Gadot was created by a Reddit user and quickly spread online. Since then, there have been numerous instances of similar deepfake content targeting other female celebrities, such as Emma Watson, Natalie Portman, and Scarlett Johansson. [3] Johansson spoke publicly on the issue in December 2018, condemning the practice but declining to pursue legal action because she viewed the harassment as inevitable. [4]
In 2018, Rana Ayyub, an Indian investigative journalist, was the target of an online hate campaign stemming from her condemnation of the Indian government, specifically her speaking out against the rape of an eight-year-old Kashmiri girl. Ayyub was bombarded with rape and death threats, and had a doctored pornographic video of her circulated online. [5] In a Huffington Post article, Ayyub discussed the long-lasting psychological and social effects this experience has had on her. She explained that she continued to struggle with her mental health and that the images and videos continued to resurface whenever she took on a high-profile case. [6]
In 2023, Twitch streamer Atrioc stirred controversy when he accidentally revealed deepfake pornographic material featuring female Twitch streamers during a live stream. He has since admitted to paying for AI-generated pornography, and apologized to the women and to his fans. [7] [8]
In January 2024, AI-generated sexually explicit images of American singer Taylor Swift were posted on X (formerly Twitter), and spread to other platforms such as Facebook, Reddit and Instagram. [9] [10] [11] One tweet with the images was viewed over 45 million times before being removed. [12] [10] A report from 404 Media found that the images appeared to have originated from a Telegram group, whose members used tools such as Microsoft Designer to generate the images, using misspellings and keyword hacks to work around Designer's content filters. [13] [14] After the material was posted, Swift's fans posted concert footage and images to bury the deepfake images, and reported the accounts posting the deepfakes. [15] Searches for Swift's name were temporarily disabled on X, returning an error message instead. [16] Graphika, a disinformation research firm, traced the creation of the images back to a 4chan community. [17] [18]
A source close to Swift told the Daily Mail that she would be considering legal action, saying, "Whether or not legal action will be taken is being decided, but there is one thing that is clear: These fake AI-generated images are abusive, offensive, exploitative, and done without Taylor's consent and/or knowledge." [15] [19]
The controversy drew condemnation from White House Press Secretary Karine Jean-Pierre, [20] Microsoft CEO Satya Nadella, [21] the Rape, Abuse & Incest National Network, [22] and SAG-AFTRA. [23] Several US politicians called for federal legislation against deepfake pornography. [24] Later in the month, US senators Dick Durbin, Lindsey Graham, Amy Klobuchar and Josh Hawley introduced a bipartisan bill that would allow victims to sue individuals who produced or possessed "digital forgeries" with intent to distribute, or those who received the material knowing it was made non-consensually. [25]
In August 2024, it emerged in South Korea that many teachers and female students had been the victims of deepfake images created by users who utilized AI technology. Journalist Ko Narin of Hankyoreh uncovered the deepfake images through Telegram chats. [26] [27] [28] On Telegram, group chats were created specifically for image-based sexual abuse of women, including middle and high school students, teachers, and even family members. Women who posted photos on social media platforms such as KakaoTalk, Instagram, and Facebook were also frequently targeted. Perpetrators used AI bots to generate fake images, which were then sold or widely shared, along with the victims' social media accounts, phone numbers, and KakaoTalk usernames. One Telegram group reportedly drew around 220,000 members, according to a Guardian report.
Investigations revealed numerous chat groups on Telegram where users, mainly teenagers, created and shared explicit deepfake images of classmates and teachers. The issue came in the wake of a troubling history of digital sex crimes, notably the Nth Room case of 2019. The Korean Teachers Union estimated that more than 200 schools had been affected by these incidents. Activists called for a "national emergency" declaration to address the problem. [29] South Korean police reported over 800 deepfake sex crime cases by the end of September 2024, a stark rise from just 156 cases in 2021, with most victims and offenders being teenagers. [30]
On September 21, 6,000 people gathered at Marronnier Park in northeastern Seoul to demand stronger legal action against deepfake crimes targeting women. [31] On September 26, following widespread outrage over the Telegram scandal, South Korean lawmakers passed a bill criminalizing the possession or viewing of sexually explicit deepfake images and videos, imposing penalties that include prison terms and fines. Under the new law, those caught buying, saving, or watching such material could face up to three years in prison or fines up to 30 million won ($22,600). At the time the bill was proposed, creating sexually explicit deepfakes for distribution carried a maximum penalty of five years, but the new legislation would increase this to seven years, regardless of intent. [30]
By October 2024, it was estimated that "nudify" deepfake bots on Telegram had up to four million monthly users. [32] [33]
Deepfake technology has made the creation of child sexual abuse material (CSAM), also often referred to as child pornography, faster, safer and easier than it has ever been. Deepfakes can be used to produce new CSAM from already existing material or to create CSAM of children who have not been subjected to sexual abuse. Deepfake CSAM can, however, have real and direct implications for children, including defamation, grooming, extortion, and bullying. [34]
While both deepfake pornography and generative AI pornography utilize synthetic media, they differ in approach and ethical implications. [35] Generative AI pornography is created entirely through algorithms, producing hyper-realistic content unlinked to real individuals. [36] [37] In contrast, deepfake pornography alters existing footage of real individuals, often without consent, by superimposing faces or modifying scenes. [38] [39] Hany Farid, a digital image analysis expert, has emphasized these distinctions. [40]
Most deepfake pornography is made using the faces of people who did not consent to their image being used in such a sexual way. In 2023, Sensity, an identity verification company, found that "96% of deepfakes are sexually explicit and feature women who didn't consent to the creation of the content." [41] Oftentimes, deepfake pornography is used to humiliate and harass primarily women, in ways similar to revenge porn.
Deepfake detection has become an increasingly important area of research in recent years as the spread of fake videos and images has become more prevalent. One promising approach to detecting deepfakes is through the use of Convolutional Neural Networks (CNNs), which have shown high accuracy in distinguishing between real and fake images. One CNN-based algorithm that has been developed specifically for deepfake detection is DeepRhythm, which has demonstrated an impressive accuracy score of 0.98 (i.e. successful at detecting deepfake images 98% of the time). This algorithm utilizes a pre-trained CNN to extract features from facial regions of interest and then applies a novel attention mechanism to identify discrepancies between the original and manipulated images. While the development of more sophisticated deepfake technology presents ongoing challenges to detection efforts, the high accuracy of algorithms like DeepRhythm offers a promising tool for identifying and mitigating the spread of harmful deepfakes. [1]
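The general shape of such a CNN-based detector can be illustrated with a brief sketch. The following Python/PyTorch example is purely illustrative and is not DeepRhythm itself (whose specific architecture and attention mechanism are not reproduced here): it assumes a generic pre-trained convolutional backbone, here a ResNet-18, fine-tuned as a binary real-versus-fake classifier on face crops, with hypothetical hyperparameters and a stand-in batch of random tensors in place of a labelled dataset.

```python
import torch
import torch.nn as nn
from torchvision import models

# Backbone: a CNN used as a feature extractor for face crops.
# Illustrative only; a real detector would load ImageNet-pretrained weights
# and fine-tune on a labelled corpus of real and deepfake face crops.
backbone = models.resnet18(weights=None)
backbone.fc = nn.Linear(backbone.fc.in_features, 1)  # single logit: fake vs. real

criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(backbone.parameters(), lr=1e-4)

# Stand-in batch: 8 random 224x224 RGB "face crops" with fake/real labels.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8, 1)).float()

# One training step: minimise binary cross-entropy between logits and labels.
backbone.train()
optimizer.zero_grad()
loss = criterion(backbone(images), labels)
loss.backward()
optimizer.step()

# Inference: the sigmoid of the logit gives a probability that a crop is fake.
backbone.eval()
with torch.no_grad():
    score = torch.sigmoid(backbone(images[:1]))
print(f"predicted fake-probability: {score.item():.3f}")
```

In practice, a detector of this kind would operate on aligned face crops extracted from individual video frames, and the per-frame scores would then be aggregated (for example, averaged) into a single per-video confidence score, comparable to the confidence scores reported by the public tools described below.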
Aside from detection models, there are also video-authenticating tools available to the public. In 2019, Deepware launched the first publicly available detection tool, which allowed users to easily scan and detect deepfake videos. Similarly, in 2020 Microsoft released a free and user-friendly video authenticator. Users upload a suspected video or input a link, and receive a confidence score to assess the level of manipulation in a deepfake.
As of 2023, there is a lack of legislation that specifically addresses deepfake pornography. Instead, the harm caused by its creation and distribution is being addressed by the courts through existing criminal and civil laws. [42]
Victims of deepfake pornography often have claims for revenge porn, tort claims, and harassment. [43] The legal consequences for revenge porn vary from state to state and country to country. [43] [44] For instance, in Canada, the penalty for publishing non-consensual intimate images is up to 5 years in prison, [45] whereas in Malta it is a fine of up to €5,000. [46]
The "Deepfake Accountability Act" was introduced to the United States Congress in 2019 but died in 2020. [47] It aimed to make the production and distribution of digitally altered visual media that was not disclosed to be such, a criminal offense. The title specifies that making any sexual, non-consensual altered media with the intent of humiliating or otherwise harming the participants, may be fined, imprisoned for up to 5 years or both. [44] A newer version of bill was introduced in 2021 which would have required any "advanced technological false personation records" to contain a watermark and an audiovisual disclosure to identify and explain any altered audio and visual elements. The bill also includes that failure to disclose this information with intent to harass or humiliate a person with an "advanced technological false personation record" containing sexual content "shall be fined under this title, imprisoned for not more than 5 years, or both." However this bill has since died in 2023. [48]
In the United Kingdom, the Law Commission for England and Wales recommended reform to criminalise sharing of deepfake pornography in 2022. [49] In 2023, the government announced amendments to the Online Safety Bill to that end. The Online Safety Act 2023 amends the Sexual Offences Act 2003 to criminalise sharing an intimate image that shows, or "appears to show", another person (thus including deepfake images) without consent. [50] In 2024, the government announced that an offence criminalising the production of deepfake pornographic images would be included in the Criminal Justice Bill of 2024. [51] [52] The Bill did not pass before Parliament was dissolved ahead of the general election.
While the legal landscape remains undeveloped, victims of deepfake pornography have several tools available to contain and remove content, including securing removal through a court order, intellectual property tools like the DMCA takedown, reporting for terms and conditions violations of the hosting platform, and removal by reporting the content to search engines. [53]
Several major online platforms have taken steps to ban deepfake pornography. As of 2018, Gfycat, Reddit, Twitter, Discord, and Pornhub have all prohibited the uploading and sharing of deepfake pornographic content on their platforms. [54] [55] In September of that same year, Google also added "involuntary synthetic pornographic imagery" to its ban list, allowing individuals to request the removal of such content from search results. [56]
Rape pornography is a subgenre of pornography involving the description or depiction of rape. Such pornography either involves simulated rape, wherein sexually consenting adults feign rape, or it involves actual rape. Victims of actual rape may be coerced to feign consent such that the pornography produced deceptively appears as simulated rape or non-rape pornography. The depiction of rape in non-pornographic media is not considered rape pornography. Simulated scenes of rape and other forms of sexual violence have appeared in mainstream cinema, including rape and revenge films, almost since its advent.
Human image synthesis is technology that can be applied to make believable and even photorealistic renditions of human-likenesses, moving or still. It has effectively existed since the early 2000s. Many films using computer-generated imagery have featured synthetic images of human-like characters digitally composited onto the real or other simulated film material. Towards the end of the 2010s, deep learning artificial intelligence has been applied to synthesize images and video that look like humans, without need for human assistance once the training phase has been completed, whereas the old school 7D-route required massive amounts of human work.
A media prank is a type of media event, perpetrated by staged speeches, activities, or press releases, designed to trick legitimate journalists into publishing erroneous or misleading articles. The term may also refer to such stories if planted by fake journalists, as well as the false story thereby published. A media prank is a form of culture jamming generally done as performance art or a practical joke for purposes of a humorous critique of mass media.
The Internet Watch Foundation (IWF) is a global registered charity based in Cambridge, England. It states that its remit is "to minimise the availability of online sexual abuse content, specifically child sexual abuse images and videos hosted anywhere in the world and non-photographic child sexual abuse images hosted in the UK." Content inciting racial hatred was removed from the IWF's remit after a police website was set up for the purpose in April 2011. The IWF used to also take reports of criminally obscene adult content hosted in the UK. This was removed from the IWF's remit in 2017. As part of its function, the IWF says that it will "supply partners with an accurate and current URL list to enable blocking of child sexual abuse content". It has "an excellent and responsive national Hotline reporting service" for receiving reports from the public. In addition to receiving referrals from the public, its agents also proactively search the open web and deep web to identify child sexual abuse images and videos. It can then ask service providers to take down the websites containing the images or to block them if they fall outside UK jurisdiction.
Amateur pornography is a category of pornography that features models, actors or non-professionals performing without pay, or actors for whom this material is not their only paid modeling work. Reality pornography is professionally made pornography that seeks to emulate the style of amateur pornography. Amateur pornography has been called one of the most profitable and long-lasting genres of pornography.
Legal frameworks around fictional pornography depicting minors vary depending on country and nature of the material involved. Laws against production, distribution, and consumption of child pornography generally separate images into three categories: real, pseudo, and virtual. Pseudo-photographic child pornography is produced by digitally manipulating non-sexual images of real minors to make pornographic material. Virtual child pornography depicts purely fictional characters. "Fictional pornography depicting minors," as covered in this article, includes these latter two categories, whose legalities vary by jurisdiction, and often differ with each other and with the legality of real child pornography.
In the United States, child pornography is illegal under federal law and in all states and is punishable by up to life imprisonment and fines of up to $250,000. U.S. laws regarding child pornography are virtually always enforced and amongst the sternest in the world. The Supreme Court of the United States has found child pornography to be outside the protections of the First Amendment to the United States Constitution. Federal sentencing guidelines on child pornography differentiate between production, distribution, and purchasing/receiving, and also include variations in severity based on the age of the child involved in the materials, with significant increases in penalties when the offense involves a prepubescent child or a child under the age of 18. U.S. law distinguishes between pornographic images of an actual minor, realistic images that are not of an actual minor, and non-realistic images such as drawings. The latter two categories are legally protected unless found to be obscene, whereas the first does not require a finding of obscenity.
Imgur is an American online image sharing and image hosting service with a focus on social gossip that was founded by Alan Schaaf in 2009. The service has hosted viral images and memes, particularly those posted on Reddit.
PhotoDNA is a proprietary image-identification and content filtering technology widely used by online service providers.
Revenge porn is the distribution of sexually explicit images or videos of individuals without their consent, with the punitive intention to create public humiliation or character assassination out of revenge against the victim. The material may have been made by an ex-partner from an intimate relationship with the knowledge and consent of the subject at the time, or it may have been made without their knowledge. The subject may have experienced sexual violence during the recording of the material, in some cases facilitated by psychoactive chemicals such as date rape drugs which also cause a reduced sense of pain and involvement in the sexual act, dissociative effects and amnesia.
Celeb Jihad is a website known for sharing leaked private videos and photos as well as faked ones of celebrities as a form of jihad satire. The Daily Beast describes it as a "satirical celebrity gossip website."
Deepfakes are images, videos, or audio which are edited or generated using artificial intelligence tools, and which may depict real or non-existent people. They are a type of synthetic media and a modern form of media prank.
OnlyFans is an internet content subscription service based in London, England. The service is popular with sex workers who produce pornography, but it also hosts the work of other content creators, such as physical fitness experts and musicians.
Artificial intelligence art is visual artwork created or enhanced through the use of artificial intelligence (AI) programs.
Fake nude photography is the creation of nude photographs designed to appear as genuine nudes of an individual. The motivations for the creation of these modified photographs include curiosity, sexual gratification, the stigmatization or embarrassment of the subject, and commercial gain, such as through the sale of the photographs via pornographic websites. Fakes can be created using image editing software or through machine learning. Fake images created using the latter method are called deepfakes.
Synthetic media is a catch-all term for the artificial production, manipulation, and modification of data and media by automated means, especially through the use of artificial intelligence algorithms, such as for the purpose of misleading people or changing an original meaning. Synthetic media as a field has grown rapidly since the creation of generative adversarial networks, primarily through the rise of deepfakes as well as music synthesis, text generation, human image synthesis, speech synthesis, and more. Though experts use the term "synthetic media," individual methods such as deepfakes and text synthesis are sometimes not referred to as such by the media but instead by their respective terminology. Significant attention arose towards the field of synthetic media starting in 2017 when Motherboard reported on the emergence of AI-altered pornographic videos to insert the faces of famous actresses. Potential hazards of synthetic media include the spread of misinformation, further loss of trust in institutions such as media and government, the mass automation of creative and journalistic jobs and a retreat into AI-generated fantasy worlds. Synthetic media is an applied form of artificial imagination.
In late January 2024, sexually explicit AI-generated deepfake images of American musician Taylor Swift proliferated on the social media platforms 4chan and X. Several artificial images of Swift of a sexual or violent nature quickly spread, with one post reported to have been seen over 47 million times before its eventual removal. The images led Microsoft to enhance Microsoft Designer's text-to-image model to prevent future abuse. They also prompted responses from anti-sexual assault advocacy groups, US politicians, Swifties, and Microsoft CEO Satya Nadella, among others, and it has been suggested that Swift's influence could result in new legislation regarding the creation of deepfake pornography.
Graphika is an American social network analysis company known for tracking online disinformation. It was established in 2013.
As artificial intelligence (AI) has become more mainstream, there is growing concern about how this will influence elections. Potential targets of AI include election processes, election offices, election officials and election vendors.
Generative AI pornography, or simply AI pornography, refers to digitally created explicit content produced through generative artificial intelligence (AI) technologies. Unlike traditional pornography, which involves real actors and cameras, this content is synthesized entirely by AI algorithms. These algorithms, including generative adversarial networks (GANs) and text-to-image models, generate lifelike images, videos, or animations from textual descriptions or datasets.