Digital cloning


Digital cloning is an emerging technology that uses deep-learning algorithms to manipulate existing audio, photos, and videos into hyper-realistic fakes. [1] One impact of this technology is that hyper-realistic videos and photos make it difficult for the human eye to distinguish what is real from what is fake. [2] Furthermore, as various companies make such technologies available to the public, they bring both benefits and potential legal and ethical concerns.


Digital cloning can be categorized into audio-visual (AV), memory, personality, and consumer behavior cloning. [3] In AV cloning, a cloned digital version of a digital or non-digital original can be used, for example, to create a fake image, an avatar, or a fake video or audio of a person that cannot easily be distinguished from the real person it purports to represent. A memory and personality clone, such as a mindclone, is essentially a digital copy of a person's mind. A consumer behavior clone is a profile or cluster of customers based on demographics.

Truby and Brown coined the term “digital thought clone” to refer to the evolution of digital cloning into a more advanced personalized digital clone that consists of “a replica of all known data and behavior on a specific living person, recording in real-time their choices, preferences, behavioral trends, and decision making processes.” [3]

Digital cloning first became popular in the entertainment industry. The idea of digital clones originated with movie companies creating virtual versions of actors who had died. When an actor dies during a movie production, a digital clone of the actor can be synthesized from past footage, photos, and voice recordings to mimic the real person so that production can continue. [4]

Modern artificial intelligence has allowed for the creation of deepfakes: videos manipulated to the point where the person depicted appears to say or do things to which he or she may never have consented. [5] In April 2018, BuzzFeed released a deepfake video made with Jordan Peele, manipulated to depict former President Barack Obama making statements he had never made in public, in order to warn viewers about the potential dangers of deepfakes. [6]

In addition to deepfakes, companies such as Intellitar now allow users to easily create digital clones of themselves by feeding the platform a series of images and voice recordings. This creates a form of digital immortality, allowing loved ones to interact with representations of those who have died. [7] Digital cloning not only allows people to digitally memorialize their loved ones; it can also be used to create representations of historical figures for use in educational settings.

With the development of these technologies, numerous concerns arise, including identity theft, data breaches, and other ethical issues. One of the problems with digital cloning is that there is little to no legislation to protect potential victims against these possible harms. [8]

Technology

Intelligent Avatar Platforms (IAP)

An Intelligent Avatar Platform (IAP) is an online platform, supported by artificial intelligence, that allows users to create a clone of themselves. [7] Individuals must train their clone to act and speak like them by feeding the algorithm numerous voice recordings and videos of themselves. [9] Essentially, these platforms are marketed as a place where one 'lives eternally', as users can interact with other avatars on the same platform. IAPs are becoming a means of attaining digital immortality, along with maintaining a family tree and legacy for following generations to see. [7]

Examples of IAPs include Intellitar and Eterni.me. Although most of these companies are still in their early stages, they share the same goal: allowing users to create exact duplicates of themselves that store every memory in their mind in cyberspace. [7] Some offer a free version, which only allows the user to choose an avatar from a given set of images and audio. With the premium tier, however, these companies ask the user to upload photos, videos, and audio recordings of themselves to form a realistic version of themselves. [10] Additionally, to ensure that the clone resembles the original person as closely as possible, companies encourage users to interact with their own clone by chatting with it and answering questions for it. This allows the algorithm to learn the cognition of the original person and apply it to the clone. Intellitar closed down in 2012 because of intellectual property battles over the technology it used. [11]

Potential concerns with IAPs include data breaches and the lack of consent from the deceased. An IAP must have strong safeguards against data breaches and hacking in order to protect the personal information of the dead, which can include voice recordings, photos, and messages. [9] In addition to the risk of personal privacy being compromised, there is also the risk of violating the privacy of the deceased. Although a person can consent to the creation of a digital clone of themselves before their physical death, they cannot consent to the actions the digital clone may later take.

Deepfakes

As described earlier, a deepfake is a form of video manipulation in which the people depicted can be changed by feeding the model various images of the desired person. Furthermore, the voice and words of the person in the video can be changed by submitting a series of voice recordings of the new person lasting only about one or two minutes. In 2018, a new app called FakeApp was released, allowing the public easy access to this technology for creating videos. The app was also used to create the BuzzFeed video of former President Barack Obama. [6] [12] With deepfakes, industries can cut the cost of hiring actors or models for films and advertisements, producing video efficiently and at low cost simply by collecting a series of photos and audio recordings with the consent of the individual. [13]

A potential concern with deepfakes is that access is available to virtually anyone who downloads one of the apps offering the service. With the tool so widely accessible, some may use it maliciously to create revenge porn or manipulative videos of public officials making statements they would never make in real life. This not only invades the privacy of the individual in the video but also raises various ethical concerns. [14]

Voice cloning

Voice cloning is an audio deepfake technique that uses artificial intelligence to generate a clone of a person's voice. It relies on a deep-learning algorithm that takes in voice recordings of an individual and synthesizes a voice that faithfully replicates the original with great accuracy of tone and likeness. [15]

Cloning a voice requires high-performance computers. The computations are usually done on graphics processing units (GPUs) and very often rely on cloud computing, due to the enormous amount of calculation required.

Audio data for training must be fed into an artificial intelligence model. These are often original recordings that provide examples of the voice of the person concerned. The model can use this data to create an authentic-sounding voice, which can reproduce whatever is typed (text-to-speech) or spoken (speech-to-speech).
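The pipeline described above, reference recordings distilled into a speaker profile that then conditions synthesis, can be sketched in miniature. This is a toy illustration only: real voice-cloning systems use deep neural networks, and the `speaker_embedding` and `synthesize` functions below are hypothetical stand-ins, with plain feature-vector averaging in place of learned encoders.

```python
# Toy illustration of the training/inference split: a "speaker
# embedding" is derived from reference recordings, then conditions
# a synthesizer. Real systems learn this with deep networks; here
# the embedding is simply an average feature vector.

def speaker_embedding(recordings):
    """Average per-recording feature vectors into one speaker profile."""
    dims = len(recordings[0])
    totals = [0.0] * dims
    for rec in recordings:
        for i, value in enumerate(rec):
            totals[i] += value
    return [t / len(recordings) for t in totals]

def synthesize(text, embedding):
    """Stand-in for text-to-speech conditioned on a speaker embedding."""
    return {"text": text, "speaker": embedding}

# Feature vectors standing in for processed reference recordings.
refs = [[0.2, 0.4], [0.4, 0.2]]
emb = speaker_embedding(refs)
utterance = synthesize("Hello, world", emb)
```

In a real system the embedding would capture timbre and prosody learned from minutes of audio; the structure (reference audio in, conditioned synthesis out) is the part that carries over.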

This technology worries many because of its impact on various issues, from political discourse to the rule of law. Some of the early warning signs have already appeared in the form of phone scams [16] [17] and fake videos on social media of people doing things they never did. [18]

Protections against these threats can be implemented in two main ways. The first is to analyze or detect the authenticity of a video. This approach is likely to be a losing battle, as ever-evolving generators defeat each new detector. The second is to embed creation and modification information in software or hardware. [19] [20] This would work only if the data were not editable; the idea is to create an inaudible watermark that acts as a source of truth. [21] In other words, one could verify whether a video is authentic by seeing where it was shot, produced, edited, and so on. [15]
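As a minimal sketch of the inaudible-watermark idea, provenance bits can be hidden in the least significant bit of 16-bit audio samples, where a change of at most one quantization step is imperceptible to a listener but machine-readable. Real provenance schemes are far more robust (they must survive compression and editing); this only illustrates the principle.

```python
# Hide provenance bits in the least significant bit (LSB) of
# 16-bit PCM audio samples. Each sample changes by at most 1,
# which is inaudible, yet the bits can be read back exactly.

def embed_watermark(samples, bits):
    """Overwrite the LSB of the first len(bits) samples with watermark bits."""
    marked = list(samples)
    for i, bit in enumerate(bits):
        marked[i] = (marked[i] & ~1) | bit  # clear LSB, then set it to `bit`
    return marked

def extract_watermark(samples, n_bits):
    """Read the watermark back from the first n_bits samples."""
    return [s & 1 for s in samples[:n_bits]]

audio = [1000, 1001, -2000, 32767, 12345, 678]  # raw 16-bit samples
mark = [1, 0, 1, 1, 0, 1]                       # provenance bits
marked = embed_watermark(audio, mark)
recovered = extract_watermark(marked, len(mark))
```

A deployed scheme would spread the bits redundantly across the signal and sign them cryptographically so that editing the audio breaks the watermark verifiably.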

15.ai, a non-commercial freeware web application that began as a proof of concept for democratizing voice acting and dubbing through technology, gives the public access to such technology. [22] Its gratis and non-commercial nature (with the only stipulation being that the project be properly credited when used [23] ), ease of use, and substantial improvements over existing text-to-speech implementations have been lauded by users; [24] [25] [26] however, some critics and voice actors have questioned the legality and ethics of leaving such technology publicly available and readily accessible. [22] [27] [28] [29]

Although this technology is still in the developmental stage, it is advancing rapidly as big technology corporations such as Google and Amazon invest vast amounts of money in its development. [30]

Positive uses of voice cloning include the ability to synthesize millions of audiobooks without human labor. [31] Voice cloning has also been used to translate podcast content into different languages using the podcaster's own voice. [32] In addition, people who have lost their voices can regain a sense of individuality by creating a voice clone from recordings of themselves speaking made before the loss. [33]

On the other hand, voice cloning is also susceptible to misuse. For example, the voices of celebrities and public officials can be cloned and made to say something provocative, even though the actual person has no association with what their voice said. [34]

In recognition of the threat that voice cloning poses to privacy, civility, and democratic processes, institutions including the Federal Trade Commission, the U.S. Department of Justice, the Defense Advanced Research Projects Agency (DARPA), and the Italian Ministry of Education, University and Research (MIUR) have weighed in on various audio deepfake use cases and methods that might be used to combat them. [35] [36] [37]

Constructive uses

Education

Digital cloning can be useful in an educational setting to create a more immersive experience for students. Some students learn better through more interactive experiences, and creating deepfakes can enhance their learning. One example is creating a digital clone of a historical figure, such as Abraham Lincoln, to show what problems he faced during his life and how he overcame them. Another example is having speakers create digital clones of themselves. Advocacy groups may have trouble with scheduling as they tour various schools during the year; by creating digital clones of themselves, their clones can present the topic at places the group could not physically reach. These educational benefits offer students a new way of learning and give access to those who previously could not reach such resources due to their circumstances. [13]

Arts

Although digital cloning has long been used in the entertainment and arts industries, artificial intelligence can greatly expand its uses there. The movie industry can create ever more hyper-realistic versions of actors and actresses who have died. It can also create digital clones for scenes that require extras, which can cut production costs immensely. Digital cloning and related technologies can also benefit non-commercial work. For example, artists can be more expressive when synthesizing avatars for their video productions, and they can create digital avatars to draft their work and help formulate their ideas before moving on to the final piece. [13] Actor Val Kilmer lost his voice in 2014 after a tracheotomy due to throat cancer. He later partnered with an AI company that produced a synthetic voice based on his previous recordings, which enabled Kilmer to reprise his "Iceman" role from the 1986 film Top Gun in the 2022 sequel Top Gun: Maverick. [38]

Digital immortality

Although digital immortality has existed for a while, in the sense that social media accounts of the deceased remain in cyberspace, creating an immortal virtual clone takes on a new meaning. A digital clone can capture not only a person's visual presence but also their mannerisms, personality, and cognition. With digital immortality, people can continue to interact with their loved ones after death, potentially dissolving the barrier of physical death. Furthermore, families can connect across multiple generations, forming a kind of family tree that passes the family legacy, and its history, down to future generations. [7]

Concerns

Fake news

With the lack of regulation of deepfakes, several concerns have arisen. Harmful deepfake videos could depict political officials displaying inappropriate behavior, police officers shooting unarmed black men, or soldiers murdering innocent civilians, even though none of these events ever occurred. [39] With such hyper-realistic videos released on the Internet, the public can easily be misinformed, which could push people to act on falsehoods and so feed a vicious cycle of unnecessary harm. Additionally, with the recent rise of fake news, there is the possibility of combining deepfakes with fake news, making it even harder to distinguish what is real from what is fake. Visual information is very convincing to the human eye; the combination of deepfakes and fake news could therefore have a detrimental effect on society. [13] Strict regulations should be adopted by social media companies and other news platforms. [40]

Personal use

Deepfakes can also be used maliciously to sabotage someone on a personal level. With the increased accessibility of deepfake technology, blackmailers and thieves can easily extract personal information for financial gain and other purposes by creating videos of a victim's loved ones asking for help. [13] Furthermore, voice cloning can be used by criminals to make fake phone calls to victims. The calls carry the exact voice and mannerisms of the impersonated individual, which can trick victims into unknowingly giving private information to the criminal. [41] Alternatively, a bad actor could, for example, superimpose a deepfake of a person onto a video to extract blackmail payments or as an act of revenge porn.

Deepfakes and voice clones created for personal use can be extremely difficult to pursue under the law because there is no commercial harm. Rather, the damage often takes the form of psychological and emotional harm, which is difficult for a court to remedy. [5]

Ethical implications

Although numerous legal problems arise with the development of such technology, there are also ethical problems that may not be covered by current legislation. One of the biggest issues with deepfakes and voice cloning is the potential for identity theft. However, identity theft via deepfakes is difficult to prosecute because no laws currently address deepfakes specifically. Furthermore, the damage malicious deepfakes cause is more psychological and emotional than financial, which makes it harder to provide a remedy for. Allen argues that one's privacy should be treated in a manner similar to Kant's categorical imperative. [5]

Another ethical implication is the private and personal information one must give up to use the technology. Because digital cloning, deepfakes, and voice cloning all use deep-learning algorithms, the more information an algorithm receives, the better its results. [42] However, every platform carries a risk of data breach, which could potentially expose very personal information to groups that users never consented to. Furthermore, post-mortem privacy comes into question when family members try to gather as much information as possible to create a digital clone of the deceased, without the deceased's permission regarding how much information they were willing to give up. [43]

Existing laws in the United States

In the United States, copyright law requires some degree of originality and creativity in order to protect an author's individuality. Creating a digital clone, however, simply means taking personal data, such as photos, voice recordings, and other information, to create a virtual person as close to the actual person as possible. In the Supreme Court case Feist Publications, Inc. v. Rural Telephone Service Co., Justice O'Connor emphasized the importance of originality and some degree of creativity. However, the required extent of originality and creativity is not clearly defined, creating a gray area in copyright law. [44] Creating a digital clone requires not only the person's data but also the creator's input into how the digital clone should act or move. In Meshwerks v. Toyota, this question was raised, and the court stated that the copyright principles developed for photography should apply to digital clones. [44]

Right of publicity

Given the current lack of legislation protecting individuals against the malicious use of digital cloning, the right of publicity may be the best legal protection available. [4] The right of publicity, also referred to as personality rights, gives individuals autonomy over their own voice, appearance, and other aspects that essentially make up their personality in a commercial setting. [45] If a deepfake video or digital clone of a person arises without their consent, depicting them taking actions or making statements out of character, they can take legal action by claiming a violation of their right of publicity. Although the right of publicity specifically protects an individual's image in a commercial setting, which requires some kind of profit, some argue that the legislation could be updated to protect virtually anyone's image and personality. [46] Another important note is that the right of publicity is implemented only in specific states, and states may interpret the right differently.

Preventative measures

Regulation

Digital and digital thought clones raise legal issues relating to data privacy, informed consent, anti-discrimination, copyright, and right of publicity. More jurisdictions urgently need to enact legislation similar to the General Data Protection Regulation in Europe to protect people against unscrupulous and harmful uses of their data and the unauthorised development and use of digital thought clones. [3]

Technology

One way to avoid becoming a victim of the technologies mentioned above is to develop artificial intelligence against those very algorithms. Several companies have already developed artificial intelligence that can detect manipulated images by examining the patterns in each pixel. [47] Applying similar logic, they are trying to create software that takes each frame of a given video and analyzes it pixel by pixel to find the patterns of the original video and determine whether it has been manipulated. [48]
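A toy version of that pixel-by-pixel frame analysis can be written as follows. It flags pixels whose change between two consecutive frames is far above the frame's typical change, a crude stand-in for the statistical pattern analysis real detectors perform; the `factor` threshold and the example frames are arbitrary choices for illustration.

```python
# Flag pixels whose frame-to-frame change is anomalously large
# compared with the frame's average change. Real detectors learn
# far subtler statistical fingerprints of manipulation; this only
# demonstrates the per-pixel comparison idea.

def suspicious_pixels(frame_a, frame_b, factor=4.0):
    """Return (row, col) positions whose change is far above average."""
    diffs = [[abs(b - a) for a, b in zip(row_a, row_b)]
             for row_a, row_b in zip(frame_a, frame_b)]
    flat = [d for row in diffs for d in row]
    mean = sum(flat) / len(flat)
    return [(r, c) for r, row in enumerate(diffs)
            for c, d in enumerate(row)
            if mean > 0 and d > factor * mean]

# Two 3x3 grayscale "frames": normal sensor noise plus one
# pixel that jumps implausibly between frames.
frame1 = [[10, 10, 10], [10, 10, 10], [10, 10, 10]]
frame2 = [[11, 10, 11], [10, 200, 11], [11, 10, 10]]
flagged = suspicious_pixels(frame1, frame2)
```

Running the analysis over every consecutive frame pair of a video, as the text describes, would localize regions whose temporal behavior deviates from the rest of the footage.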

In addition to developing new technology that can detect video manipulation, many researchers are urging private corporations to create stricter guidelines to protect individual privacy. [30] As artificial intelligence begins to appear in virtually every aspect of society, including medicine, education, politics, and the economy, it is necessary to ask how it affects society and to have laws that protect human rights as the technology spreads. As the private sector gains more digital power over the public, strict regulations and laws are important to prevent private corporations from using personal data maliciously. Additionally, the history of data breaches and privacy policy violations should serve as a warning about how personal information can be accessed and used without a person's consent. [8]

Digital literacy

Another way to avoid being harmed by these technologies is to educate people on their pros and cons. Doing so empowers each individual to make rational decisions based on their own circumstances. [49] It is also important to teach people how to protect the information they put on the Internet. By increasing the digital literacy of the public, people have a greater chance of determining whether a given video has been manipulated, since they will be more skeptical of the information they find online. [30]

See also

Related Research Articles

<span class="mw-page-title-main">Privacy</span> Seclusion from unwanted attention

Privacy is the ability of an individual or group to seclude themselves or information about themselves, and thereby express themselves selectively.

The ethics of technology is a sub-field of ethics addressing the ethical questions specific to the Technology Age, the transitional shift in society wherein personal computers and subsequent devices provide for the quick and easy transfer of information. Technology ethics is the application of ethical thinking to the growing concerns of technology as new technologies continue to rise in prominence.

<span class="mw-page-title-main">Human image synthesis</span> Computer generation of human images

Human image synthesis is technology that can be applied to make believable and even photorealistic renditions of human-likenesses, moving or still. It has effectively existed since the early 2000s. Many films using computer generated imagery have featured synthetic images of human-like characters digitally composited onto the real or other simulated film material. Towards the end of the 2010s deep learning artificial intelligence has been applied to synthesize images and video that look like humans, without need for human assistance, once the training phase has been completed, whereas the old school 7D-route required massive amounts of human work .

<span class="mw-page-title-main">Virtual assistant</span> Software agent

A virtual assistant (VA) is a software agent that can perform a range of tasks or services for a user based on user input such as commands or questions, including verbal ones. Such technologies often incorporate chatbot capabilities to simulate human conversation, such as via online chat, to facilitate interaction with their users. The interaction may be via text, graphical interface, or voice - as some virtual assistants are able to interpret human speech and respond via synthesized voices.

Music and artificial intelligence (AI) is the development of music software programs which use AI to generate music. As with applications in other fields, AI in music also simulates mental tasks. A prominent feature is the capability of an AI algorithm to learn based on past data, such as in computer accompaniment technology, wherein the AI is capable of listening to a human performer and performing accompaniment. Artificial intelligence also drives interactive composition technology, wherein a computer composes music in response to a live performance. There are other AI applications in music that cover not only music composition, production, and performance but also how music is marketed and consumed. Several music player programs have also been developed to use voice recognition and natural language processing technology for music voice control. Current research includes the application of AI in music composition, performance, theory and digital sound processing.

Digital immortality is the hypothetical concept of storing a person's personality in digital substrate, i.e., a computer, robot or cyberspace. The result might look like an avatar behaving, reacting, and thinking like a person on the basis of that person's digital archive. After the death of the individual, this avatar could remain static or continue to learn and self-improve autonomously.

Hany Farid is an American university professor who specializes in the analysis of digital images and the detection of digitally manipulated images such as deepfakes. Farid served as Dean and Head of School for the UC Berkeley School of Information. In addition to teaching, writing, and conducting research, Farid acts as a consultant for non-profits, government agencies, and news organizations. He is the author of the book Photo Forensics (2016).

DeepFace is a deep learning facial recognition system created by a research group at Facebook. It identifies human faces in digital images. The program employs a nine-layer neural network with over 120 million connection weights and was trained on four million images uploaded by Facebook users. The Facebook Research team has stated that the DeepFace method reaches an accuracy of 97.35% ± 0.25% on Labeled Faces in the Wild (LFW) data set where human beings have 97.53%. This means that DeepFace is sometimes more successful than human beings. As a result of growing societal concerns Meta announced that it plans to shut down Facebook facial recognition system, deleting the face scan data of more than one billion users. This change will represent one of the largest shifts in facial recognition usage in the technology's history. Facebook planned to delete by December 2021 more than one billion facial recognition templates, which are digital scans of facial features. However, it did not plan to eliminate DeepFace which is the software that powers the facial recognition system. The company has also not ruled out incorporating facial recognition technology into future products, according to Meta spokesperson.

<span class="mw-page-title-main">Deepfake</span> Realistic artificially generated media

Deepfakes are images, videos, or audio which are edited or generated using artificial intelligence tools, and which may depict real or non-existent people. They are a type of synthetic media.

<span class="mw-page-title-main">Video manipulation</span> Editing of video content for malicious intent

Video manipulation is a type of media manipulation that targets digital video using video processing and video editing techniques. The applications of these methods range from educational videos to videos aimed at (mass) manipulation and propaganda, a straightforward extension of the long-standing possibilities of photo manipulation. This form of computer-generated misinformation has contributed to fake news, and there have been instances when this technology was used during political campaigns. Other uses are less sinister; entertainment purposes and harmless pranks provide users with movie-quality artistic possibilities.

<span class="mw-page-title-main">Artificial intelligence art</span> Machine application of knowledge of human aesthetic expressions

Artificial intelligence art is visual artwork created through the use of an artificial intelligence (AI) program.

Synthetic media is a catch-all term for the artificial production, manipulation, and modification of data and media by automated means, especially through the use of artificial intelligence algorithms, such as for the purpose of misleading people or changing an original meaning. Synthetic media as a field has grown rapidly since the creation of generative adversarial networks, primarily through the rise of deepfakes as well as music synthesis, text generation, human image synthesis, speech synthesis, and more. Though experts use the term "synthetic media," individual methods such as deepfakes and text synthesis are sometimes not referred to as such by the media but instead by their respective terminology Significant attention arose towards the field of synthetic media starting in 2017 when Motherboard reported on the emergence of AI altered pornographic videos to insert the faces of famous actresses. Potential hazards of synthetic media include the spread of misinformation, further loss of trust in institutions such as media and government, the mass automation of creative and journalistic jobs and a retreat into AI-generated fantasy worlds. Synthetic media is an applied form of artificial imagination.

Deepfake pornography, or simply fake pornography, is a type of synthetic pornography that is created via altering already-existing pornographic material by applying deepfake technology to the faces of the actors. The use of deepfake porn has sparked controversy because it involves the making and sharing of realistic videos featuring non-consenting individuals, typically female celebrities, and is sometimes used for revenge porn. Efforts are being made to combat these ethical concerns through legislation and technology-based solutions.

Regulation of artificial intelligence is the development of public sector policies and laws for promoting and regulating artificial intelligence (AI). It is part of the broader regulation of algorithms. The regulatory and policy landscape for AI is an emerging issue in jurisdictions worldwide, including for international organizations without direct enforcement power like the IEEE or the OECD.

An audio deepfake is a product of artificial intelligence used to create convincing speech sentences that sound like specific people saying things they did not say. This technology was initially developed for various applications to improve human life. For example, it can be used to produce audiobooks, and also to help people who have lost their voices to get them back. Commercially, it has opened the door to several opportunities. This technology can also create more personalized digital assistants and natural-sounding text-to-speech as well as speech translation services.

Identity replacement technology is any technology that is used to cover up all or parts of a person's identity, either in real life or virtually. This can include face masks, face authentication technology, and deepfakes on the Internet that spread fake editing of videos and images. Face replacement and identity masking are used by either criminals or law-abiding citizens. Identity replacement tech, when operated on by criminals, leads to heists or robbery activities. Law-abiding citizens utilize identity replacement technology to prevent government or various entities from tracking private information such as locations, social connections, and daily behaviors.

<span class="mw-page-title-main">Generative artificial intelligence</span> AI system capable of generating content in response to prompts

Generative artificial intelligence is artificial intelligence capable of generating text, images, videos, or other data using generative models, often in response to prompts. Generative AI models learn the patterns and structure of their input training data and then generate new data that has similar characteristics.

As artificial intelligence (AI) has become more mainstream, there is growing concern about how it will influence elections. Potential targets of AI-enabled manipulation include election processes, election offices, election officials, and election vendors.

ELVIS Act: Tennessee state law to regulate AI impersonation

The ELVIS Act, or Ensuring Likeness Voice and Image Security Act, signed into law by Tennessee Governor Bill Lee on March 21, 2024, marked a significant milestone in the regulation of artificial intelligence (AI) and in public sector policy for artists in the AI era. It was noted as the first legislation enacted in the United States specifically designed to protect musicians from the unauthorized use of their voices through AI technologies, including audio deepfakes and voice cloning. The law distinguishes itself by imposing penalties for copying a performer's voice without permission, a measure that addresses the sophisticated ability of AI to mimic public figures, including artists.

Discussions on regulation of artificial intelligence in the United States have included the timeliness of regulating AI; the nature of the federal regulatory framework to govern and promote AI, including which agency should lead and what regulatory and governing powers that agency should hold; how to update regulations in the face of rapidly changing technology; and the roles of state governments and courts.

References

  1. Floridi, Luciano (2018). "Artificial Intelligence, Deepfakes and a Future of Ectypes". Philosophy & Technology. 31 (3): 317–321. doi:10.1007/s13347-018-0325-3.
  2. Borel, Brooke (2018). "Clicks, Lies and Videotape". Scientific American. 319 (4): 38–43. Bibcode:2018SciAm.319d..38B. doi:10.1038/scientificamerican1018-38. PMID 30273328. S2CID 52902634.
  3. Truby, Jon; Brown, Rafael (2021). "Human digital thought clones: the Holy Grail of artificial intelligence for big data". Information and Communication Technology Law. 30 (2): 140–168. doi:10.1080/13600834.2020.1850174. hdl:10576/17266. S2CID 229442428. Text was copied from this source, which is available under a Creative Commons Attribution 4.0 International License.
  4. Beard, Joseph (2001). "Clones, Bones and Twilight Zones: Protecting the Digital Persona of the Quick, the Dead and the Imaginary". Berkeley Technology Law Journal. 16 (3): 1165–1271. JSTOR 24116971.
  5. Allen, Anita (2016). "Protecting One's Own Privacy In a Big Data Economy". Harvard Law Review. 130 (2): 71–86.
  6. Silverman, Craig (April 2018). "How To Spot A Deepfake Like The Barack Obama–Jordan Peele Video". BuzzFeed.
  7. Meese, James (2015). "Posthumous Personhood and the Affordances of Digital Media". Mortality. 20 (4): 408–420. doi:10.1080/13576275.2015.1083724. hdl:10453/69288. S2CID 147550696.
  8. Nemitz, Paul Friedrich (2018). "Constitutional Democracy and Technology in the Age of Artificial Intelligence". Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences. 376 (2133): 20180089. Bibcode:2018RSPTA.37680089N. doi:10.1098/rsta.2018.0089. PMID 30323003.
  9. Michalik, Lyndsay (2013). "'Haunting Fragments': Digital Mourning and Intermedia Performance". Theatre Annual. 66 (1): 41–64. ProQuest 1501743129.
  10. Ursache, Marius. "Eternime".
  11. "This start-up promised 10,000 people eternal digital life—then it died | Fusion". Archived from the original on December 30, 2016. Retrieved May 24, 2020.
  12. Agarwal, S.; Farid, H.; Gu, Y.; He, M.; Nagano, K.; Li, H. (June 2019). "Protecting World Leaders Against Deep Fakes". The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops.
  13. Chesney, Robert (2018). "Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security" (PDF). SSRN Electronic Journal. 26 (1): 1–58.
  14. Suwajanakorn, Supasorn (2017). "Synthesizing Obama" (PDF). ACM Transactions on Graphics. 36 (4): 1–13. doi:10.1145/3072959.3073640. S2CID   207586187.
  15. "Everything You Need To Know About Deepfake Voice". Veritone. October 1, 2021. Retrieved June 30, 2022.
  16. David, Dominic. "Council Post: Analyzing The Rise Of Deepfake Voice Technology". Forbes. Retrieved June 30, 2022.
  17. Brewster, Thomas. "Fraudsters Cloned Company Director's Voice In $35 Million Bank Heist, Police Find". Forbes. Retrieved June 29, 2022.
  18. O'Brien, Matt; Ortutay, Barbara. "Why the Anthony Bourdain voice cloning in documentary 'Roadrunner' creeps people out". USA TODAY. Retrieved June 30, 2022.
  19. Rashid, Md Mamunur; Lee, Suk-Hwan; Kwon, Ki-Ryong (2021). "Blockchain Technology for Combating Deepfake and Protect Video/Image Integrity". Journal of Korea Multimedia Society. 24 (8): 1044–1058. doi:10.9717/kmms.2021.24.8.1044. ISSN   1229-7771.
  20. Fraga-Lamas, Paula; Fernández-Caramés, Tiago M. (October 20, 2019). "Fake News, Disinformation, and Deepfakes: Leveraging Distributed Ledger Technologies and Blockchain to Combat Digital Deception and Counterfeit Reality". IT Professional. 22 (2): 53–59. arXiv:1904.05386. doi:10.1109/MITP.2020.2977589.
  21. Ki Chan, Christopher Chun; Kumar, Vimal; Delaney, Steven; Gochoo, Munkhjargal (September 2020). "Combating Deepfakes: Multi-LSTM and Blockchain as Proof of Authenticity for Digital Media". 2020 IEEE / ITU International Conference on Artificial Intelligence for Good (AI4G). pp. 55–62. doi:10.1109/AI4G50087.2020.9311067. ISBN   978-1-7281-7031-2. S2CID   231618774.
  22. Ng, Andrew (April 1, 2020). "Voice Cloning for the Masses". The Batch. Archived from the original on August 7, 2020. Retrieved April 5, 2020.
  23. "15.ai - FAQ". 15.ai. January 18, 2021. Archived from the original on March 3, 2020. Retrieved January 18, 2021.
  24. Ruppert, Liana (January 18, 2021). "Make Portal's GLaDOS And Other Beloved Characters Say The Weirdest Things With This App". Game Informer. Archived from the original on January 18, 2021. Retrieved January 18, 2021.
  25. Zwiezen, Zack (January 18, 2021). "Website Lets You Make GLaDOS Say Whatever You Want". Kotaku. Archived from the original on January 17, 2021. Retrieved January 18, 2021.
  26. Clayton, Natalie (January 19, 2021). "Make the cast of TF2 recite old memes with this AI text-to-speech tool". PC Gamer. Archived from the original on January 19, 2021. Retrieved January 19, 2021.
  27. Ng, Andrew (March 7, 2021). "Weekly Newsletter Issue 83". The Batch. Archived from the original on February 26, 2022. Retrieved March 7, 2021.
  28. Lopez, Ule (January 16, 2022). "Troy Baker-backed NFT firm admits using voice lines taken from another service without permission". Wccftech. Archived from the original on January 16, 2022. Retrieved June 7, 2022.
  29. Henry, Joseph (January 18, 2022). "Troy Baker's Partner NFT Company Voiceverse Reportedly Steals Voice Lines From 15.ai". Tech Times. Archived from the original on January 26, 2022. Retrieved February 14, 2022.
  30. Brayne, Sarah (2018). "Visual Data and the Law". Law & Social Inquiry. 43 (4): 1149–1163. doi:10.1111/lsi.12373. PMC 10857868. PMID 38343725. S2CID 150076575.
  31. Chadha, Anupama; Kumar, Vaibhav; Kashyap, Sonu; Gupta, Mayank (2021). "Deepfake: An Overview". In Singh, Pradeep Kumar; Wierzchoń, Sławomir T.; Tanwar, Sudeep; Ganzha, Maria (eds.). Proceedings of Second International Conference on Computing, Communications, and Cyber-Security. Lecture Notes in Networks and Systems. Vol. 203. Singapore: Springer Singapore. pp. 557–566. doi:10.1007/978-981-16-0733-2_39. ISBN 978-981-16-0732-5. S2CID 236666289. Retrieved June 29, 2022.
  32. Barletta, Bryan (October 12, 2021). "Sounds Profitable en Español!". adtech.podnews.net. Retrieved June 30, 2022.
  33. Etienne, Vanessa (August 19, 2021). "Val Kilmer Gets His Voice Back After Throat Cancer Battle Using AI Technology: Hear the Results". PEOPLE.com. Retrieved June 30, 2022.
  34. Fletcher, John (2018). "Deepfakes, Artificial Intelligence, and Some Kind of Dystopia: The New Faces of Online Post-Fact Performance". Theatre Journal. 70 (4): 455–71. doi:10.1353/tj.2018.0097. S2CID   191988083.
  35. "Voice cloning experts cover crime, positive use cases, and safeguards". January 29, 2020.
  36. "PREMIER - Project". sites.google.com. Retrieved June 29, 2022.
  37. "DARPA Announces Research Teams Selected to Semantic Forensics Program". www.darpa.mil. Retrieved July 1, 2022.
  38. Taylor, Chloe (May 27, 2022). "How AI brought Val Kilmer's 'Ice Man' back into Top Gun: Maverick". Fortune. Retrieved July 5, 2022.
  39. Chesney, Robert (2019). "Deepfakes and the New Disinformation War: The Coming Age of Post-Truth Geopolitics". Foreign Affairs. 98 (1): 147–55. Archived from the original on April 23, 2019. Retrieved April 25, 2019.
  40. Hall, Kathleen (2018). "Deepfake Videos: When Seeing Isn't Believing". Catholic University Journal of Law and Technology. 27 (1): 51–75.
  41. Poudel, Sawrpool (2016). "Internet of Things: Underlying Technologies, Interoperability, and Threats to Privacy and Security". Berkeley Technology Law Review. 31 (2): 997–1021. Archived from the original on April 30, 2017. Retrieved April 25, 2019.
  42. Dang, L. Ming (2018). "Deep Learning Based Computer Generated Face Identification Using Convolutional Neural Network". Applied Sciences. 8 (12): 2610. doi:10.3390/app8122610.
  43. Savin-Baden, Maggi (2018). "Digital Immortality and Virtual Humans" (PDF). Postdigital Science and Education. 1 (1): 87–103. doi:10.1007/s42438-018-0007-6.
  44. Newell, Bryce Clayton (2010). "Independent Creation and Originality in the Age of Imitated Reality: A Comparative Analysis of Copyright and Database Protection for Digital Models of Real People". Brigham Young University International Law & Management. 6 (2): 93–126.
  45. Goering, Kevin (2018). "New York Right of Publicity: Reimagining Privacy and the First Amendment in the Digital Age - AELJ Spring Symposium 2". SSRN Electronic Journal. 36 (3): 601–635.
  46. Harris, Douglas (2019). "Deepfakes: False Pornography Is Here and the Law Cannot Protect You". Duke Law and Technology Review. 17: 99–127.
  47. Bass, Harvey (1998). "A Paradigm for the Authentication of Photographic Evidence in the Digital Age". Thomas Jefferson Law Review. 20 (2): 303–322.
  48. Wen, Jie (2012). "A Malicious Behavior Analysis Based Cyber-I Birth". Journal of Intelligent Manufacturing. 25 (1): 147–55. doi:10.1007/s10845-012-0681-2. S2CID   26225937. ProQuest   1490889853.
  49. Maras, Marie-Helen (2018). "Determining Authenticity of Video Evidence in the Age of Artificial Intelligence and in the Wake of Deepfake Videos". The International Journal of Evidence & Proof. 23 (1): 255–262. doi:10.1177/1365712718807226. S2CID   150336977.