Virtual actor

A virtual actor, also known as a virtual human, virtual persona, or digital clone, is the creation or re-creation of a human being in image and voice using computer-generated imagery and sound, often indistinguishable from the real actor.

The idea of a virtual actor was first portrayed in the 1981 film Looker , in which models' bodies were digitally scanned to create 3D computer-generated images, which were then animated for use in TV commercials. Two 1992 books used this concept: Fools by Pat Cadigan, and Et Tu, Babe by Mark Leyner.

In general, virtual humans employed in movies are known as synthespians, virtual actors, vactors, cyberstars, or "silicentric" actors. There are several legal ramifications for the digital cloning of human actors, relating to copyright and personality rights. People who have already been digitally cloned as simulations include Bill Clinton, Marilyn Monroe, Fred Astaire, Ed Sullivan, Elvis Presley, Bruce Lee, Audrey Hepburn, Anna Marie Goddard, and George Burns. [1] [2]

By 2002, Arnold Schwarzenegger, Jim Carrey, Kate Mulgrew, Michelle Pfeiffer, Denzel Washington, Gillian Anderson, and David Duchovny had all had their heads laser scanned to create digital computer models thereof. [1]

Early history

Early computer-generated animated faces include the 1985 film Tony de Peltrie and the music video for Mick Jagger's song "Hard Woman" (from She's the Boss ). The first actual human beings to be digitally duplicated were Marilyn Monroe and Humphrey Bogart in the March 1987 film Rendez-vous in Montreal, created by Nadia Magnenat Thalmann and Daniel Thalmann for the 100th anniversary of the Engineering Institute of Canada. The film was created by six people over a year, and had Monroe and Bogart meeting in a café in Montreal, Quebec, Canada. The characters were rendered in three dimensions, and were capable of speaking, showing emotion, and shaking hands. [3]

In 1987, the Kleiser-Walczak Construction Company (now Synthespian Studios), founded by Jeff Kleiser and Diana Walczak coined the term "synthespian" and began its Synthespian ("synthetic thespian") Project, with the aim of creating "life-like figures based on the digital animation of clay models". [2] [4]

In 1988, Tin Toy was the first entirely computer-generated movie to win an Academy Award (Best Animated Short Film). In the same year, Mike the Talking Head, an animated head whose facial expression and head posture were controlled in real time by a puppeteer using a custom-built controller, was developed by Silicon Graphics, and performed live at SIGGRAPH. In 1989, The Abyss , directed by James Cameron included a computer-generated face placed onto a watery pseudopod. [3] [5]

In 1991, Terminator 2: Judgment Day , also directed by Cameron, who was confident in the abilities of computer-generated effects after his experience with The Abyss, mixed synthetic actors with live action, including computer models of Robert Patrick's face. Whereas The Abyss contained just one scene with photo-realistic computer graphics, Terminator 2: Judgment Day contained over forty such shots throughout the film. [3] [5] [6]

In 1997, Industrial Light & Magic worked on creating a virtual actor that was a composite of the bodily parts of several real actors. [2]

21st century

By the 21st century, virtual actors had become a reality. The face of Brandon Lee, who had died partway through the shooting of The Crow in 1994, was digitally superimposed over a body double in order to complete those parts of the movie that had yet to be filmed. By 2001, three-dimensional computer-generated realistic humans had been used in Final Fantasy: The Spirits Within , and by 2004, a synthetic Laurence Olivier co-starred in Sky Captain and the World of Tomorrow . [7] [8]

Star Wars

Since the mid-2010s, the Star Wars franchise has become particularly notable for its prominent usage of virtual actors, driven by a desire in recent entries to reuse characters that first appeared in the original trilogy during the late 1970s and early 1980s.

The 2016 Star Wars Anthology film Rogue One: A Star Wars Story is a direct prequel to the 1977 film Star Wars: A New Hope , with the ending scene of Rogue One leading almost immediately into the opening scene of A New Hope. As such, Rogue One necessitated digital recreations of Peter Cushing in the role of Grand Moff Tarkin (played and voiced by Guy Henry), and Carrie Fisher as Princess Leia (played by Ingvild Deila), appearing the same as they did in A New Hope. Fisher's sole spoken line near the end of Rogue One was added using archival audio of her saying the word "hope". Cushing had died in 1994, while Fisher was not available to play Leia during production and died a few days after the film's release. Industrial Light & Magic created the special effects. [9]

Similarly, the 2020 second season of The Mandalorian briefly featured a digital recreation of Mark Hamill's character Luke Skywalker (played by an uncredited body double, with Hamill's voice recreated by an audio deepfake) as portrayed in the 1983 film Return of the Jedi . Canonically, The Mandalorian's storyline takes place roughly five years after the events of Return of the Jedi.

Critics such as Stuart Klawans in the New York Times expressed worry about the loss of "the very thing that art was supposedly preserving: our point of contact with the irreplaceable, finite person". Even more problematic are the issues of copyright and personality rights. Actors have little legal control over a digital clone of themselves. In the United States, for instance, they must resort to database protection laws in order to exercise what control they have (The proposed Database and Collections of Information Misappropriation Act would strengthen such laws). An actor does not own the copyright on their digital clones, unless the clones were created by them. Robert Patrick, for example, would not have any legal control over the liquid metal digital clone of himself that was created for Terminator 2: Judgment Day. [7] [10]

The use of digital clones in the movie industry to replicate the acting performances of a cloned person is a controversial aspect of these implications, as it may cause real actors to land fewer roles and put them at a disadvantage in contract negotiations, since producers could always use a clone at potentially lower cost. It also poses a career difficulty, since a clone could be used in roles that a real actor would not accept for various reasons. Both Tom Waits and Bette Midler have won actions for damages against people who employed their images in advertisements that they had refused to take part in themselves. [11]

In the USA, the use of a digital clone in advertisements is required to be accurate and truthful under section 43(a) of the Lanham Act, which makes deliberate confusion unlawful. The use of a celebrity's image would be an implied endorsement. The United States District Court for the Southern District of New York held that an advertisement employing a Woody Allen impersonator would violate the Act unless it contained a disclaimer stating that Allen did not endorse the product. [11]

Other concerns include posthumous use of digital clones. Even before Brandon Lee was digitally reanimated, the California Senate drew up the Astaire Bill, in response to lobbying from Fred Astaire's widow and the Screen Actors Guild, who were seeking to restrict the use of digital clones of Astaire. Movie studios opposed the legislation, and as of 2002 it had yet to be finalized and enacted. Several companies, including Virtual Celebrity Productions, have purchased the rights to create and use digital clones of various dead celebrities, such as Marlene Dietrich [12] and Vincent Price. [2]

Related Research Articles

Computer animation

Computer animation is the process used for digitally generating moving images. The more general term computer-generated imagery (CGI) encompasses both still images and moving images, while computer animation only refers to moving images. Modern computer animation usually uses 3D computer graphics.

Visual effects is the process by which imagery is created or manipulated outside the context of a live-action shot in filmmaking and video production. The integration of live-action footage with other live-action footage or CGI elements to create realistic imagery is called VFX.

Motion capture

Motion capture is the process of recording the movement of objects or people. It is used in military, entertainment, sports, medical applications, and for validation of computer vision and robots. In films, television shows and video games, motion capture refers to recording actions of human actors and using that information to animate digital character models in 2D or 3D computer animation. When it includes face and fingers or captures subtle expressions, it is often referred to as performance capture. In many fields, motion capture is sometimes called motion tracking, but in filmmaking and games, motion tracking usually refers more to match moving.

Skeletal animation

Skeletal animation or rigging is a technique in computer animation in which a character is represented in two parts: a polygonal or parametric mesh representation of the surface of the object, and a hierarchical set of interconnected parts, a virtual armature used to animate the mesh. While this technique is often used to animate humans and other organic figures, it only serves to make the animation process more intuitive, and the same technique can be used to control the deformation of any object—such as a door, a spoon, a building, or a galaxy. When the animated object is more general than, for example, a humanoid character, the set of "bones" may not be hierarchical or interconnected, but simply represent a higher-level description of the motion of the part of mesh it is influencing.
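The two-part representation described above can be sketched in a few lines of code. This is a minimal, illustrative example (all names are hypothetical, and the rig is simplified to 2D): each bone stores a rotation relative to its parent, and a point's world position is found by accumulating transforms down the hierarchy, which is why rotating an upper arm carries the forearm along with it.

```python
import math

class Bone:
    """One link in a hierarchical armature (2D for simplicity)."""

    def __init__(self, name, length, parent=None):
        self.name = name
        self.length = length      # distance from this joint to the bone's tip
        self.parent = parent      # None for the root bone
        self.local_angle = 0.0    # rotation relative to the parent bone

    def world_transform(self):
        """Accumulate rotations and joint offsets down the hierarchy,
        returning (world angle, world position of this bone's joint)."""
        if self.parent is None:
            return self.local_angle, (0.0, 0.0)
        parent_angle, (px, py) = self.parent.world_transform()
        # This bone's joint sits at the tip of its parent bone.
        ox = px + self.parent.length * math.cos(parent_angle)
        oy = py + self.parent.length * math.sin(parent_angle)
        return parent_angle + self.local_angle, (ox, oy)

# A two-bone "arm": posing only the upper arm repositions the forearm too,
# which is the intuitiveness the technique provides to animators.
upper = Bone("upper_arm", length=1.0)
fore = Bone("forearm", length=1.0, parent=upper)

upper.local_angle = math.pi / 2   # raise the whole arm 90 degrees
angle, origin = fore.world_transform()
print(origin)  # forearm joint is now at approximately (0.0, 1.0)
```

A production rig would use full 3D transforms, and each mesh vertex would be deformed by a weighted blend of several bones' transforms ("skinning"); the hierarchy traversal shown here is the core idea.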

Virtual cinematography

Virtual cinematography is the set of cinematographic techniques performed in a computer graphics environment. It covers subjects such as photographing real objects, often with a stereo or multi-camera setup, in order to recreate them as three-dimensional objects, as well as algorithms for the automated creation of real and simulated camera angles. Virtual cinematography can be used to shoot scenes from otherwise impossible camera angles, create the photography of animated films, and manipulate the appearance of computer-generated effects.

Digital puppetry is the manipulation and performance of digitally animated 2D or 3D figures and objects in a virtual environment that are rendered in real-time by computers. It is most commonly used in filmmaking and television production but has also been used in interactive theme park attractions and live theatre.

Human image synthesis

Human image synthesis is technology that can be applied to make believable and even photorealistic renditions of human likenesses, moving or still. It has effectively existed since the early 2000s. Many films using computer-generated imagery have featured synthetic images of human-like characters digitally composited onto real or other simulated film material. Towards the end of the 2010s, deep learning artificial intelligence was applied to synthesize images and video that look like humans, without the need for human assistance once the training phase has been completed, whereas the old-school route required massive amounts of human work.

Previsualization is the visualizing of scenes or sequences in a movie before filming. It is a concept used in other creative arts, including animation, performing arts, video game design, and still photography. Previsualization typically describes techniques like storyboarding, which uses hand-drawn or digitally-assisted sketches to plan or conceptualize movie scenes.

Digital Effects (studio)

Digital Effects Inc. was an early and innovative computer animation studio at 321 West 44th street in New York City. It was the first computer graphics house in New York City when it opened in 1978, and operated until 1986. It was founded by Judson Rosebush, Jeff Kleiser, Don Leich, David Cox, Bob Hoffman, Jan Prins, and others. Many of the original group came from Syracuse University, where Rosebush taught computer graphics. Rosebush developed the animation software APL Visions and FORTRAN Visions. Kleiser later went on to found Kleiser-Walczak Construction Company, which experimented with creating synthespians and made the animation for Monsters of Grace.

Computer graphics

Computer graphics deals with generating images and art with the aid of computers. Computer graphics is a core technology in digital photography, film, video games, digital art, cell phone and computer displays, and many specialized applications. A great deal of specialized hardware and software has been developed, with the displays of most devices being driven by computer graphics hardware. It is a vast and recently developed area of computer science. The phrase was coined in 1960 by computer graphics researchers Verne Hudson and William Fetter of Boeing. It is often abbreviated as CG, or typically in the context of film as computer generated imagery (CGI). The non-artistic aspects of computer graphics are the subject of computer science research.

Adam Powers, The Juggler

Adam Powers, The Juggler is a 1981 computer animation created by Richard Taylor and Gary Demos and released by Information International, Inc. It was one of the earliest CGI-animated anthropomorphic characters. The character was motion-captured from Ken Rosenthal, a real juggler.

The history of computer animation began as early as the 1940s and 1950s, when people began to experiment with computer graphics – most notably John Whitney. It was only by the early 1960s, when digital computers had become widely established, that new avenues for innovative computer graphics blossomed. Initially, uses were mainly for scientific, engineering and other research purposes, but artistic experimentation began to make its appearance by the mid-1960s – most notably by Dr. Thomas Calvert. By the mid-1970s, many such efforts were beginning to enter into public media. Much computer graphics at this time involved 2D imagery, though increasingly, as computer power improved, efforts to achieve 3D realism became the emphasis. By the late 1980s, photo-realistic 3D was beginning to appear in films, and by the mid-1990s it had developed to the point where 3D animation could be used for entire feature film production.

Computer-generated imagery

Computer-generated imagery (CGI) is a specific technology or application of computer graphics for creating or improving images in art, printed media, simulators, videos and video games. These images are either static or dynamic. CGI refers to both 2D and 3D computer graphics used to design characters, virtual worlds, or scenes and special effects. The application of CGI for creating or improving animations is called computer animation, or CGI animation.

Nadia Magnenat Thalmann

Nadia Magnenat Thalmann is a computer graphics scientist and roboticist, and the founder and head of MIRALab at the University of Geneva. She chaired the Institute for Media Innovation at Nanyang Technological University (NTU), Singapore, from 2009 to 2021.

Daniel Thalmann

Prof. Daniel Thalmann is a Swiss and Canadian computer scientist and a pioneer in virtual humans. He is currently Honorary Professor at EPFL, Switzerland, and Director of Research Development at MIRALab Sarl in Geneva, Switzerland.

Twixt was a 3D computer animation system originally created in 1984 by Julian Gomez at Sun Microsystems. It featured keyframes and tweening in a track-based graphical interface, and was capable of real-time wireframe playback. An Apple Macintosh port, called MacTwixt, was the first known 3D animation software to be released for the Macintosh. It was used by Apple's Advanced Technology Group to create the 1988 short film Pencil Test. Twixt was maintained until 1987 by Cranston/Csuri Productions, and used in their animated television and advertising projects.
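The keyframing and tweening that Twixt featured can be illustrated with a short sketch. This is a hypothetical, minimal version (all names are illustrative, and linear interpolation is chosen for simplicity): the animator sets values only at sparse keyframes, and every in-between frame is computed by blending the surrounding pair.

```python
def tween(keyframes, frame):
    """Return the interpolated value at `frame` from a dict mapping
    {frame_number: value}. Frames outside the keyed range clamp to
    the nearest keyframe's value."""
    frames = sorted(keyframes)
    if frame <= frames[0]:
        return keyframes[frames[0]]
    if frame >= frames[-1]:
        return keyframes[frames[-1]]
    # Find the surrounding pair of keyframes and blend linearly.
    for lo, hi in zip(frames, frames[1:]):
        if lo <= frame <= hi:
            t = (frame - lo) / (hi - lo)
            return keyframes[lo] + t * (keyframes[hi] - keyframes[lo])

# A track keyed at frames 0 and 10: frame 5 falls exactly halfway.
x_track = {0: 0.0, 10: 100.0}
print(tween(x_track, 5))  # 50.0
```

Track-based systems like Twixt maintained one such curve per animated parameter (position, rotation, and so on); production tools typically replace the linear blend with spline interpolation for smoother motion.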

Rendez-vous in Montreal is a 1987 animated film that used advanced computer techniques to achieve such effects as modelling the film stars Marilyn Monroe and Humphrey Bogart. The film was directed by Nadia Magnenat Thalmann and Daniel Thalmann and produced by a team of 10 people. Specific interactive software was developed that allowed designers to use commands interactively to generate the sequences. The main purpose of Rendez-vous in Montreal was to show that true synthetic actors can be created. The film represented a technological breakthrough, both in its software and as a film.

Digital cloning is an emerging technology involving deep-learning algorithms that allows one to manipulate existing audio, photos, and videos to produce hyper-realistic results. One impact of such technology is that hyper-realistic videos and photos make it difficult for the human eye to distinguish what is real from what is fake. Furthermore, with various companies making such technologies available to the public, they can bring various benefits as well as potential legal and ethical concerns.

Synthetic media is a catch-all term for the artificial production, manipulation, and modification of data and media by automated means, especially through the use of artificial intelligence algorithms, such as for the purpose of misleading people or changing an original meaning. Synthetic media as a field has grown rapidly since the creation of generative adversarial networks, primarily through the rise of deepfakes as well as music synthesis, text generation, human image synthesis, speech synthesis, and more. Though experts use the term "synthetic media", individual methods such as deepfakes and text synthesis are sometimes not referred to as such by the media but instead by their respective terminology. Significant attention arose towards the field of synthetic media starting in 2017, when Motherboard reported on the emergence of AI-altered pornographic videos inserting the faces of famous actresses. Potential hazards of synthetic media include the spread of misinformation, further loss of trust in institutions such as media and government, the mass automation of creative and journalistic jobs, and a retreat into AI-generated fantasy worlds. Synthetic media is an applied form of artificial imagination.

Virtual human

A virtual human is a software simulation of a fictional character or human being. Virtual humans have been created as tools and artificial companions in simulation, video games, film production, human factors, ergonomics and usability studies in various industries, the clothing industry, telecommunications (avatars), medicine, etc. These applications require domain-dependent simulation fidelity: a medical application might require an exact simulation of specific internal organs; the film industry requires the highest aesthetic standards, natural movements, and facial expressions; ergonomic studies require faithful body proportions for a particular population segment and realistic locomotion with constraints; etc.

References

  1. Brooks Landon (2002). "Synthespians, Virtual Humans, and Hypermedia". In Veronica Hollinger and Joan Gordon (eds.). Edging Into the Future: Science Fiction and Contemporary Cultural Transformation. University of Pennsylvania Press. pp. 57–59. ISBN 0-8122-1804-3.
  2. Barbara Creed (2002). "The Cyberstar". In Graeme Turner (ed.). The Film Cultures Reader. Routledge. ISBN 0-415-25281-4.
  3. Nadia Magnenat-Thalmann and Daniel Thalmann (2004). Handbook of Virtual Humans. John Wiley and Sons. pp. 6–7. ISBN 0-470-02316-3.
  4. "About | Welcome to Synthespian Studios". Archived from the original on 11 May 2014. Retrieved 26 July 2014.
  5. Paul Martin Lester (2005). Visual Communication: Images With Messages. Thomson Wadsworth. p. 353. ISBN 0-534-63720-5.
  6. Andrew Darley (2000). "The Waning of Narrative". Visual Digital Culture: Surface Play and Spectacle in New Media Genres. Routledge. p. 109. ISBN 0-415-16554-7.
  7. Ralf Remshardt (2006). "The actor as intermedialist: remediation, appropriation, adaptation". In Freda Chapple and Chiel Kattenbelt (eds.). Intermediality in Theatre and Performance. Rodopi. pp. 52–53. ISBN 90-420-1629-9.
  8. Simon Danaher (2004). Digital 3D Design. Thomson Course Technology. p. 38. ISBN 1-59200-391-5.
  9. Itzkoff, Dave (27 December 2016). "How 'Rogue One' Brought Back Familiar Faces". The New York Times. ISSN 0362-4331. Retrieved 27 September 2019.
  10. Laikwan Pang (2006). "Expressions, originality, and fixation". Cultural Control And Globalization in Asia: Copyright, Piracy, and Cinema. Routledge. p. 20. ISBN 0-415-35201-0.
  11. Michael A. Einhorn (2004). "Publicity rights and consumer rights". Media, Technology, and Copyright: Integrating Law and Economics. Edward Elgar Publishing. pp. 121, 125. ISBN 1-84376-657-4.
  12. Los Angeles Times / Digital Elite Inc.
