Virtual cinematography

Virtual cinematography is the set of cinematographic techniques performed in a computer graphics environment. It covers a wide variety of subjects, including photographing real objects, often with a stereo or multi-camera setup, for the purpose of recreating them as three-dimensional objects, as well as algorithms for the automated creation of real and simulated camera angles. Virtual cinematography can be used to shoot scenes from otherwise impossible camera angles, create the photography of animated films, and manipulate the appearance of computer-generated effects.

History

Early stages

An early example of a film integrating a virtual environment is the 1998 film What Dreams May Come, starring Robin Williams. The film's special effects team used actual building blueprints to generate scale wireframe models that were then used to generate the virtual world.[1] The film went on to garner numerous nominations and awards, including the Academy Award for Best Visual Effects and the Art Directors Guild Award for Excellence in Production Design.[2] The term "virtual cinematography" emerged in 1999, when special effects artist John Gaeta and his team wanted to name the new cinematic technologies they had created.[3]

Modern virtual cinematography

The Matrix trilogy (The Matrix, The Matrix Reloaded, and The Matrix Revolutions) used early virtual cinematography techniques to develop virtual "filming" of realistic computer-generated imagery. The result of the work by John Gaeta and his crew at ESC Entertainment was the creation of photorealistic CGI versions of the performers, sets, and actions. Their work built on Paul Debevec et al.'s findings from 2000 on acquiring, and subsequently simulating, the reflectance field of the human face using the simplest of light stages.[4] Famous scenes that would have been impossible or exceedingly time-consuming to produce within the context of traditional cinematography include the burly brawl in The Matrix Reloaded (2003), in which Neo fights up to 100 Agent Smiths, and the beginning of the final showdown in The Matrix Revolutions (2003), in which Agent Smith's cheekbone is punched in by Neo,[5] leaving the digital look-alike naturally unharmed.

For The Matrix trilogy, the filmmakers relied heavily on virtual cinematography to attract audiences, although director of photography Bill Pope often used the tool in a much more subtle manner. Nonetheless, these scenes still reached a high level of realism, making it difficult for the audience to notice that they were actually watching shots created entirely by visual effects artists using 3D computer graphics tools.[6]

In Spider-Man 2 (2004), the filmmakers manipulated the cameras to make the audience feel as if they were swinging together with Spider-Man through New York City. Using motion capture camera radar, the camera operator moves simultaneously with the displayed animation,[7] putting the audience in Spider-Man's perspective and heightening the sense of reality. In Avengers: Infinity War (2018), the Titan sequence scenes were created using virtual cinematography. To make the scene more realistic, the producers decided to shoot the entire scene again with a different camera so that it would travel according to the movement of the Titan.[8] The filmmakers also produced what is known as a synthetic lens flare, making the flare closely resemble originally captured footage. When the classic animated film The Lion King was remade in 2019, the producers used virtual cinematography to create realistic animation. In the final battle scene between Scar and Simba, the camera operator again moves the camera according to the movements of the characters.[9] The goal of this technology is to further immerse the audience in the scene.

Methods

Virtual cinematography in post-production

In post-production, advanced technologies are used to modify, re-direct, and enhance scenes captured on set. Stereo or multi-camera setups photograph real objects in such a way that they can be recreated as 3D objects. Motion capture equipment such as tracking dots and helmet cameras can be used on set to facilitate the retroactive collection of data in post-production.[10]
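
The geometric core of this kind of multi-camera recreation is triangulation: the same feature is located in two or more calibrated views, and its 3D position is solved for. The following is a minimal sketch of linear triangulation in Python with NumPy, intended only as a generic illustration; the camera intrinsics and the test point are invented for the example and are not taken from any production pipeline:

```python
import numpy as np

def triangulate_point(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two calibrated views.

    P1, P2 : 3x4 camera projection matrices (intrinsics @ [R | t]).
    x1, x2 : (u, v) pixel coordinates of the same feature in each view.
    """
    A = np.stack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The homogeneous solution is the right singular vector with the
    # smallest singular value.
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]  # dehomogenize to (x, y, z)

# Toy example: two cameras one unit apart along x, both looking down +z.
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])    # assumed intrinsics
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])              # camera at the origin
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0], [0]])])  # camera shifted 1 m

point = np.array([0.2, -0.1, 5.0])                             # ground-truth point
def project_px(P, X):
    h = P @ np.append(X, 1.0)
    return h[:2] / h[2]

print(triangulate_point(P1, P2, project_px(P1, point), project_px(P2, point)))
# Recovers approximately [0.2, -0.1, 5.0]
```

Repeating this for many matched features across many camera pairs yields the dense point data that a reconstruction pipeline can then turn into meshes and textures.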

Machine-vision technology such as photogrammetry, together with 3D scanners, is used to capture 3D geometry. For example, the Arius 3D scanner used for the Matrix sequels was able to acquire details such as fine wrinkles and skin pores as small as 100 µm.[4]

Filmmakers have also experimented with multi-camera rigs to capture motion data without any on-set motion capture equipment. For example, a markerless, photogrammetric multi-camera capture technique based on optical flow was used to create the digital look-alikes for the Matrix films.[4]
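
Optical flow itself is a standard computer-vision computation: for each pixel, it estimates a displacement vector between two frames. The sketch below uses OpenCV's Farnebäck dense optical flow as a generic illustration of that idea; the frame filenames are placeholders, and this is not ESC Entertainment's proprietary capture pipeline:

```python
import cv2

# Two consecutive frames from one camera of a multi-camera rig.
# The filenames are placeholders for this sketch.
prev = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
curr = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

# Dense optical flow (Farnebäck): one (dx, dy) displacement per pixel.
# Positional parameters: pyr_scale, levels, winsize, iterations,
# poly_n, poly_sigma, flags.
flow = cv2.calcOpticalFlowFarneback(prev, curr, None, 0.5, 3, 15, 3, 5, 1.2, 0)

# Per-pixel motion magnitude and direction, which a capture pipeline
# might use to deform a face mesh or interpolate between viewpoints.
mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
print("mean pixel displacement:", float(mag.mean()))
```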

More recently, Martin Scorsese's crime film The Irishman used an entirely new facial-capture system developed by Industrial Light & Magic (ILM), in which a special rig consisting of two digital cameras positioned on either side of the main camera captured motion data in real time alongside the main performances. In post-production, this data was used to digitally render computer-generated versions of the actors.[11][12]

Virtual camera rigs give cinematographers the ability to manipulate a virtual camera within a 3D world and photograph computer-generated 3D models. Once the virtual content has been assembled into a scene within a 3D engine, the imagery can be creatively composed, relit, and re-photographed from other angles as if the action were happening for the first time. This virtual "filming" of realistic CGI also allows for physically impossible camera movements, such as the bullet-time scenes in The Matrix.[4]
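
Conceptually, a virtual camera is just a view transform plus a projection applied to the scene's 3D data, so "re-photographing" a scene amounts to choosing a new camera pose and projecting the same geometry again. The following NumPy sketch of a look-at camera and pinhole projection is only an illustration of that idea; the focal length, sensor size, and scene points are arbitrary example values rather than settings from any production system:

```python
import numpy as np

def look_at(eye, target, up=(0.0, 1.0, 0.0)):
    """World-to-camera rotation R and translation t for a virtual camera
    placed at `eye` and aimed at `target` (a standard look-at construction;
    axis conventions vary between renderers)."""
    eye, target, up = map(np.asarray, (eye, target, up))
    fwd = target - eye
    fwd = fwd / np.linalg.norm(fwd)
    right = np.cross(fwd, up)
    right = right / np.linalg.norm(right)
    true_up = np.cross(right, fwd)
    R = np.stack([right, true_up, fwd])  # rows are the camera axes
    t = -R @ eye                         # so that X_cam = R @ X_world + t
    return R, t

def project(points, R, t, focal_mm=35.0, sensor_mm=36.0, width=1920, height=1080):
    """Pinhole projection of Nx3 world-space points into pixel coordinates."""
    cam = points @ R.T + t                       # world -> camera space
    fx = focal_mm / sensor_mm * width            # focal length in pixels
    u = width / 2 + fx * cam[:, 0] / cam[:, 2]
    v = height / 2 - fx * cam[:, 1] / cam[:, 2]  # flip so that world "up" is up-screen
    return np.stack([u, v], axis=1)

# "Re-photograph" the corners of a unit cube from a new angle simply by
# moving the virtual camera and projecting the same geometry again.
cube = np.array([[x, y, z] for x in (-1, 1) for y in (-1, 1) for z in (-1, 1)], float)
R, t = look_at(eye=(4.0, 3.0, -6.0), target=(0.0, 0.0, 0.0))
print(project(cube, R, t))
```

In a full renderer the projection step is followed by rasterization or ray tracing and relighting, but the camera-placement step above is the part a virtual camera rig exposes to the cinematographer.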

Virtual cinematography can also be used to build complete virtual worlds from scratch. More advanced motion controllers and tablet interfaces have made such visualization techniques possible within the budget constraints of smaller film productions. [13]

On-set effects

The widespread adoption of visual effects spawned a desire to produce these effects directly on-set in ways that did not detract from the actors' performances. [14] Effects artists began to implement virtual cinematographic techniques on-set, making computer-generated elements of a given shot visible to the actors and cinematographers responsible for capturing it. [13]

Techniques such as real-time rendering, which allows an effect to be created before a scene is filmed rather than inserted digitally afterward, utilize previously unrelated technologies, including video game engines, projectors, and advanced cameras, to fuse conventional cinematography with its virtual counterpart.[15][16][17]
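
At its core, such an on-set system is a loop that samples the physical camera's tracked pose every frame and re-renders the virtual content from that pose within the frame budget. The sketch below is a deliberately simplified, hypothetical illustration of that loop in Python; read_tracker and render_background are stand-ins for a real camera-tracking feed and a game-engine render call, not actual APIs from any specific system:

```python
import time
from dataclasses import dataclass

@dataclass
class CameraPose:
    position: tuple  # metres, in stage coordinates
    rotation: tuple  # Euler angles in degrees

def read_tracker() -> CameraPose:
    """Stand-in for an on-set camera tracking system; a real feed would come
    from an optical or encoder-based tracker, while this returns a fixed pose."""
    return CameraPose(position=(0.0, 1.7, -3.0), rotation=(0.0, 15.0, 0.0))

def render_background(pose: CameraPose) -> None:
    """Stand-in for the game-engine call that draws the virtual set from the
    tracked camera's point of view (e.g. onto an LED wall or projector)."""
    pass

FRAME_BUDGET = 1.0 / 24.0  # one film frame at 24 fps

for _ in range(24):  # run one second's worth of frames in this sketch
    start = time.perf_counter()
    pose = read_tracker()      # 1. sample the physical camera's pose
    render_background(pose)    # 2. re-render the virtual set from that pose
    elapsed = time.perf_counter() - start
    time.sleep(max(0.0, FRAME_BUDGET - elapsed))  # 3. hold the target frame rate
```

Because the background is rendered from the live camera's own point of view, parallax and perspective stay consistent with the physical photography, which is what allows the virtual and conventional elements to be captured in a single pass.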

The first real-time motion picture effect was developed by Industrial Light & Magic in conjunction with Epic Games, utilizing the Unreal Engine to display the classic Star Wars "light speed" effect for the 2018 film Solo: A Star Wars Story.[15][18] The technology used for the film, dubbed "Stagecraft" by its creators, was subsequently used by ILM for various Star Wars projects as well as for its parent company Disney's 2019 photorealistic animated remake of The Lion King.[19][20]

Because they are created in camera rather than reconstructed from scanned footage afterward, real-time effects require minimal extra work in post-production. Shots that include on-set virtual cinematography do not require any of the advanced post-production methods described above, although the same effects could otherwise be achieved with traditional CGI animation.

Related Research Articles

Computer animation – Art of creating moving images using computers

Computer animation is the process used for digitally generating moving images. The more general term computer-generated imagery (CGI) encompasses both still images and moving images, while computer animation only refers to moving images. Modern computer animation usually uses 3D computer graphics.

Bullet time is a visual effect or visual impression of detaching the time and space of a camera from those of its visible subject. It is a depth-enhanced simulation of variable-speed action and performance found in films, broadcast advertisements, and real-time graphics within video games and other special media. It is characterized by its extreme transformation of both time and space. This is almost impossible with conventional slow motion, as the physical camera would have to move implausibly fast; the concept implies that only a "virtual camera", often illustrated within the confines of a computer-generated environment such as a virtual world or virtual reality, would be capable of "filming" bullet-time types of moments. Technical and historical variations of this effect have been referred to as time slicing, view morphing, temps mort and virtual cinematography.

Visual effects is the process by which imagery is created or manipulated outside the context of a live-action shot in filmmaking and video production. The integration of live-action footage with other live-action footage or CGI elements to create realistic imagery is called VFX.

Motion capture – Process of recording the movement of objects or people

Motion capture is the process of recording the movement of objects or people. It is used in military, entertainment, sports, medical applications, and for validation of computer vision and robots. In filmmaking and video game development, it refers to recording actions of human actors and using that information to animate digital character models in 2D or 3D computer animation. When it includes face and fingers or captures subtle expressions, it is often referred to as performance capture. In many fields, motion capture is sometimes called motion tracking, but in filmmaking and games, motion tracking usually refers more to match moving.

Skeletal animation – Computer animation technique

Skeletal animation or rigging is a technique in computer animation in which a character is represented in two parts: a surface representation used to draw the character and a hierarchical set of interconnected parts, a virtual armature used to animate the mesh. While this technique is often used to animate humans and other organic figures, it only serves to make the animation process more intuitive, and the same technique can be used to control the deformation of any object—such as a door, a spoon, a building, or a galaxy. When the animated object is more general than, for example, a humanoid character, the set of "bones" may not be hierarchical or interconnected, but simply represent a higher-level description of the motion of the part of mesh it is influencing.

In visual effects, match moving is a technique that allows the insertion of 2D elements, other live action elements or CG computer graphics into live-action footage with correct position, scale, orientation, and motion relative to the photographed objects in the shot. It also allows for the removal of live action elements from the live action shot. The term is used loosely to describe several different methods of extracting camera motion information from a motion picture. Sometimes referred to as motion tracking or camera solving, match moving is related to rotoscoping and photogrammetry. Match moving is sometimes confused with motion capture, which records the motion of objects, often human actors, rather than the camera. Typically, motion capture requires special cameras and sensors and a controlled environment. Match moving is also distinct from motion control photography, which uses mechanical hardware to execute multiple identical camera moves. Match moving, by contrast, is typically a software-based technology, applied after the fact to normal footage recorded in uncontrolled environments with an ordinary camera.

Real-time computer graphics – Sub-field of computer graphics

Real-time computer graphics or real-time rendering is the sub-field of computer graphics focused on producing and analyzing images in real time. The term can refer to anything from rendering an application's graphical user interface (GUI) to real-time image analysis, but is most often used in reference to interactive 3D computer graphics, typically using a graphics processing unit (GPU). One example of this concept is a video game that rapidly renders changing 3D environments to produce an illusion of motion.

Paul Debevec – American computer graphics professional

Paul Ernest Debevec is a researcher in computer graphics at the University of Southern California's Institute for Creative Technologies. He is best known for his work in finding, capturing and synthesizing the bidirectional scattering distribution function utilizing the light stages his research team constructed to find and capture the reflectance field over the human face, high-dynamic-range imaging and image-based modeling and rendering.

Computer facial animation is primarily an area of computer graphics that encapsulates methods and techniques for generating and animating images or models of a character face. The character can be a human, a humanoid, an animal, a legendary creature or character, etc. Due to its subject and output type, it is also related to many other scientific and artistic fields from psychology to traditional animation. The importance of human faces in verbal and non-verbal communication and advances in computer graphics hardware and software have caused considerable scientific, technological, and artistic interests in computer facial animation.

Digital puppetry is the manipulation and performance of digitally animated 2D or 3D figures and objects in a virtual environment that are rendered in real-time by computers. It is most commonly used in filmmaking and television production but has also been used in interactive theme park attractions and live theatre.

Human image synthesis – Computer generation of human images

Human image synthesis is technology that can be applied to make believable and even photorealistic renditions of human likenesses, moving or still. It has effectively existed since the early 2000s. Many films using computer-generated imagery have featured synthetic images of human-like characters digitally composited onto real or other simulated film material. Towards the end of the 2010s, deep learning artificial intelligence was applied to synthesize images and video that look like humans, without the need for human assistance once the training phase has been completed, whereas the old-school 7D route required massive amounts of human work.

Previsualization is the visualizing of scenes or sequences in a movie before filming. It is a concept used in other creative arts, including animation, performing arts, video game design, and still photography. Previsualization typically describes techniques like storyboarding, which uses hand-drawn or digitally-assisted sketches to plan or conceptualize movie scenes.

Motion graphics – Digital footage or animation that creates the illusion of motion or rotation

Motion graphics are pieces of animation or digital footage that create the illusion of motion or rotation, and are usually combined with audio for use in multimedia projects. Motion graphics are usually displayed via electronic media technology, but may also be displayed via manual powered technology. The term distinguishes static graphics from those with a transforming appearance over time, without over-specifying the form. While any form of experimental or abstract animation can be called motion graphics, the term typically more explicitly refers to the commercial application of animation and effects to video, film, TV, and interactive applications.

3D computer graphics – Graphics that use a three-dimensional representation of geometric data

3D computer graphics, sometimes called CGI, 3-D-CGI or three-dimensional computer graphics, are graphics that use a three-dimensional representation of geometric data that is stored in the computer for the purposes of performing calculations and rendering digital images, usually 2D images but sometimes 3D images. The resulting images may be stored for viewing later or displayed in real time.

Computer graphics – Graphics created using computers

Computer graphics deals with generating images and art with the aid of computers. Today, computer graphics is a core technology in digital photography, film, video games, digital art, cell phone and computer displays, and many specialized applications. A great deal of specialized hardware and software has been developed, with the displays of most devices being driven by computer graphics hardware. It is a vast and recently developed area of computer science. The phrase was coined in 1960 by computer graphics researchers Verne Hudson and William Fetter of Boeing. It is often abbreviated as CG, or typically in the context of film as computer-generated imagery (CGI). The non-artistic aspects of computer graphics are the subject of computer science research.

The history of computer animation began as early as the 1940s and 1950s, when people began to experiment with computer graphics, most notably John Whitney. It was only by the early 1960s, when digital computers had become widely established, that new avenues for innovative computer graphics blossomed. Initially, uses were mainly for scientific, engineering and other research purposes, but artistic experimentation began to make its appearance by the mid-1960s, most notably by Dr. Thomas Calvert. By the mid-1970s, many such efforts were beginning to enter into public media. Much computer graphics at this time involved 2D imagery, though increasingly, as computer power improved, efforts to achieve 3D realism became the emphasis. By the late 1980s, photorealistic 3D was beginning to appear in films, and by the mid-1990s it had developed to the point where 3D animation could be used for entire feature film production.

Computer-generated imagery – Application of computer graphics to create or contribute to images

Computer-generated imagery (CGI) is a specific technology or application of computer graphics for creating or improving images in art, printed media, simulators, videos and video games. These images are either static or dynamic. CGI refers to both 2D computer graphics and 3D computer graphics with the purpose of designing characters, virtual worlds, or scenes and special effects. The application of CGI for creating or improving animations is called computer animation, or CGI animation.

Light stage – Equipment used for shape, texture, reflectance and motion capture

A light stage is an active illumination system used for shape, texture, reflectance and motion capture often with structured light and a multi-camera setup.

Volumetric capture or volumetric video is a technique that captures a three-dimensional space, such as a location or performance. This type of volumography acquires data that can be viewed on flat screens as well as using 3D displays and VR goggles. Consumer-facing formats are numerous and the required motion capture techniques lean on computer graphics, photogrammetry, and other computation-based methods. The viewer generally experiences the result in a real-time engine and has direct input in exploring the generated volume.

On-set virtual production (OSVP), also known as virtual production (VP), or In-Camera Visual Effects (ICVFX), and often called The Volume, is an entertainment technology for television and film production in which LED panels are used as a backdrop for a set, on which video or computer-generated imagery can be displayed in real-time. The use of OSVP became widespread after its use in the first season of The Mandalorian (2019), which used Unreal Engine, developed by Epic Games.

References

  1. Silberman, Steve (May 1, 2003). "MATRIX2". Wired. ISSN   1059-1028 . Retrieved April 2, 2020.
  2. What Dreams May Come, IMDb, retrieved April 2, 2020
  3. "VFXPro – The Daily Visual Effects Resource". March 18, 2004. Archived from the original on March 18, 2004. Retrieved April 2, 2020.
  4. Paul Debevec; Tim Hawkins; Chris Tchou; Haarm-Pieter Duiker; Westley Sarokin; Mark Sagar (2000). "Acquiring the reflectance field of a human face". Proceedings of the 27th annual conference on Computer graphics and interactive techniques - SIGGRAPH '00. pp. 145–156. doi:10.1145/344779.344855. ISBN 1-58113-208-5. S2CID 2860203.
  5. Borshukov, George. "Making of The Superpunch" (PDF). Presented at Imagina '04.
  6. David Fincher – Invisible Details, archived from the original on December 21, 2021, retrieved April 2, 2020
  7. The Amazing Spider-Man 2 – Virtual Cinematography, archived from the original on December 21, 2021, retrieved April 2, 2020
  8. Avengers: Infinity War VFX | Breakdown – Cinematography | Weta Digital , retrieved April 2, 2020
  9. The Lion King – virtual cinematography and VFX, archived from the original on December 21, 2021, retrieved April 2, 2020
  10. Breznican, Anthony (December 9, 2019). "The Irishman, Avengers: Endgame, and the De-aging Technology That Could Change Acting Forever". Vanity Fair. Retrieved April 3, 2020.
  11. "Robert De Niro said no green screen. No face dots. How 'The Irishman's' de-aging changes Hollywood". Los Angeles Times. January 2, 2020. Retrieved April 3, 2020.
  12. Desowitz, Bill (December 6, 2019). "'The Irishman': How Industrial Light & Magic's Innovative De-Aging VFX Rescued Martin Scorsese's Mob Epic". IndieWire. Retrieved April 3, 2020.
  13. "Virtual Cinematography: Beyond Big Studio Production". idea.library.drexel.edu. Retrieved April 2, 2020.
  14. "Sir Ian McKellen: Filming The Hobbit made me think I should quit acting". Radio Times . Retrieved April 2, 2020.
  15. Roettgers, Janko (May 15, 2019). "How Video-Game Engines Help Create Visual Effects on Movie Sets in Real Time". Variety. Retrieved April 2, 2020.
  16. Leonard Barolli; Fatos Xhafa; Makoto Ikeda, eds. (2016). CISIS 2016 : 2016 10th International Conference on Complex, Intelligent, and Software Intensive Systems : proceedings : Fukuoka Institute of Technology (FIT), Fukuoka, Japan, 6–8 July 2016. Los Alamitos, California: IEEE Computer Society. ISBN   9781509009879. OCLC   972631841.
  17. Choi, Wanho; Lee, Taehyung; Kang, Wonchul (2019). "Beyond the Screen". SIGGRAPH Asia 2019 Technical Briefs. Brisbane, Queensland, Australia: ACM Press. pp. 65–66. doi:10.1145/3355088.3365140. ISBN   978-1-4503-6945-9. S2CID   184931978.
  18. Morin, David (February 14, 2019). "Unreal Engine powers ILM's VR virtual production toolset on "Solo: A Star Wars Story"". www.unrealengine.com. Retrieved April 2, 2023.
  19. "How Lucasfilm's New "Stagecraft" Tech Brought 'The Mandalorian' to Life and May Change the Future of TV". /Film. November 20, 2019. Retrieved April 2, 2020.
  20. "'The Lion King's' VR helped make a hit. It could also change movie making". Los Angeles Times. July 26, 2019. Retrieved April 2, 2020.
