Virtual camera system

Virtual camera system demo showing parameters of the camera that can be adjusted

In 3D video games, a virtual camera system controls a camera, or a set of cameras, to display a view of a 3D virtual world. Camera systems are used in video games, where their purpose is to show the action at the best possible angle; more generally, they are used in 3D virtual worlds whenever a third-person view is required.


As opposed to filmmakers, virtual camera system creators have to deal with a world that is interactive and unpredictable. It is not possible to know where the player character is going to be in the next few seconds; therefore, it is not possible to plan the shots as a filmmaker would do. To solve this issue, the system relies on certain rules or artificial intelligence to select the most appropriate shots.

There are three main types of camera systems. In fixed camera systems, the camera does not move at all, and the system displays the player's character in a succession of still shots. Tracking cameras, on the other hand, follow the character's movements. Finally, interactive camera systems are partially automated and allow the player to directly change the view. To implement camera systems, video game developers use techniques such as constraint solvers, artificial intelligence scripts, or autonomous agents.

Third-person view

In video games, "third-person" refers to a graphical perspective rendered from a fixed distance behind and slightly above the player character. This viewpoint allows players to see a more strongly characterized avatar and is most common in action games and action-adventure games. Games with this perspective often make use of positional audio, where the volume of ambient sounds varies depending on the position of the avatar. [1]

There are primarily three types of third-person camera systems: the "fixed camera systems" in which the camera positions are set during the game creation; the "tracking camera systems" in which the camera simply follows the player's character; and the "interactive camera systems" that are under the player's control.

Fixed

Selection of shots in Resident Evil 2 that aim at creating tension

With a fixed camera system, the developers set the properties of the camera, such as its position, orientation or field of view, during the game creation. The camera views will not change dynamically, so the same place will always be shown under the same set of views. Games that use fixed cameras include Grim Fandango (1998) and the early Resident Evil and God of War games. [2]
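
The shot-selection logic of such a system can be sketched as a lookup from player position to an authored camera. This is a hypothetical illustration, not any particular game's implementation; the region bounds and camera values below are invented:

```python
# Hypothetical sketch of a fixed camera system: the developers author a set
# of camera poses at design time, and the active shot is chosen purely from
# the player's position. All regions and camera values here are invented.

CAMERAS = {
    "hallway": {"position": (0.0, 3.0, -5.0), "yaw": 90.0, "fov": 60.0},
    "lobby":   {"position": (8.0, 4.0, 2.0), "yaw": 180.0, "fov": 75.0},
}

# (camera name, (min_x, min_z, max_x, max_z)) trigger regions on the floor plan
REGIONS = [
    ("hallway", (0.0, 0.0, 6.0, 2.0)),
    ("lobby", (6.0, 2.0, 12.0, 10.0)),
]

def active_camera(player_x, player_z):
    """Return the authored camera whose trigger region contains the player."""
    for name, (x0, z0, x1, z1) in REGIONS:
        if x0 <= player_x <= x1 and z0 <= player_z <= z1:
            return CAMERAS[name]
    return CAMERAS["hallway"]  # fallback shot if the player is in no region
```

Because every shot is authored in advance, the same place is always shown from the same angle, which is what lets designers compose shots as a filmmaker would frame them.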

One advantage of this camera system is that it allows the game designers to use the language of film, creating mood through camerawork and shot selection. Games that use this kind of technique are often praised for their cinematic qualities. [3] Many games with fixed cameras use tank controls, whereby players control character movement relative to the character's own position rather than the camera's; [4] this lets the player maintain direction when the camera angle changes. [5]
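
Tank controls can be illustrated with a minimal movement step in which turning input rotates the character and forward input moves it along its own facing, independent of any camera. The parameter values below are illustrative assumptions:

```python
import math

def tank_step(x, z, heading_deg, stick_forward, stick_turn,
              speed=2.0, turn_rate=90.0, dt=1.0 / 60):
    """Advance a tank-controlled character one frame: the turn input rotates
    the character, and the forward input moves it along its own facing.
    No camera information is involved, so camera cuts cannot flip controls."""
    heading_deg += stick_turn * turn_rate * dt
    rad = math.radians(heading_deg)
    x += math.sin(rad) * stick_forward * speed * dt
    z += math.cos(rad) * stick_forward * speed * dt
    return x, z, heading_deg
```

Because the movement frame is the character itself, holding "forward" keeps the character walking in the same world direction even when a fixed-camera cut changes the viewpoint.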

Tracking

An illustration of a protagonist whom a player controls and a tracking camera just behind, slightly above, and slightly facing down towards that character

A tracking camera follows the character from behind. The player does not control the camera in any way: they cannot, for example, rotate it or move it to a different position. This type of camera system was very common in early 3D games such as Crash Bandicoot or Tomb Raider, since it is very simple to implement. However, it has a number of issues. In particular, if the current view is not suitable (either because it is occluded by an object, or because it is not showing what the player is interested in), it cannot be changed, since the player does not control the camera. [6] [7] [8] This viewpoint can also cause difficulty when a character turns or stands face out against a wall: the camera may jerk or end up in an awkward position. [1]
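
A basic tracking camera can be sketched as a rigid offset behind and above the character; the distance and height values below are illustrative assumptions:

```python
import math

def track_camera(char_pos, char_heading_deg, distance=4.0, height=2.0):
    """Place the camera a fixed distance behind and above the character,
    based only on the character's position and heading. The player has no
    input into this placement."""
    rad = math.radians(char_heading_deg)
    cam_x = char_pos[0] - math.sin(rad) * distance
    cam_z = char_pos[2] - math.cos(rad) * distance
    cam_y = char_pos[1] + height
    return (cam_x, cam_y, cam_z)
```

Because the offset is rigid, the camera inherits every problem of the character's pose: if a wall occupies the offset position, the view is occluded and the player has no way to correct it.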

Interactive

Instead of staying behind Mario, the camera intelligently rotates to show the path (Super Mario 64).

This type of camera system is an improvement over the tracking camera system. While the camera is still tracking the character, some of its parameters, such as its orientation or distance to the character, can be changed. On video game consoles, the camera is often controlled by an analog stick to provide good accuracy, whereas on PC games it is usually controlled by the mouse. This is the case in games such as Super Mario Sunshine or The Legend of Zelda: The Wind Waker. Fully interactive camera systems are often difficult to implement well: GameSpot argued that much of Super Mario Sunshine's difficulty comes from having to control the camera. [9] The Legend of Zelda: The Wind Waker was more successful at it; IGN called its camera system "so smart that it rarely needs manual correction". [10]

One of the first games to offer an interactive camera system was Super Mario 64. The game had two types of camera systems, between which the player could switch at any time. The first was a standard tracking camera system, except that it was partly driven by artificial intelligence: the system was "aware" of the structure of the level and could therefore anticipate certain shots. For example, in the first level, when the path to the hill is about to turn left, the camera automatically starts looking to the left too, anticipating the player's movements. The second type allowed the player to control the camera relative to Mario's position: pressing the left or right buttons rotates the camera around Mario, while pressing up or down moves the camera closer to or further from him. [11] [12]
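
The orbit behaviour described above can be sketched as follows. This is a generic illustration, not Nintendo's actual implementation; the turn and zoom rates, clamping range, and height offset are invented:

```python
import math

def apply_input(yaw_deg, distance, turn, zoom,
                turn_rate=120.0, zoom_rate=3.0, dt=1.0 / 60):
    """Map left/right input to rotation around the character and up/down
    input to camera distance, clamped to an assumed sensible range."""
    yaw_deg = (yaw_deg + turn * turn_rate * dt) % 360.0
    distance = max(1.0, min(10.0, distance + zoom * zoom_rate * dt))
    return yaw_deg, distance

def orbit_camera(target, yaw_deg, distance):
    """Position the camera on a circle around the target at the given
    yaw angle and distance, with a fixed (assumed) height offset."""
    rad = math.radians(yaw_deg)
    return (target[0] + math.sin(rad) * distance,
            target[1] + 1.5,
            target[2] + math.cos(rad) * distance)
```

Each frame, the game would feed the player's stick or button state into `apply_input` and then place the camera with `orbit_camera`, so the camera still tracks the character while remaining under the player's control.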

Implementation

There is a large body of research on how to implement camera systems. [13] The role of a constraint solver is to generate the best possible shot given a set of visual constraints. In other words, the solver is given a requested shot composition, such as "show this character and ensure that it covers at least 30 percent of the screen space", and uses various methods to try to create a shot satisfying that request. Once a suitable shot is found, the solver outputs the position and rotation of the camera, which the renderer of the graphics engine can then use to display the view. [14]
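
A toy version of such a solver, for the composition request "show the character and ensure it covers at least 30 percent of the screen", might search candidate camera distances and test each against the constraint. The pinhole approximation and the candidate values below are illustrative assumptions, not the cited systems' methods:

```python
def screen_coverage(char_height, distance, fov_tan=1.0):
    """Rough fraction of vertical screen space a character of the given
    world-space height occupies for a camera at the given distance
    (simple pinhole-camera approximation)."""
    visible_height = 2.0 * distance * fov_tan  # world height filling the screen
    return min(1.0, char_height / visible_height)

def solve_shot(char_height=2.0, min_coverage=0.30,
               candidates=(12.0, 8.0, 5.0, 3.0, 2.0)):
    """Search candidate distances (farthest first) and return the first shot
    that satisfies the coverage constraint, or None if none does."""
    for distance in candidates:
        coverage = screen_coverage(char_height, distance)
        if coverage >= min_coverage:
            return {"distance": distance, "coverage": coverage}
    return None
```

Real solvers search a much larger space (position, orientation, field of view) and score candidates against several constraints at once, but the structure is the same: generate candidates, evaluate them against the requested composition, and output a camera pose for the renderer.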

In some camera systems, constraints are relaxed if no solution can be found. For example, if the solver cannot generate a shot in which the character occupies 30 percent of the screen space, it might ignore the screen-space constraint and simply ensure that the character is visible at all, for instance by zooming out. [15]
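
Constraint relaxation can be sketched as repeatedly dropping the least important constraint until the solver succeeds. The `solver` callable and the constraint names here are hypothetical:

```python
# Sketch of constraint relaxation: when the solver fails, drop the least
# important constraint and retry, keeping "character visible" until last.

def solve_with_relaxation(solver, constraints):
    """Call `solver` on a shrinking constraint list, ordered from most to
    least important, until it returns a shot or only one constraint is left."""
    remaining = list(constraints)
    while remaining:
        shot = solver(remaining)
        if shot is not None:
            return shot, remaining  # report which constraints were satisfied
        if len(remaining) == 1:
            break  # even the bare visibility constraint failed
        remaining.pop()  # relax: drop the least important constraint
    return None, []
```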

Some camera systems use predefined scripts to decide how to select the current shot for commonly seen shot scenarios, called film idioms. Typically, a script is triggered as the result of an action. For instance, when the player's character initiates a conversation with another character, the "conversation" script is triggered. This script contains instructions on how to "shoot" a two-character conversation; the shots will thus be a combination of, for instance, over-the-shoulder shots and close-up shots. Such script-based approaches may switch the camera between a set of predefined cameras, or rely on a constraint solver to generate the camera coordinates so as to account for variability in scene layout. This scripted approach, together with the use of a constraint solver to compute virtual cameras, was first proposed by Drucker. [16] Subsequent research demonstrated how a script-based system could automatically switch cameras to view conversations between avatars in a real-time chat application. [17]
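
A film-idiom dispatcher might look like the following sketch; the idiom names and shot lists are invented for illustration, not taken from the cited systems:

```python
# Hypothetical film idioms: each maps a game event to a predefined shot list.
IDIOMS = {
    "conversation": ["over_shoulder_a", "close_up_b", "over_shoulder_b", "close_up_a"],
    "combat": ["wide_shot", "tracking_shot"],
}

class IdiomCamera:
    def __init__(self):
        self.shots = ["default_shot"]
        self.index = 0

    def trigger(self, event):
        """An in-game action (e.g. starting a conversation) selects the
        matching idiom script."""
        self.shots = IDIOMS.get(event, ["default_shot"])
        self.index = 0

    def next_shot(self):
        """Cycle through the idiom's predefined shot list."""
        shot = self.shots[self.index % len(self.shots)]
        self.index += 1
        return shot
```

In a real system, each shot name would resolve either to a predefined camera or to a composition request handed to a constraint solver.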

Bill Tomlinson took a more original approach to the problem. He devised a system in which the camera is an autonomous agent with its own personality: the style of the shots and their rhythm are affected by its mood. A happy camera, for example, will "cut more frequently, spend more time in close-up shots, move with a bouncy, swooping motion, and brightly illuminate the scene". [18]
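
Such mood-driven behaviour could be sketched as a mapping from an emotional state to camera parameters. The numeric ranges below are assumptions for illustration, not values from Tomlinson's system:

```python
def mood_parameters(happiness):
    """Map a 0..1 'happiness' value to camera behaviour, loosely following
    the description above: a happier camera cuts more often, favors
    close-ups, and lights the scene more brightly."""
    return {
        "seconds_between_cuts": 8.0 - 6.0 * happiness,  # 8 s sombre .. 2 s happy
        "close_up_probability": 0.2 + 0.6 * happiness,
        "scene_brightness": 0.4 + 0.6 * happiness,
    }
```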

While much prior work on automated virtual camera control has been directed towards reducing the need for a human to manually control the camera, the Director's Lens solution computes and proposes a palette of suggested virtual camera shots, leaving the creative shot selection to the human operator. In computing subsequent suggestions, the system analyzes the visual compositions and editing patterns of previously recorded shots to propose camera shots that conform to continuity conventions, such as not crossing the line of action and matching the placement of virtual characters so that they appear to look at one another across cuts, and it favors shots that the human operator had previously used in sequence. [19]

In mixed-reality applications

In 2010, Microsoft released the Kinect, a 3D scanner/webcam hybrid peripheral device that provides full-body detection of Xbox 360 players and hands-free control of the user interfaces of video games and other software on the console. The Kinect was later modified by Oliver Kreylos [20] of the University of California, Davis, who in a series of YouTube videos showed it combined with a PC-based virtual camera. [21] Because the Kinect can detect a full range of depth within a captured scene (through computer stereo vision and structured light), Kreylos demonstrated that the Kinect and the virtual camera allow free-viewpoint navigation of that range of depth, although the Kinect could only capture video of the scene as seen from its front, resulting in fields of black, empty space where it was unable to capture video within the field of depth. Later, Kreylos elaborated on the modification by combining the video streams of two Kinects to further enhance the video capture within the view of the virtual camera. [22] Kreylos' developments using the Kinect were covered, among the works of others in the Kinect hacking and homebrew community, in a New York Times article. [23]

Real-time recording and motion tracking

Virtual cameras have been developed which allow a director to film motion capture and view the digital character's movements in real time [24] in a pre-constructed digital environment, such as a house or spaceship. [25] Resident Evil 5 was the first video game to use the technology, [26] which was developed for the 2009 film Avatar. [25] [27] Using motion capture to control the position and orientation of the virtual camera lets the operator intuitively move and aim it by simply walking about and turning the camera rig. A virtual camera rig consists of a portable monitor or tablet device, motion sensors, an optional support framework, and optional joystick or button controls, commonly used to start or stop recording and to adjust lens properties. [28] In 1992, Michael McKenna of MIT's Media Lab demonstrated the earliest documented virtual camera rig when he fixed a Polhemus magnetic motion sensor and a 3.2-inch portable LCD TV to a wooden ruler. [29] The Walkthrough Project at the University of North Carolina at Chapel Hill produced a number of physical input devices for virtual camera view control, including dual three-axis joysticks and a billiard-ball-shaped prop known as the UNC Eyeball, which featured an embedded six-degree-of-freedom motion tracker and a digital button. [30]
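
The core of such a rig is a per-frame mapping from the tracked physical pose to the virtual camera pose. The sketch below is an illustration of that idea, not any cited rig's software; the world-scale factor (small physical steps becoming large virtual moves) is a common virtual-production convention assumed here:

```python
def rig_to_camera(tracker_pos, tracker_yaw_deg,
                  origin=(0.0, 0.0, 0.0), world_scale=10.0):
    """Map the physical rig's tracked position and heading to a virtual
    camera pose. Scaling the position lets an operator cover a large
    virtual set while walking around a small capture stage."""
    position = tuple(o + p * world_scale
                     for o, p in zip(origin, tracker_pos))
    return {
        "position": position,
        "yaw": tracker_yaw_deg % 360.0,  # orientation is passed through directly
    }
```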

References

  1. Rollings, Andrew; Ernest Adams (2006). Fundamentals of Game Design. Prentice Hall. ISBN 9780131687479.
  2. Casamassina, Matt. "fixed-camera". giantbomb.
  3. Casamassina, Matt. "Resident Evil Review". IGN. Archived from the original on 25 March 2009. Retrieved 22 March 2009.
  4. "A eulogy for tank controls". PC Gamer . 20 February 2015. Retrieved 5 March 2018.
  5. Matulef, Jeffrey (26 January 2015). "Bringing out the Dead: Tim Schafer reflects back on Grim Fandango". Eurogamer. Retrieved 5 March 2018.
  6. "Sonic Adventure Review". IGN. Archived from the original on 11 February 2008. Retrieved 22 March 2009.
  7. "Tomb Raider: The Last Revelation Review". IGN. 11 December 1999. Retrieved 22 March 2009.
  8. Carle, Chris. "Enter the Matrix Review". IGN. Archived from the original on 25 March 2009. Retrieved 22 March 2009.
  9. Gerstmann, Jeff (4 October 2002). "Super Mario Sunshine Review for GameCube". GameSpot. Archived from the original on 26 March 2009. Retrieved 22 March 2009.
  10. Casamassina, Matt (25 March 2003). "The Legend of Zelda: The Wind Waker Review". IGN. Archived from the original on 26 March 2009. Retrieved 22 March 2009.
  11. "15 Most Influential Video Games of All Time: Super Mario 64". GameSpot. Archived from the original on 26 March 2009. Retrieved 22 March 2009.
  12. "The Essential 50 Part 36: Super Mario 64 from". 1UP.com. Retrieved 22 March 2009.
  13. "Cameracontrol.org: The virtual camera control bibliography" . Retrieved 6 May 2011.
  14. Bares, William; Scott McDermott; Christina Boudreaux; Somying Thainimit (2000). "Virtual 3D camera composition from frame constraints" (PDF). International Multimedia Conference. California, United States: Marina del Rey: 177–186. Archived from the original (PDF) on 10 July 2010. Retrieved 22 March 2009.
  15. Drucker, Steven M.; David Zeltzer (1995). CamDroid: A System for Implementing Intelligent Camera Control (PDF). ISBN 978-0-89791-736-0. Archived from the original (PDF) on 5 June 2011. Retrieved 22 March 2009.
  16. Drucker, Steven M.; David Zeltzer (1995). CamDroid: A System for Implementing Intelligent Camera Control (PDF). ISBN 978-0-89791-736-0. Archived from the original (PDF) on 5 June 2011. Retrieved 15 March 2015.
  17. He, Li-wei; Michael F. Cohen; David H. Salesin (1996). "The Virtual Cinematographer: A Paradigm for Automatic Real-Time Camera Control and Directing" (PDF). International Conference on Computer Graphics and Interactive Techniques. New York. 23rd: 217–224. Archived from the original (PDF) on 28 August 2008. Retrieved 22 March 2009.
  18. Tomlinson, Bill; Bruce Blumberg; Delphine Nain (2000). "Expressive autonomous cinematography for interactive virtual environments". Proceedings of the fourth international conference on Autonomous agents (PDF). Barcelona, Spain. pp. 317–324. CiteSeerX 10.1.1.19.7502. doi:10.1145/336595.337513. ISBN 978-1-58113-230-4. S2CID 5532829. Archived (PDF) from the original on 29 March 2005. Retrieved 22 March 2009.
  19. Lino, Christophe; Marc Christie; Roberto Ranon; William Bares (1 December 2011). "The director's lens". Proceedings of the 19th ACM international conference on Multimedia. ACM. pp. 323–332. doi:10.1145/2072298.2072341. ISBN   9781450306164. S2CID   14079689.
  20. "Oliver Kreylos' Homepage".
  21. Kevin Parrish (17 November 2010). "Kinect Used As 3D Video Capture Tool". Tom's Hardware.
  22. Tim Stevens (29 November 2010). "Two Kinects join forces to create better 3D video, blow our minds (video)". Engadget.
  23. Jenna Wortham (21 November 2010). "With Kinect Controller, Hackers Take Liberties". The New York Times .
  24. Hsu, Jeremy (27 February 2009). ""Virtual Camera" Captures Actors' Movements for Resident Evil 5". Popular Science . Archived from the original on 2 March 2009.
  25. Lewinski, John Scott (27 February 2009). "Resident Evil 5 Offers Sneak Peek at Avatar's 'Virtual Camera'". Wired. Retrieved 25 February 2015.
  26. Lowe, Scott (27 February 2009). "The Tech Behind RE5". IGN . Retrieved 24 February 2015.
  27. Thompson, Anne (1 January 2010). "How James Cameron's Innovative New 3D Tech Created Avatar". Popular Mechanics . Retrieved 25 February 2015.
  28. "Optitrack InsightVCS" . Retrieved 15 March 2015.
  29. Michael McKenna (March 1992). "Interactive viewpoint control and three-dimensional operations". Proceedings of the 1992 symposium on Interactive 3D graphics - SI3D '92. ACM. pp. 53–56. CiteSeerX   10.1.1.132.8599 . doi:10.1145/147156.147163. ISBN   978-0897914673. S2CID   17308648.
  30. Frederick Brooks Jr. (June 1992). "Final Technical Report – Walkthrough Project" (PDF). Tr92-026. University of North Carolina at Chapel Hill. Archived (PDF) from the original on 23 September 2015. Retrieved 23 March 2015.