Skybox (video games)

Example of a texture that can be mapped to the faces of a cubic skybox, with faces labelled

Example of a texture for a hemispherical skydome

A skybox is a method of creating backgrounds to make a video game level appear larger than it really is.[1] When a skybox is used, the level is enclosed in a cuboid. The sky, distant mountains, distant buildings, and other unreachable objects are projected onto the cube's faces (using a technique called cube mapping), thus creating the illusion of distant three-dimensional surroundings. A skydome employs the same concept but uses either a sphere or a hemisphere instead of a cube.

Processing of 3D graphics is computationally expensive, especially in real-time games, and imposes several constraints. Levels have to be processed at tremendous speed, making it difficult to render vast skyscapes in real time. Additionally, real-time graphics generally use depth buffers with limited bit depth, which limits the amount of detail that can be rendered at a distance.
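
As a rough illustration of the depth-buffer constraint, the sketch below estimates how much world-space distance a single depth step covers at a given distance from the camera. It assumes a fixed-point depth buffer and the common hyperbolic mapping of the near plane to 0 and the far plane to 1; the function name and the chosen near/far values are illustrative only.

```python
def depth_step(z, near, far, bits):
    """Approximate world-space distance covered by one depth-buffer step at distance z."""
    d_dz = (far / (far - near)) * near / (z * z)   # slope of the hyperbolic depth curve at z
    return (1.0 / 2 ** bits) / d_dz

# 16-bit depth buffer, 1-unit near plane, 10,000-unit far plane:
print(round(depth_step(10.0, 1.0, 10_000.0, 16), 4))     # about 0.0015 units of precision at distance 10
print(round(depth_step(5_000.0, 1.0, 10_000.0, 16), 1))  # roughly 380 units of precision at distance 5,000
```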

To avoid these problems, games often employ skyboxes. Traditionally, these are simple cubes with up to six different textures placed on their faces. With careful alignment, a viewer at the exact center of the skybox will perceive the illusion of a real 3D world around them, made up of those six faces.
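
The illusion relies on mapping each view direction to a point on one of the six faces. The sketch below shows one common way of doing that lookup; the face names and texture-coordinate orientation are conventions that vary between engines, so treat it as an illustration rather than any particular engine's implementation.

```python
def cubemap_face_and_uv(x, y, z):
    """Map a view direction (x, y, z) to a cube face name and (u, v) in [0, 1]."""
    ax, ay, az = abs(x), abs(y), abs(z)
    if ax >= ay and ax >= az:                    # the X axis dominates
        face = "right" if x > 0 else "left"
        u, v = (-z / ax if x > 0 else z / ax), -y / ax
    elif ay >= az:                               # the Y axis dominates
        face = "up" if y > 0 else "down"
        u, v = x / ay, (z / ay if y > 0 else -z / ay)
    else:                                        # the Z axis dominates
        face = "front" if z > 0 else "back"
        u, v = (x / az if z > 0 else -x / az), -y / az
    # Remap from [-1, 1] to [0, 1] texture coordinates on the chosen face.
    return face, (0.5 * (u + 1.0), 0.5 * (v + 1.0))

print(cubemap_face_and_uv(0.2, 0.9, -0.1))       # a mostly upward ray lands on the "up" face
```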

As a viewer moves through a 3D scene, it is common for the skybox to remain stationary with respect to the viewer. This technique creates the illusion that objects in the skybox are infinitely far away, since they do not exhibit any parallax motion, whereas 3D objects closer to the viewer do appear to move. This is often a good approximation of reality, where distant objects such as clouds, stars and even mountains appear to be stationary when the viewpoint is displaced by relatively small distances. However, designers must be careful about which objects they include in a fixed skybox. If an object of known size (e.g. a car) is included in the texture, and is large enough for the viewer to perceive it as close by, the lack of parallax motion may be perceived as unrealistic or confusing.
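
In practice, the lack of parallax usually comes from how the skybox is drawn rather than from physically moving it: the camera's translation is ignored and only its rotation is applied when rendering the sky. The sketch below illustrates that idea with a row-major 4×4 view matrix whose translation sits in the last column; actual matrix conventions differ between engines.

```python
def strip_translation(view):
    """Return a copy of a 4x4 view matrix with its translation zeroed out."""
    sky_view = [row[:] for row in view]
    for r in range(3):
        sky_view[r][3] = 0.0    # drop the translation component
    return sky_view

# A view matrix with an identity rotation and some translation:
view = [[1.0, 0.0, 0.0, -10.0],
        [0.0, 1.0, 0.0,  -2.0],
        [0.0, 0.0, 1.0,   5.0],
        [0.0, 0.0, 0.0,   1.0]]
print(strip_translation(view))   # same orientation, but the sky no longer shifts with position
```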

The source of a skybox can be any form of texture, including photographs, hand-drawn images, or pre-rendered 3D geometry. Usually, these textures are created and aligned in six directions, each with a 90-degree field of view, so that together they cover the six faces of the cube.
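
A minimal sketch of that capture setup is given below: six axis-aligned view directions, each rendered with a square 90-degree projection so that the views tile the cube exactly. The up-vectors follow one common cube-map convention and the matrix is an OpenGL-style perspective projection; both are assumptions rather than a fixed standard.

```python
import math

# Forward and up vectors for the six capture views (one common convention).
CAPTURE_VIEWS = {
    "right": ((+1, 0, 0), (0, -1, 0)),
    "left":  ((-1, 0, 0), (0, -1, 0)),
    "up":    ((0, +1, 0), (0, 0, +1)),
    "down":  ((0, -1, 0), (0, 0, -1)),
    "front": ((0, 0, +1), (0, -1, 0)),
    "back":  ((0, 0, -1), (0, -1, 0)),
}

def perspective_90(near, far):
    """Square 90-degree projection matrix: 1:1 aspect, tan(45 degrees) = 1."""
    f = 1.0 / math.tan(math.radians(90.0) / 2.0)   # equals 1.0
    return [[f, 0.0, 0.0, 0.0],
            [0.0, f, 0.0, 0.0],
            [0.0, 0.0, (far + near) / (near - far), 2.0 * far * near / (near - far)],
            [0.0, 0.0, -1.0, 0.0]]

for face, (forward, up) in CAPTURE_VIEWS.items():
    print(face, "looks along", forward, "with up", up)
```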

Advanced skyboxes

Simple texture-based skyboxes had severe disadvantages. They could not be animated, and everything in them appeared equally distant, at infinity. The results looked plain, and these limits left designers little room for creativity. Starting in the late 1990s, however, some game designers built small amounts of 3D geometry to appear in the skybox, in addition to a traditional skybox for objects very far away, creating a better illusion of depth. This constructed scenery was placed in an unreachable location, typically outside the bounds of the playable portion of the level, so that players could not touch the skybox.

In older versions of this technology, such as that used in the game Unreal, this was limited to movement in the sky, such as drifting clouds. Elements could be changed from level to level, such as the positions of stellar objects or the color of the sky, giving the illusion of a gradual change from day to night. The skybox in this game still appeared infinitely far away because, although it contained 3D geometry, its viewpoint did not move along with the player's movement through the level.

Newer engines, such as the Source engine, build on this idea by allowing the skybox to move along with the player, although at a different speed. Because depth is perceived from the relative movement of objects, making the skybox move more slowly than the level causes the skybox to appear far away, but not infinitely so. It is also possible, though not required, to include 3D geometry that surrounds the accessible playing environment, such as unreachable buildings or mountains. These are designed and modeled at a smaller scale, typically 1/16th, and then rendered by the engine so that they appear much larger. This requires far less processing than rendering them at full size. The effect is referred to as a "3D skybox".
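
The parallax trick can be pictured as placing a second camera inside the miniature scenery and moving it by only a fraction of the player's movement. The sketch below uses the 1/16th figure mentioned above; the function name, the sky-origin parameter, and the exact transform are illustrative assumptions rather than the Source engine's actual code.

```python
SKY_SCALE = 1.0 / 16.0    # miniature scenery built at 1/16th scale

def sky_camera_position(player_pos, sky_origin):
    """Place the skybox camera inside the miniature, offset by a scaled-down player position."""
    return tuple(o + p * SKY_SCALE for o, p in zip(sky_origin, player_pos))

# Walking 160 units through the level moves the sky camera only 10 units,
# so the small-scale scenery shows only slight parallax and reads as huge and distant.
print(sky_camera_position((160.0, 0.0, 0.0), (4096.0, 4096.0, 1024.0)))
```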

In the game Half-Life 2, this effect was used extensively to show the Citadel, a huge structure at the center of City 17. In the closing chapters of the game, the player travels through the city towards the Citadel, and the skybox effect makes it grow progressively larger with the player's movement, so that it appears to be fully part of the level. When the player reaches the base of the Citadel, the structure is actually split into two pieces: a small lower section that is part of the main map, and an upper section that sits in the 3D skybox. The two sections are seamlessly blended together to appear as a single structure.

See also

References

  1. "Skybox Basics". Valve Developer Community. Valve. 2015-08-22. Retrieved 2016-10-28.