Draw distance

The influence of different draw distances (Higher distances show more area.)

In computer graphics, draw distance (render distance or view distance) is the maximum distance of objects in a three-dimensional scene that are drawn by the rendering engine. Polygons that lie beyond the draw distance will not be drawn to the screen.


A finite draw distance is required because rendering objects out to an infinite distance would slow the application to an unacceptable speed.[1] As the draw distance increases, more distant polygons that would otherwise be clipped must be drawn to the screen. This requires more computing power: the graphical quality and realism of the scene increase with draw distance, but overall performance (frames per second) decreases. Many games and applications allow users to set the draw distance manually to balance performance and visuals.
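
For illustration, a renderer can compare each object's distance from the camera against the draw distance and skip anything beyond it before issuing any draw calls. The following sketch is not taken from any particular engine; the object list, camera position, and draw distance value are hypothetical.

```python
import math

# Hypothetical scene objects for illustration: each has a world-space position.
scene_objects = [
    {"name": "tree", "position": (10.0, 0.0, 250.0)},
    {"name": "house", "position": (-40.0, 0.0, 900.0)},
    {"name": "mountain", "position": (0.0, 0.0, 5000.0)},
]

camera_position = (0.0, 0.0, 0.0)
draw_distance = 1000.0  # user-configurable setting


def distance(a, b):
    """Euclidean distance between two 3D points."""
    return math.sqrt(sum((ax - bx) ** 2 for ax, bx in zip(a, b)))


def visible_objects(objects, camera, max_distance):
    """Keep only the objects within the draw distance of the camera."""
    return [o for o in objects if distance(o["position"], camera) <= max_distance]


for obj in visible_objects(scene_objects, camera_position, draw_distance):
    print(f"draw {obj['name']}")  # stand-in for the real draw call
```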

Problems in older games

Older games had far shorter draw distances, most noticeable in vast, open scenes. In many cases, once-distant objects or terrain would suddenly appear without warning as the camera moved closer to them, an effect known as "pop-up graphics", "pop-in", or "draw in".[1] Pop-in is a hallmark of short draw distance and still affects large, open-ended games such as the Grand Theft Auto series and Second Life.[citation needed] In newer games, this effect is usually limited to smaller objects such as people or trees, in contrast to older games in which huge chunks of terrain could suddenly appear or fade in along with smaller objects. The Sony PlayStation game Formula 1 97 offered a setting that let the player choose between a fixed draw distance (with a variable frame rate) and a fixed frame rate (with a variable draw distance).

Alternatives

A common trick used in games to disguise a short draw distance is to obscure distant areas with distance fog. Alternative methods have been developed to sidestep the problem altogether using level of detail manipulation. Black & White was one of the earlier games to use adaptive level of detail to decrease the number of polygons in objects as they moved away from the camera, allowing it to have a massive draw distance while maintaining detail in close-up views.
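
A common linear formulation of distance fog blends an object's colour toward the fog colour as its distance approaches the far fog plane. The sketch below illustrates only that formulation; the colours and distances used are invented for the example.

```python
# Minimal sketch of linear distance fog, assuming the common
# fog_factor = (fog_end - d) / (fog_end - fog_start) formulation.

def fog_factor(distance, fog_start, fog_end):
    """1.0 = no fog (near), 0.0 = fully fogged (at or beyond fog_end)."""
    if fog_end <= fog_start:
        raise ValueError("fog_end must be greater than fog_start")
    factor = (fog_end - distance) / (fog_end - fog_start)
    return max(0.0, min(1.0, factor))


def apply_fog(object_color, fog_color, distance, fog_start, fog_end):
    """Blend the object's colour toward the fog colour with distance."""
    f = fog_factor(distance, fog_start, fog_end)
    return tuple(f * oc + (1.0 - f) * fc for oc, fc in zip(object_color, fog_color))


grey_fog = (0.5, 0.5, 0.6)   # illustrative fog colour
red_object = (1.0, 0.0, 0.0)
for d in (50.0, 400.0, 800.0):
    print(d, apply_fog(red_object, grey_fog, d, fog_start=100.0, fog_end=700.0))
```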

The Legend of Zelda: The Wind Waker uses a variant of level of detail programming. The game's overworld is divided into 49 squares, each containing an island, with considerable distance between the island and the borders of its square. Everything within a square is loaded when it is entered, including all models used in close-up views and animations; using the telescope item, one can see just how detailed even far-away areas are. However, textures are not displayed at that range; they fade in as one gets closer to the square's island. Islands outside the current square are rendered in less detail, but these far-away island models do not degrade any further than that, even though some of them can be seen from everywhere else in the overworld. There is no distance fog in either indoor or outdoor areas, although some areas use fog as an atmospheric effect. As a consequence of the developers' attention to detail, however, some areas of the game have lower frame rates due to the large number of enemies on screen.
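
A grid-based streaming scheme of this general kind can be sketched as follows. This is purely illustrative and not Nintendo's implementation; the grid dimensions, cell size, and detail levels are invented.

```python
# Illustrative sketch of grid-based streaming with per-cell detail.

GRID_SIZE = 7          # a 7 x 7 grid gives 49 squares
CELL_LENGTH = 10000.0  # hypothetical side length of one square, in world units


def cell_of(position):
    """Map a world-space (x, z) position to its (column, row) grid cell."""
    x, z = position
    return int(x // CELL_LENGTH), int(z // CELL_LENGTH)


def detail_for_cell(cell, player_cell):
    """Full detail for the player's cell, low detail everywhere else."""
    return "full" if cell == player_cell else "low"


player_cell = cell_of((25000.0, 43000.0))
for row in range(GRID_SIZE):
    for col in range(GRID_SIZE):
        if detail_for_cell((col, row), player_cell) == "full":
            print(f"cell {(col, row)}: load full models, animations and textures")
        # other cells keep only their low-detail island model loaded
```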

Halo 3 is claimed by its creators at Bungie to have a draw distance upwards of 14 miles, which is an example of the vastly improved draw distances made possible by more recent game consoles. In addition, Crysis is said to have a draw distance up to 16 kilometers (9.9 mi), while Cube 2: Sauerbraten has a potentially unlimited draw distance, possibly due to the larger map size. Grand Theft Auto V was praised for its seemingly infinite draw distance despite having a large, detailed map. [2]

See also

Related Research Articles

<span class="mw-page-title-main">Wire-frame model</span> Representation of a 3D object with only its edges rendered

A wire-frame model, also wireframe model, is a visual representation of a three-dimensional (3D) physical object used in 3D computer graphics. It is created by specifying each edge of the physical object where two mathematically continuous smooth surfaces meet, or by connecting an object's constituent vertices using (straight) lines or curves. The object is projected into screen space and rendered by drawing lines at the location of each edge. The term "wire frame" comes from designers using metal wire to represent the three-dimensional shape of solid objects. 3D wire frame computer models allow for the construction and manipulation of solids and solid surfaces. 3D solid modeling efficiently draws higher quality representations of solids than conventional line drawing.
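
As a rough, self-contained sketch of edge rendering (not a production renderer), each vertex of a cube can be perspective-projected into screen space and each edge emitted as a 2D line between its projected endpoints; the focal length and geometry below are arbitrary.

```python
# Minimal sketch of wire-frame rendering: project each vertex of a cube
# into screen space and emit one 2D line per edge.

FOCAL_LENGTH = 2.0

# Unit cube pushed 5 units away from the camera along +z.
vertices = [(x, y, z + 5.0) for x in (-1, 1) for y in (-1, 1) for z in (-1, 1)]

# Each cube edge connects two vertices that differ in exactly one coordinate.
edges = [
    (i, j)
    for i in range(len(vertices))
    for j in range(i + 1, len(vertices))
    if sum(a != b for a, b in zip(vertices[i], vertices[j])) == 1
]


def project(vertex):
    """Simple pinhole projection of a 3D point onto the image plane."""
    x, y, z = vertex
    return (FOCAL_LENGTH * x / z, FOCAL_LENGTH * y / z)


for i, j in edges:
    print(f"line from {project(vertices[i])} to {project(vertices[j])}")
```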

In digital signal processing, spatial anti-aliasing is a technique for minimizing the distortion artifacts (aliasing) when representing a high-resolution image at a lower resolution. Anti-aliasing is used in digital photography, computer graphics, digital audio, and many other applications.

<span class="mw-page-title-main">Texture mapping</span> Method of defining surface detail on a computer-generated graphic or 3D model

Texture mapping is a method for mapping a texture on a computer-generated graphic. Texture here can be high frequency detail, surface texture, or color.

<span class="mw-page-title-main">Shading</span> Depicting depth through varying levels of darkness

Shading refers to the depiction of depth perception in 3D models or illustrations by varying the level of darkness. Shading tries to approximate local behavior of light on the object's surface and is not to be confused with techniques of adding shadows, such as shadow mapping or shadow volumes, which fall under global behavior of light.

<span class="mw-page-title-main">Distance fog</span> In 3D graphics, obscuring distant objects with fog

Distance fog is a technique used in 3D computer graphics to enhance the perception of distance by shading distant objects differently.

id Tech 1, also known as the Doom engine, is the game engine used in the id Software video games Doom and Doom II: Hell on Earth. It is also used in Heretic, Hexen: Beyond Heretic, Strife: Quest for the Sigil, Hacx: Twitch 'n Kill, Freedoom, and other games produced by licensees. It was created by John Carmack, with auxiliary functions written by Mike Abrash, John Romero, Dave Taylor, and Paul Radek. Originally developed on NeXT computers, it was ported to MS-DOS and compatible operating systems for Doom's initial release and was later ported to several game consoles and operating systems.

<span class="mw-page-title-main">Ray casting</span> Methodological basis for 3D CAD/CAM solid modeling and image rendering

Ray casting is the methodological basis for 3D CAD/CAM solid modeling and image rendering. It is essentially the same as ray tracing for computer graphics, where virtual light rays are "cast" or "traced" on their path from the focal point of a camera through each pixel in the camera sensor to determine what is visible along the ray in the 3D scene. The term "ray casting" was introduced by Scott Roth while at the General Motors Research Labs from 1978 to 1980. His paper "Ray Casting for Modeling Solids" describes modeling solid objects by combining primitive solids, such as blocks and cylinders, using the set operators union (+), intersection (&), and difference (-). The general idea of using these binary operators for solid modeling is largely due to Voelcker and Requicha's geometric modelling group at the University of Rochester. See solid modeling for a broad overview of solid modeling methods. Roth's 1979 ray casting system was used, for example, to model a U-joint from cylinders and blocks arranged in a binary tree.
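
The core loop, casting one ray per pixel and testing it against the scene geometry, can be sketched as follows. This minimal example tests only a single sphere, does not implement Roth's set operators, and uses arbitrary image and scene dimensions.

```python
# Minimal sketch of ray casting: shoot one ray per pixel from the camera
# and test it against a single sphere, printing a hit/miss character map.
import math

WIDTH, HEIGHT = 16, 8
SPHERE_CENTER, SPHERE_RADIUS = (0.0, 0.0, 3.0), 1.0


def hits_sphere(origin, direction):
    """True if the ray origin + t*direction intersects the sphere for some t > 0."""
    ox, oy, oz = (origin[i] - SPHERE_CENTER[i] for i in range(3))
    dx, dy, dz = direction
    a = dx * dx + dy * dy + dz * dz
    b = 2.0 * (ox * dx + oy * dy + oz * dz)
    c = ox * ox + oy * oy + oz * oz - SPHERE_RADIUS ** 2
    discriminant = b * b - 4.0 * a * c
    return discriminant >= 0.0 and (-b + math.sqrt(discriminant)) > 0.0


for row in range(HEIGHT):
    line = ""
    for col in range(WIDTH):
        # Map the pixel to a direction through a simple pinhole camera.
        x = (col + 0.5) / WIDTH * 2.0 - 1.0
        y = 1.0 - (row + 0.5) / HEIGHT * 2.0
        line += "#" if hits_sphere((0.0, 0.0, 0.0), (x, y, 1.0)) else "."
    print(line)
```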

2.5D perspective refers to gameplay or movement in a video game or virtual reality environment that is restricted to a two-dimensional (2D) plane with little or no access to a third dimension in a space that otherwise appears to be three-dimensional and is often simulated and rendered in a 3D digital environment.

<span class="mw-page-title-main">Skybox (video games)</span> Technique in video game aesthetic design

A skybox is a method of creating backgrounds to make a video game level appear larger than it really is. When a skybox is used, the level is enclosed in a cuboid. The sky, distant mountains, distant buildings, and other unreachable objects are projected onto the cube's faces, thus creating the illusion of distant three-dimensional surroundings. A skydome employs the same concept but uses either a sphere or a hemisphere instead of a cube.
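
One common way to implement the lookup is to treat the six faces as a cube map: for each view direction, the face whose axis has the largest absolute component is selected, along with a position on that face. The sketch below shows only that lookup, with arbitrary face names, and is not tied to any particular engine or texture convention.

```python
# Minimal sketch of a skybox (cube map) lookup for a view direction.

def skybox_face(direction):
    """Return (face, u, v) for a non-zero view direction; u and v lie in [0, 1]."""
    x, y, z = direction
    ax, ay, az = abs(x), abs(y), abs(z)
    if ax >= ay and ax >= az:          # the ray leaves through the +x or -x face
        face = "right" if x > 0 else "left"
        major, u, v = ax, (-z if x > 0 else z), y
    elif ay >= az:                     # through the +y or -y face
        face = "top" if y > 0 else "bottom"
        major, u, v = ay, x, (-z if y > 0 else z)
    else:                              # through the +z or -z face
        face = "front" if z > 0 else "back"
        major, u, v = az, (x if z > 0 else -x), y
    return face, 0.5 * (u / major + 1.0), 0.5 * (v / major + 1.0)


# Looking straight ahead, nearly straight up, and over the shoulder.
for d in [(0.0, 0.0, 1.0), (0.0, 1.0, 0.1), (-0.5, 0.0, -1.0)]:
    print(d, skybox_face(d))
```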

<span class="mw-page-title-main">Attribute clash</span>

Attribute clash is a display artifact caused by limits in the graphics circuitry of some colour 8-bit home computers, most notably the ZX Spectrum, where it meant that only two colours could be used in any 8×8 tile of pixels. The effect was also noticeable on MSX software and in some Commodore 64 titles. Workarounds to prevent this limit from becoming apparent have since been considered an element of Spectrum programmer culture.

A first-person shooter engine is a video game engine specialized for simulating 3D environments for use in a first-person shooter video game. First-person refers to the view where the players see the world from the eyes of their characters. Shooter refers to games which revolve primarily around wielding firearms and killing other entities in the game world, either non-player characters or other players.

In computer graphics, level of detail (LOD) refers to the complexity of a 3D model representation. LOD can be decreased as the model moves away from the viewer or according to other metrics such as object importance, viewpoint-relative speed or position. LOD techniques increase the efficiency of rendering by decreasing the workload on graphics pipeline stages, usually vertex transformations. The reduced visual quality of the model is often unnoticed because of the small effect on object appearance when distant or moving fast.
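
A simple form of distance-based LOD selection can be sketched as follows; the distance thresholds and mesh names are invented for the example.

```python
# Minimal sketch of distance-based level-of-detail selection.

LOD_LEVELS = [
    (100.0, "tree_high.mesh"),    # up to 100 units away: full-detail mesh
    (400.0, "tree_medium.mesh"),  # up to 400 units: reduced polygon count
    (1000.0, "tree_low.mesh"),    # up to the draw distance: very coarse mesh
]


def select_lod(distance_to_camera):
    """Pick the appropriate mesh for an object at the given distance."""
    for max_distance, mesh in LOD_LEVELS:
        if distance_to_camera <= max_distance:
            return mesh
    return None  # beyond the draw distance: do not draw at all


for d in (30.0, 250.0, 900.0, 5000.0):
    print(d, select_lod(d))
```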

<span class="mw-page-title-main">Real-time computer graphics</span> Sub-field of computer graphics

Real-time computer graphics or real-time rendering is the sub-field of computer graphics focused on producing and analyzing images in real time. The term can refer to anything from rendering an application's graphical user interface (GUI) to real-time image analysis, but is most often used in reference to interactive 3D computer graphics, typically using a graphics processing unit (GPU). One example of this concept is a video game that rapidly renders changing 3D environments to produce an illusion of motion.

<span class="mw-page-title-main">Pixelation</span> Computer graphics artifact

In computer graphics, pixelation is caused by displaying a bitmap or a section of a bitmap at such a large size that individual pixels, small single-colored square display elements that comprise the bitmap, are visible. Such an image is said to be pixelated.

<span class="mw-page-title-main">Low poly</span> 3D computer graphics mesh with low number of polygons

Low poly is a polygon mesh in 3D computer graphics that has a relatively small number of polygons. Low poly meshes occur in real-time applications, in contrast with the high-poly meshes used in animated movies and special effects of the same era. The term low poly is used in both a technical and a descriptive sense; the number of polygons in a mesh is an important factor to optimize for performance, but a low count can give an undesirable appearance to the resulting graphics.

In 3D computer graphics, polygonal modeling is an approach for modeling objects by representing or approximating their surfaces using polygon meshes. Polygonal modeling is well suited to scanline rendering and is therefore the method of choice for real-time computer graphics. Alternate methods of representing 3D objects include NURBS surfaces, subdivision surfaces, and equation-based representations used in ray tracers.

Clipping, in the context of computer graphics, is a method to selectively enable or disable rendering operations within a defined region of interest. Mathematically, clipping can be described using the terminology of constructive geometry. A rendering algorithm only draws pixels in the intersection between the clip region and the scene model. Lines and surfaces outside the view volume are removed.
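
A minimal sketch of rectangular clipping, assuming an axis-aligned clip region and viewport with invented coordinates, is shown below: a pixel reaches the framebuffer only if it lies in their intersection.

```python
# Minimal sketch of rectangular clipping: a pixel is drawn only if it
# falls inside the intersection of the clip region and the viewport.

viewport = (0, 0, 640, 480)        # (min_x, min_y, max_x, max_y)
clip_region = (100, 100, 300, 200)


def inside(rect, x, y):
    """True if (x, y) lies within the half-open rectangle."""
    min_x, min_y, max_x, max_y = rect
    return min_x <= x < max_x and min_y <= y < max_y


def draw_pixel(x, y):
    if inside(viewport, x, y) and inside(clip_region, x, y):
        print(f"plot ({x}, {y})")
    # otherwise the pixel is clipped and never reaches the framebuffer


for point in [(150, 150), (50, 50), (310, 150), (299, 199)]:
    draw_pixel(*point)
```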

<span class="mw-page-title-main">3D rendering</span> Process of converting 3D scenes into 2D images

3D rendering is the 3D computer graphics process of converting 3D models into 2D images on a computer. 3D renders may include photorealistic effects or non-photorealistic styles.

Hellcats over the Pacific: 1991 video game

Hellcats over the Pacific is a combat flight simulation game for the Macintosh computer. It was written by Parsoft Interactive and released by Graphic Simulations in 1991. Hellcats was a major release for the Mac platform, one of the first 3D games to be able to drive a 640 x 480 x 8-bit display at reasonable frame rates in an era when the PC clone's VGA at 320 x 240 x 4-bit was the standard. The graphics engine was combined with a simple Mac interface, a set of randomized missions, and a number of technical features that greatly enhanced the game's playability and made it a lasting favorite into the mid-1990s. The original game was followed with a missions disk in 1992, Hellcats: Missions at Leyte Gulf, which greatly increased the visual detail and added many more objects to the game.

A variety of computer graphic techniques have been used to display video game content throughout the history of video games. The predominance of individual techniques has evolved over time, primarily due to hardware advances and restrictions such as the processing power of central or graphics processing units.

References

  1. 1 2 "The Next Generation 1996 Lexicon A to Z". Next Generation . No. 15. Imagine Media. March 1996. p. 32. See entries on "Depth shading" and "Draw in".
  2. "'GTA 5' PS4 Vs. PS3 Graphics: Luscious Visuals Take Our Breath Away [VIDEO, COMPARISON]". Latin Times . September 15, 2014. Retrieved 2015-11-17.