Terrain rendering

STL 3D model of Penang Island terrain based on ASTER Global DEM data

Terrain rendering covers a variety of methods for depicting real-world or imaginary world surfaces. The most common form of terrain rendering is the depiction of Earth's surface.


It is used in various applications to give an observer a frame of reference. It is also often used in combination with rendering of non-terrain objects, such as trees, buildings, rivers, etc.

There are two major modes of terrain rendering: top-down and perspective rendering. Top-down terrain rendering has been known for centuries in the form of cartographic maps. Perspective terrain rendering has also been known for quite some time, but only with the advent of computers and computer graphics has perspective rendering become mainstream.

This article describes perspective terrain rendering.

Structure

A landscape rendered in Outerra

A typical terrain rendering application consists of a terrain database, a central processing unit (CPU), a dedicated graphics processing unit (GPU), and a display. The application is configured to start at an initial location in world space, and its output is a screen-space representation of the real world on the display. The application uses the CPU to identify and load the terrain data corresponding to the current location from the terrain database, then applies the required transformations to build a mesh of points that can be rendered by the GPU. The GPU completes the geometric transformations, creating screen-space primitives (such as polygons) that together form a picture closely resembling that location in the real world.
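As an illustration of the CPU-side step, the sketch below turns one tile of heightmap samples into vertex and triangle-index arrays of the kind handed to a GPU. It is a minimal example in Python with NumPy; the tile size, grid spacing, and function names are illustrative rather than taken from any particular engine.

    import numpy as np

    def heightmap_to_mesh(heights, cell_size=1.0):
        """Turn a 2-D grid of elevation samples into vertex and index arrays
        that a GPU can render as a triangle mesh.

        `heights` is assumed to be a (rows, cols) array of elevations for one
        terrain tile; the grid spacing and vertex layout are illustrative."""
        rows, cols = heights.shape

        # One vertex per grid sample: (x, y, z) with z taken from the heightmap.
        xs, ys = np.meshgrid(np.arange(cols) * cell_size,
                             np.arange(rows) * cell_size)
        vertices = np.column_stack([xs.ravel(), ys.ravel(), heights.ravel()])

        # Two triangles per grid cell, listed as vertex indices.
        indices = []
        for r in range(rows - 1):
            for c in range(cols - 1):
                i = r * cols + c
                indices.append((i, i + 1, i + cols))             # upper-left triangle
                indices.append((i + 1, i + cols + 1, i + cols))  # lower-right triangle
        return vertices, np.array(indices, dtype=np.uint32)

    # Example: a 4x4 tile of synthetic elevations.
    tile = np.random.rand(4, 4) * 100.0
    verts, tris = heightmap_to_mesh(tile)
    print(verts.shape, tris.shape)  # (16, 3) vertices, (18, 3) triangles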

Texture

There are a number of ways to texture the terrain surface. Some applications benefit from artificial textures, such as elevation coloring, checkerboards, or other generic patterns. Other applications attempt to recreate the real-world surface as faithfully as possible using aerial photography and satellite imagery.
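A minimal sketch of elevation coloring, one of the artificial texturing schemes mentioned above: normalized elevation is mapped to an RGB color by interpolating between a few color stops. The specific color stops and the Python/NumPy implementation are illustrative assumptions.

    import numpy as np

    def elevation_color(heights,
                        stops=((0.0, (0.1, 0.4, 0.1)),    # lowlands: green
                               (0.5, (0.5, 0.4, 0.3)),    # mid slopes: brown
                               (1.0, (1.0, 1.0, 1.0)))):  # peaks: white
        """Map normalized elevation to an RGB color by linear interpolation
        between the given color stops."""
        span = heights.max() - heights.min()
        t = (heights - heights.min()) / max(span, 1e-9)   # normalize to [0, 1]
        keys = np.array([s[0] for s in stops])
        cols = np.array([s[1] for s in stops])
        return np.stack([np.interp(t, keys, cols[:, ch]) for ch in range(3)],
                        axis=-1)

    heights = np.random.rand(64, 64) * 2000.0   # synthetic elevations in metres
    texture = elevation_color(heights)          # (64, 64, 3) RGB texture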

In video games, texture splatting is commonly used to texture the terrain surface by blending several surface textures (such as grass, rock, and snow) across it.
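The sketch below shows the core idea of texture splatting: several tiling textures are blended per pixel according to a weight (splat) map. The example textures, weight layout, and NumPy implementation are illustrative assumptions, not any particular game's approach.

    import numpy as np

    def splat(textures, weights):
        """Blend several tiling textures using a per-pixel weight (splat) map.

        `textures` is a list of (H, W, 3) RGB arrays (e.g. grass, rock, snow);
        `weights` is an (H, W, N) array giving each texture's weight per pixel."""
        weights = weights / np.clip(weights.sum(axis=-1, keepdims=True), 1e-9, None)
        out = np.zeros_like(textures[0], dtype=np.float32)
        for i, tex in enumerate(textures):
            out += weights[..., i:i + 1] * tex
        return out

    h, w = 128, 128
    grass = np.full((h, w, 3), (0.2, 0.6, 0.2), dtype=np.float32)
    rock  = np.full((h, w, 3), (0.5, 0.5, 0.5), dtype=np.float32)
    snow  = np.full((h, w, 3), (0.95, 0.95, 0.95), dtype=np.float32)

    # Toy weight map: more snow toward the top rows of the tile.
    t = np.linspace(0.0, 1.0, h)[:, None, None] * np.ones((h, w, 1), dtype=np.float32)
    weights = np.concatenate([1.0 - t, 0.3 * np.ones_like(t), t], axis=-1)

    blended = splat([grass, rock, snow], weights)  # (128, 128, 3) blended texture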

Generation

There are a great variety of methods for generating terrain surfaces. The main problem solved by all of them is managing the number of polygons processed and rendered. It is possible to create a very detailed picture of the world using billions of data points, but such applications are limited to static images. Most uses of terrain rendering involve moving images, which require the software to decide how to simplify (by discarding or approximating) the source terrain data. Virtually all terrain rendering applications use level of detail to manage the number of data points processed by the CPU and GPU. There are several modern algorithms for generating terrain surfaces. [1] [2] [3] [4]
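As a simple example of distance-based level of detail, the sketch below picks a grid resolution for each terrain tile from its distance to the camera, halving the resolution as the distance doubles. The thresholds and function names are illustrative assumptions; real engines typically combine such heuristics with quadtrees or screen-space error metrics.

    import numpy as np

    def select_lod(tile_center, camera_pos, base_resolution=256,
                   lod_distance=500.0, max_lod=5):
        """Choose a level of detail for a terrain tile from its distance to
        the camera: each time the distance doubles past `lod_distance`,
        halve the grid resolution. Thresholds here are illustrative."""
        distance = np.linalg.norm(np.asarray(tile_center) - np.asarray(camera_pos))
        lod = int(np.clip(np.log2(max(distance / lod_distance, 1.0)), 0, max_lod))
        return base_resolution >> lod  # grid samples per tile edge at this LOD

    camera = (0.0, 0.0, 100.0)
    for center in [(100.0, 0.0, 0.0), (1200.0, 0.0, 0.0), (9000.0, 0.0, 0.0)]:
        print(center, "->", select_lod(center, camera), "samples per edge")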

Applications

Terrain rendering is widely used in computer games to represent both Earth's surface and imaginary worlds. Some games also have terrain deformation (or deformable terrain).

One important application of terrain rendering is in synthetic vision systems. Pilots benefit greatly from being able to see the terrain surface at all times, regardless of visibility conditions outside the aircraft.

See also

Rendering (computer graphics)
Spatial anti-aliasing
Texture mapping
Scientific visualization
Volume rendering
Polygon mesh
Shader
Level of detail (computer graphics)
General-purpose computing on graphics processing units
Clipping (computer graphics)
Per-pixel lighting
Heightmap
3D rendering
3D computer graphics
Computer graphics
Grome
3D modeling
Tessellation (computer graphics)
Glossary of computer graphics

References

  1. Stewart, J. (1999). "Fast Horizon Computation at All Points of a Terrain with Visibility and Shading Applications". IEEE Transactions on Visualization and Computer Graphics, 4(1).
  2. Bashkov, E.; Zori, S.; Suvorova, I. (2000). "Modern Methods of Environment Visual Simulation". In Simulationstechnik, 14. Symposium in Hamburg, pp. 509–514. SCS Europe BVBA, Ghent, Belgium.
  3. Bashkov, E. A.; Zori, S. A. (2001). "Visual Simulation of an Earth Surface by Fast Horizon Computation Algorithm". In Simulation und Visualisierung, pp. 203–215. Institut für Simulation und Graphik, Magdeburg, Germany.
  4. Ruzinoor Che Mat & Norani Nordin (2004). "Silhouette Rendering Algorithm Using Vectorisation Technique from Kedah Topography Maps". Proceedings of the 2nd National Conference on Computer Graphics and Multimedia (CoGRAMM'04), Selangor, December 2004.