Lightmap

Figure (Lightmap Cube Sample.png): Cube with a simple lightmap (shown on the right).

A lightmap is a data structure used in lightmapping, a form of surface caching in which the brightness of surfaces in a virtual scene is pre-calculated and stored in texture maps for later use. Lightmaps are most commonly applied to static objects in applications that use real-time 3D computer graphics, such as video games, in order to provide lighting effects such as global illumination at a relatively low computational cost.

History

John Carmack's Quake was the first computer game to use lightmaps to augment rendering.[1] Before lightmaps were invented, real-time applications relied purely on Gouraud shading to interpolate vertex lighting for surfaces. This allowed only low-frequency lighting information, and could create clipping artifacts close to the camera without perspective-correct interpolation. Discontinuity meshing was sometimes used, especially with radiosity solutions, to adaptively improve the resolution of vertex lighting information; however, the additional cost in primitive setup for real-time rasterization was generally prohibitive. Quake's software rasterizer used surface caching to apply lighting calculations in texture space once, when polygons initially appeared within the viewing frustum, effectively creating temporary 'lit' versions of the currently visible textures as the viewer navigated the scene.
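
The surface-cache idea can be illustrated with a short sketch. The C fragment below is a minimal, hypothetical illustration, not Quake's actual data structures: greyscale 8-bit data and the Surface layout are assumptions. The base texture and lightmap are combined once, on first use, and the lit result is cached for the rasterizer.

```c
#include <stdint.h>
#include <stdlib.h>

/* Hypothetical surface-cache sketch: when a polygon first becomes
   visible, its base texture is combined with its lightmap once, and
   the lit result is cached and reused until the surface leaves view.
   Greyscale 8-bit data is assumed for brevity. */
typedef struct {
    int w, h;
    uint8_t *texels;   /* base texture, w*h bytes */
    uint8_t *lumels;   /* lightmap, upsampled to w*h bytes */
    uint8_t *cache;    /* lit surface, built lazily; NULL until needed */
} Surface;

/* Return the lit surface, building it on first use. */
uint8_t *get_lit_surface(Surface *s) {
    if (!s->cache) {
        s->cache = malloc((size_t)s->w * s->h);
        for (int i = 0; i < s->w * s->h; ++i)
            s->cache[i] = (uint8_t)(s->texels[i] * s->lumels[i] / 255);
    }
    return s->cache;  /* the rasterizer samples this instead of the raw texture */
}
```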

As consumer 3D graphics hardware capable of multitexturing became available, lightmapping grew more popular, and engines began to combine lightmaps in real time as a secondary multiply-blend texture layer.
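
On such hardware the combination is a single multiply per texture unit. As a hedged sketch, the legacy fixed-function OpenGL multitexture path (OpenGL 1.3 or the ARB_multitexture extension) can express it as follows; the texture handles and the surrounding renderer are assumed.

```c
#include <GL/gl.h>  /* assumes headers exposing OpenGL 1.3 multitexturing */

/* Bind a diffuse texture on unit 0 and a lightmap on unit 1, combined
   by multiplication (GL_MODULATE): dark lumels darken the base texture. */
void bind_lightmapped_surface(GLuint diffuse_tex, GLuint lightmap_tex) {
    glActiveTexture(GL_TEXTURE0);
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, diffuse_tex);

    glActiveTexture(GL_TEXTURE1);
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, lightmap_tex);
    glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
}
```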

Limitations

Lightmaps are composed of lumels[2] (lumination elements), analogous to texels in texture mapping. Smaller lumels yield a higher-resolution lightmap, providing finer lighting detail at the price of reduced performance and increased memory usage. For example, a lightmap scale of 4 lumels per world unit would give a lower quality than a scale of 16 lumels per world unit. Thus, in using the technique, level designers and 3D artists often have to make a compromise between performance and quality; if high-resolution lightmaps are used too frequently, the application may consume excessive system resources, negatively affecting performance. Lightmap resolution and scaling may also be limited by the amount of disk storage space, bandwidth/download time, or texture memory available to the application. Some implementations attempt to pack multiple lightmaps together in a process known as atlasing[3] to help circumvent these limitations.

Lightmap resolution and scale are two different things. The resolution is the area, in pixels, available for storing one or more surfaces' lightmaps. The number of individual surfaces that can fit on a lightmap is determined by the scale. Lower scale values mean higher quality and more space taken on a lightmap; higher scale values mean lower quality and less space taken. A surface's lightmap can cover the same area as the surface (a 1:1 ratio) or a smaller area, in which case the lightmap is stretched to fit.
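
The memory impact of the scale setting is straightforward to work out. The small sketch below (illustrative names, not any particular engine's tooling) computes a surface's lightmap dimensions from its world-space size and the chosen lumel density; note that quadrupling the linear density multiplies the lumel count by sixteen.

```c
#include <math.h>
#include <stdio.h>

/* Illustrative arithmetic for the scale/resolution trade-off: the lumel
   density (lumels per world unit) fixes a surface's lightmap dimensions,
   and therefore how much atlas space and memory it consumes. */
void lightmap_size(double width_units, double height_units,
                   double lumels_per_unit, int *out_w, int *out_h) {
    *out_w = (int)ceil(width_units * lumels_per_unit);
    *out_h = (int)ceil(height_units * lumels_per_unit);
}

int main(void) {
    int w, h;
    lightmap_size(8, 4, 4, &w, &h);   /* 8x4-unit wall at 4 lumels/unit */
    printf("%dx%d = %d lumels\n", w, h, w * h);   /* 32x16  = 512  */
    lightmap_size(8, 4, 16, &w, &h);  /* same wall at 16 lumels/unit   */
    printf("%dx%d = %d lumels\n", w, h, w * h);   /* 128x64 = 8192 */
    return 0;
}
```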

Lightmaps in games are usually colored texture maps or per-vertex colors. They are usually flat, without information about the light's direction, although some game engines use multiple lightmaps to provide approximate directional information to combine with normal maps. Lightmaps may also store separate precalculated components of lighting information for semi-dynamic lighting with shaders, such as ambient occlusion and sunlight shadowing.
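
As a rough sketch of the separate-components idea (the Lumel layout and function names here are assumptions for illustration), a baker might store an ambient-occlusion term and a sun-visibility mask per lumel, leaving the light colors and the directional N·L term to be evaluated at run time:

```c
#include <stdint.h>

/* Hypothetical two-component lumel: baked ambient occlusion and baked
   sun visibility (shadow mask), each stored as 0-255. */
typedef struct { uint8_t ao; uint8_t sun_visibility; } Lumel;

/* Combine the baked terms with run-time light colours; ndotl is the
   dynamic N.L factor against the (possibly moving) sun direction. */
void shade_semi_dynamic(const Lumel *lm, int index,
                        const float ambient_rgb[3], const float sun_rgb[3],
                        float ndotl, float out_rgb[3]) {
    float ao  = lm[index].ao / 255.0f;
    float vis = lm[index].sun_visibility / 255.0f;
    for (int c = 0; c < 3; ++c)
        out_rgb[c] = ambient_rgb[c] * ao          /* occluded ambient  */
                   + sun_rgb[c] * vis * ndotl;    /* shadowed sunlight */
}
```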

Creation

When creating lightmaps, any lighting model may be used, because the lighting is entirely precomputed and real-time performance is not always a necessity. A variety of techniques, including ambient occlusion, direct lighting with sampled shadow edges, and full radiosity[4] bounce-light solutions, are typically used. Modern 3D packages include specific plugins for applying lightmap UV coordinates, atlasing multiple surfaces into single texture sheets, and rendering the maps themselves. Alternatively, game engine pipelines may include custom lightmap creation tools. An additional consideration is the use of compressed DXT textures, which are subject to blocking artifacts; for best results, individual surfaces should not collide on 4x4 texel chunks.

In all cases, soft shadows for static geometry are possible if simple occlusion tests (such as basic ray-tracing) are used to determine which lumels are visible to the light. However, the actual softness of the shadows is determined by how the engine interpolates the lumel data across a surface, and can result in a pixelated look if the lumels are too large. See texture filtering.
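
A minimal baking loop along these lines might look like the following sketch, assuming a hypothetical scene_occluded() ray query and point lights with inverse-square falloff; a production baker would add area lights, multiple samples per lumel, and filtering.

```c
#include <math.h>

typedef struct { float x, y, z; } Vec3;

static Vec3  vsub(Vec3 a, Vec3 b) { return (Vec3){ a.x - b.x, a.y - b.y, a.z - b.z }; }
static float vdot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

/* Assumed to exist elsewhere: returns nonzero if any geometry blocks
   the segment between the two points (a shadow-ray query). */
extern int scene_occluded(Vec3 from, Vec3 to);

/* Bake one lumel at world position p with surface normal n (normalized). */
float bake_lumel(Vec3 p, Vec3 n,
                 const Vec3 *lights, const float *intensity, int nlights) {
    float total = 0.0f;
    for (int i = 0; i < nlights; ++i) {
        Vec3 d = vsub(lights[i], p);
        float r2  = vdot(d, d);
        float len = sqrtf(r2);
        Vec3 dir  = { d.x / len, d.y / len, d.z / len };
        float ndotl = vdot(n, dir);
        if (ndotl <= 0.0f) continue;                 /* light behind surface   */
        if (scene_occluded(p, lights[i])) continue;  /* shadow ray blocked     */
        total += intensity[i] * ndotl / r2;          /* inverse-square falloff */
    }
    return total;
}
```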

Lightmaps can also be calculated in real time[5] to produce good-quality colored lighting effects that are not prone to the defects of Gouraud shading, although shadow creation must still be done using another method, such as stencil shadow volumes or shadow mapping, since real-time ray tracing is still too slow for most 3D engines on modern hardware.
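
One common pattern for such run-time updates, sketched below under assumed names (recompute_region stands in for the actual lighting evaluation), is to recompute only the lumels a moving light can affect and upload just that sub-rectangle of the lightmap texture each frame:

```c
#include <GL/gl.h>
#include <stdint.h>

/* Hypothetical stand-in: re-evaluates lighting for the lumels in the
   given rectangle of the CPU-side RGB lightmap. */
extern void recompute_region(uint8_t *rgb, int x, int y, int w, int h);

void update_dynamic_lightmap(GLuint lightmap_tex, uint8_t *cpu_copy,
                             int lm_width, int x, int y, int w, int h) {
    recompute_region(cpu_copy, x, y, w, h);

    /* Upload only the changed sub-rectangle to the GPU copy. */
    glBindTexture(GL_TEXTURE_2D, lightmap_tex);
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
    glPixelStorei(GL_UNPACK_ROW_LENGTH, lm_width);
    glTexSubImage2D(GL_TEXTURE_2D, 0, x, y, w, h,
                    GL_RGB, GL_UNSIGNED_BYTE,
                    cpu_copy + 3 * (y * lm_width + x));
    glPixelStorei(GL_UNPACK_ROW_LENGTH, 0);
}
```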

Photon mapping can be used to calculate global illumination for light maps.

Alternatives

Vertex lighting

In vertex lighting, lighting information is computed per vertex and stored in vertex color attributes. The two techniques may be combined; for example, vertex color values may be stored for high-detail meshes while lightmaps are used only for coarser geometry.
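
For contrast with lightmapping, a minimal per-vertex lighting pass (illustrative types; a single directional light is assumed) evaluates a diffuse term at each vertex and stores it in the vertex color attribute, which the rasterizer then interpolates across each triangle (Gouraud shading):

```c
#include <math.h>

typedef struct { float x, y, z; } V3;
typedef struct { V3 position, normal; float color[3]; } Vertex;

static float v3dot(V3 a, V3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

/* light_dir points from the surface toward the light and is normalized. */
void light_vertices(Vertex *verts, int n,
                    V3 light_dir, const float light_rgb[3]) {
    for (int i = 0; i < n; ++i) {
        float ndotl = v3dot(verts[i].normal, light_dir);
        if (ndotl < 0.0f) ndotl = 0.0f;  /* clamp back-facing vertices */
        for (int c = 0; c < 3; ++c)
            verts[i].color[c] = light_rgb[c] * ndotl;
    }
}
```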

Discontinuity mapping

In discontinuity mapping, the scene may be further subdivided and clipped along major changes in light and dark to better define shadows.

References

  1. Abrash, Michael. "Quake's Lighting Model: Surface Caching". www.bluesnews.com. Retrieved 2015-09-07.
  2. Channa, Keshav (July 21, 2003). "flipcode - Light Mapping - Theory and Implementation". www.flipcode.com. Retrieved 2015-09-07.
  3. "Texture Atlasing Whitepaper" (PDF). nvidia.com. NVIDIA. 2004-07-07. Retrieved 2015-09-07.
  4. Mitchell, Jason; McTaggart, Gary; Green, Chris. "Shading in Valve's Source Engine" (PDF). Retrieved 2019-06-07.
  5. "Dynamic Lightmaps in OpenGL". joshbeam.com. November 16, 2003. Retrieved 2014-07-07.