Global illumination [1] (GI), or indirect illumination, is a group of algorithms used in 3D computer graphics that are meant to add more realistic lighting to 3D scenes. Such algorithms take into account not only the light that comes directly from a light source (direct illumination), but also subsequent cases in which light rays from the same source are reflected by other surfaces in the scene, whether reflective or not (indirect illumination).
Theoretically, reflections, refractions, and shadows are all examples of global illumination, because when simulating them, one object affects the rendering of another (as opposed to an object being affected only by a direct source of light). In practice, however, only the simulation of diffuse inter-reflection or caustics is called global illumination.
Images rendered using global illumination algorithms often appear more photorealistic than those rendered using only direct illumination algorithms. However, such images are computationally more expensive and consequently much slower to generate. One common approach is to compute the global illumination of a scene and store that information with the geometry (e.g., radiosity). The stored data can then be used to generate images from different viewpoints, for example when producing walkthroughs of a scene, without repeating the expensive lighting calculations.
Radiosity, ray tracing, beam tracing, cone tracing, path tracing, volumetric path tracing, Metropolis light transport, ambient occlusion, photon mapping, signed distance fields, and image-based lighting are all examples of algorithms used in global illumination, some of which may be combined to yield results that are accurate but not fast.
These algorithms model diffuse inter-reflection, which is a very important part of global illumination; most of them (radiosity excluded) also model specular reflection, which makes them more accurate solutions to the lighting equation and provides a more realistically illuminated scene. The algorithms used to calculate the distribution of light energy between surfaces of a scene are closely related to heat-transfer simulations performed using finite-element methods in engineering design.
Achieving accurate computation of global illumination in real time remains difficult. [2] In real-time 3D graphics, the diffuse inter-reflection component of global illumination is sometimes approximated by an "ambient" term in the lighting equation, also called "ambient lighting" or "ambient color" in 3D software packages. Though this approximation (also known as a "cheat", because it is not really a global illumination method) is computationally cheap, when used alone it does not provide an adequately realistic effect: ambient lighting is known to "flatten" shadows in 3D scenes, making the overall visual effect blander. Used properly, however, ambient lighting can be an efficient way to make up for a lack of processing power.
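As a concrete illustration of what this approximation amounts to, here is a minimal sketch (in Python, not taken from any particular package) of a local lighting equation in which a constant ambient term stands in for all indirect light; `k_a` and `k_d` are illustrative ambient and diffuse coefficients:

```python
def shade(normal, light_dir, light_color, surface_color, k_a=0.1, k_d=0.9):
    """Lambertian direct lighting plus a flat ambient approximation."""
    n_dot_l = max(0.0, sum(n * l for n, l in zip(normal, light_dir)))
    return [k_a * s + k_d * n_dot_l * lc * s      # ambient + direct
            for lc, s in zip(light_color, surface_color)]

# A surface facing away from the light still receives the ambient term:
print(shade((0.0, 1.0, 0.0), (0.0, -1.0, 0.0), (1.0, 1.0, 1.0), (0.8, 0.2, 0.2)))
```

Because the ambient term is independent of light direction and scene geometry, every shadowed point receives the same constant contribution, which is exactly how this cheat "flattens" shadows.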
Increasingly, 3D programs use specialized algorithms that can effectively simulate global illumination. These algorithms are numerical approximations of the rendering equation. Well-known algorithms for computing global illumination include path tracing, photon mapping, and radiosity. The following approaches can be distinguished here:
In light-path notation, global illumination corresponds to paths of the type L(D|S)*E.
A full treatment can be found in [3].
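Since L(D|S)*E is literally a regular expression over the vertices of a light path, the classification can be checked mechanically. A minimal sketch (the path strings are illustrative):

```python
import re

# A path starts at a light (L), bounces off any number of diffuse (D)
# or specular (S) surfaces, and ends at the eye (E).
GI_PATHS = re.compile(r"^L[DS]*E$")

for path in ("LE",     # looking straight at a light source
             "LDE",    # one diffuse bounce: direct illumination
             "LDDE",   # two diffuse bounces: diffuse inter-reflection
             "LSDE"):  # specular-to-diffuse transport, e.g. a caustic
    print(path, bool(GI_PATHS.match(path)))
```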
Another way to simulate real global illumination is the use of high-dynamic-range images (HDRIs), also known as environment maps, which encircle and illuminate the scene. This process is known as image-based lighting.
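A sketch of the lookup at the core of image-based lighting: an equirectangular HDR image, here a hypothetical `hdri` indexed as `hdri[row][col]`, is treated as a sphere of incoming light addressed by direction. The mapping below is a common convention, not tied to any particular file format:

```python
import math

def sample_environment(hdri, direction):
    """Look up the radiance arriving from `direction` (a unit vector)."""
    x, y, z = direction
    u = 0.5 + math.atan2(z, x) / (2.0 * math.pi)      # longitude -> [0, 1)
    v = math.acos(max(-1.0, min(1.0, y))) / math.pi   # latitude  -> [0, 1]
    rows, cols = len(hdri), len(hdri[0])
    return hdri[min(int(v * rows), rows - 1)][min(int(u * cols), cols - 1)]

hdri = [[(0.1, 0.2, 0.8)] * 8 for _ in range(4)]      # stand-in 4x8 "sky"
print(sample_environment(hdri, (0.0, 1.0, 0.0)))      # light from straight up
```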
Method | Description/Notes |
---|---|
Ray tracing | Several enhanced variants exist for solving problems related to sampling, aliasing, and soft shadows: distributed ray tracing, cone tracing, and beam tracing. |
Path tracing | Unbiased; variants include bidirectional path tracing and energy redistribution path tracing [4] |
Photon mapping | Consistent, biased; enhanced variants: progressive photon mapping and stochastic progressive photon mapping [5] |
Lightcuts | Enhanced variants: multidimensional lightcuts and bidirectional lightcuts [6] |
Point-based global illumination | Extensively used in movie animation [7] [8] |
Radiosity | Finite element method; very good for precomputation. Improved versions are instant radiosity [9] and bidirectional instant radiosity [10] |
Metropolis light transport | Builds upon bidirectional path tracing; unbiased. A multiplexed variant exists [11] |
Spherical harmonic lighting | Encodes global illumination results for real-time rendering of static scenes |
Ambient occlusion | - |
Voxel-based global illumination | Several variants exist, including voxel cone tracing global illumination, [12] sparse voxel octree global illumination, and voxel global illumination (VXGI) [13] |
Light propagation volumes global illumination [14] | Light propagation volumes is a technique for approximating global illumination in real time. It uses lattices and spherical harmonics (SH) to represent the spatial and angular distribution of light in the scene. A variant is cascaded light propagation volumes. [15] |
Deferred radiance transfer global illumination [16] | |
Deep G-buffer based global illumination [17] | |
Signed distance fields dynamic diffuse global illumination [18] | |
Global illumination based on surfels [19] | |
Rendering or image synthesis is the process of generating a photorealistic or non-photorealistic image from a 2D or 3D model by means of a computer program. The resulting image is referred to as a rendering. Multiple models can be defined in a scene file containing objects in a strictly defined language or data structure. The scene file contains geometry, viewpoint, textures, lighting, and shading information describing the virtual scene. The data contained in the scene file is then passed to a rendering program to be processed and output to a digital image or raster graphics image file. The term "rendering" is analogous to the concept of an artist's impression of a scene. The term "rendering" is also used to describe the process of calculating effects in a video editing program to produce the final video output.
In 3D computer graphics, radiosity is an application of the finite element method to solving the rendering equation for scenes with surfaces that reflect light diffusely. Unlike rendering methods that use Monte Carlo algorithms, which handle all types of light paths, typical radiosity accounts only for paths that leave a light source and are reflected diffusely some number of times before hitting the eye. Radiosity is a global illumination algorithm in the sense that the illumination arriving on a surface comes not just directly from the light sources, but also from other surfaces reflecting light. Radiosity is viewpoint independent, which increases the computation involved but makes the results useful for all viewpoints.
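The finite-element character of the method shows in the discrete radiosity equation B_i = E_i + rho_i * sum_j F_ij * B_j, which can be solved by simple iteration. A minimal sketch, assuming the patch emissions E, reflectivities rho, and patch-to-patch form factors F have already been computed:

```python
def solve_radiosity(E, rho, F, iterations=50):
    """Jacobi iteration on B_i = E_i + rho_i * sum_j F_ij * B_j."""
    n = len(E)
    B = E[:]                        # start from pure emission
    for _ in range(iterations):
        B = [E[i] + rho[i] * sum(F[i][j] * B[j] for j in range(n))
             for i in range(n)]     # gather light from every other patch
    return B                        # viewpoint-independent patch radiosities

# Two facing patches, one emissive: light bounces until convergence.
print(solve_radiosity(E=[1.0, 0.0], rho=[0.5, 0.5], F=[[0.0, 0.6], [0.6, 0.0]]))
```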
In 3D computer graphics, ray tracing is a technique for modeling light transport for use in a wide variety of rendering algorithms for generating digital images.
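At the core of every such algorithm is intersecting a ray with scene geometry. A sketch for the simplest case, a sphere, assuming a unit-length ray direction:

```python
import math

def ray_sphere(o, d, c, r):
    """Nearest t > 0 where ray o + t*d meets the sphere (center c, radius r)."""
    oc = [oi - ci for oi, ci in zip(o, c)]
    b = 2.0 * sum(di * oci for di, oci in zip(d, oc))
    cc = sum(oci * oci for oci in oc) - r * r
    disc = b * b - 4.0 * cc            # quadratic discriminant (a = 1)
    if disc < 0:
        return None                    # the ray misses the sphere
    t = (-b - math.sqrt(disc)) / 2.0
    if t <= 0:
        t = (-b + math.sqrt(disc)) / 2.0   # origin inside the sphere
    return t if t > 0 else None

print(ray_sphere((0, 0, 0), (0.0, 0.0, 1.0), (0.0, 0.0, 5.0), 1.0))  # -> 4.0
```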
Metropolis light transport (MLT) is a global illumination application of a variant of the Monte Carlo method called the Metropolis–Hastings algorithm to the rendering equation for generating images from detailed physical descriptions of three-dimensional scenes.
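The Metropolis–Hastings accept/reject step at the heart of MLT can be sketched on a toy one-dimensional target density rather than full path space. In MLT the "sample" is an entire light path and the target is its image contribution; the bump function below is purely illustrative:

```python
import random

def metropolis_samples(target, x0, steps=10000, mutation=0.1):
    x, fx = x0, target(x0)
    assert fx > 0, "start from a sample with non-zero contribution"
    for _ in range(steps):
        y = x + random.uniform(-mutation, mutation)   # propose a small mutation
        fy = target(y)
        if fy > 0 and random.random() < min(1.0, fy / fx):
            x, fx = y, fy                             # accept the mutated sample
        yield x                                       # a rejection repeats x

bump = lambda t: max(0.0, 1.0 - 4.0 * abs(t - 0.3))   # toy target density
samples = list(metropolis_samples(bump, x0=0.5))
print(sum(samples) / len(samples))                    # clusters near the peak at 0.3
```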
In computer graphics, photon mapping is a two-pass global illumination rendering algorithm developed by Henrik Wann Jensen between 1995 and 2001 that approximately solves the rendering equation for integrating light radiance at a given point in space. Rays from the light source and rays from the camera are traced independently until some termination criterion is met, then they are connected in a second step to produce a radiance value. The algorithm is used to realistically simulate the interaction of light with different types of objects. Specifically, it is capable of simulating the refraction of light through a transparent substance such as glass or water, diffuse interreflection between illuminated objects, the subsurface scattering of light in translucent materials, and some of the effects caused by particulate matter such as smoke or water vapor. Photon mapping can also be extended to more accurate simulations of light, such as spectral rendering. Progressive photon mapping (PPM) starts with ray tracing and then adds more and more photon mapping passes to provide a progressively more accurate render.
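A deliberately crude sketch of the two passes, using a point light above a single diffuse floor plane (y = 0). All geometry, sampling, and constants are illustrative; this is the shape of the algorithm, not Jensen's implementation:

```python
import math, random

light_pos, photon_count = (0.0, 1.0, 0.0), 10000

def emit_photons():
    """Pass 1: scatter photons from the light, store where they land."""
    photons = []
    for _ in range(photon_count):
        dx, dz = random.uniform(-1, 1), random.uniform(-1, 1)
        # A ray toward (dx, -1, dz) from height 1 meets y = 0 at t = 1.
        photons.append((light_pos[0] + dx, light_pos[2] + dz))
    return photons

def gather(photons, x, z, radius=0.1):
    """Pass 2: density estimation over photons near the query point."""
    hits = sum(1 for px, pz in photons
               if (px - x) ** 2 + (pz - z) ** 2 < radius ** 2)
    return hits / (math.pi * radius ** 2 * photon_count)

photon_map = emit_photons()            # first pass builds the photon map
print(gather(photon_map, 0.0, 0.0))    # second pass estimates brightness
```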
Shading refers to the depiction of depth perception in 3D models or illustrations by varying the level of darkness. Shading tries to approximate local behavior of light on the object's surface and is not to be confused with techniques of adding shadows, such as shadow mapping or shadow volumes, which fall under global behavior of light.
Ray casting is the methodological basis for 3D CAD/CAM solid modeling and image rendering. It is essentially the same as ray tracing for computer graphics, where virtual light rays are "cast" or "traced" on their path from the focal point of a camera through each pixel in the camera sensor to determine what is visible along the ray in the 3D scene. The term "ray casting" was introduced by Scott Roth while at the General Motors Research Labs from 1978 to 1980. His paper "Ray Casting for Modeling Solids" describes how to model solid objects by combining primitive solids, such as blocks and cylinders, using the set operators union (+), intersection (&), and difference (-). The general idea of using these binary operators for solid modeling is largely due to Voelcker and Requicha's geometric modelling group at the University of Rochester. See solid modeling for a broad overview of solid modeling methods. An early demonstration of Roth's ray-casting system, from 1979, modeled a U-joint from cylinders and blocks combined in a binary tree.
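A sketch of those set operators acting on the span a ray spends inside each primitive solid. Real systems combine full interval lists; a single (t_enter, t_exit) pair per solid, with None for a miss, is enough to show the idea:

```python
def intersection(a, b):        # inside A & B
    if a is None or b is None:
        return None
    lo, hi = max(a[0], b[0]), min(a[1], b[1])
    return (lo, hi) if lo < hi else None

def union(a, b):               # inside A + B (assumes overlapping spans or a miss)
    if a is None: return b
    if b is None: return a
    return (min(a[0], b[0]), max(a[1], b[1]))

def difference(a, b):          # inside A - B (keeps the nearer surviving part of A)
    if a is None or b is None or b[0] >= a[1] or b[1] <= a[0]:
        return a
    if b[0] > a[0]:
        return (a[0], b[0])
    return (b[1], a[1]) if b[1] < a[1] else None

# A block with a hole cut through its far side, seen along one ray:
block, hole = (2.0, 5.0), (3.0, 6.0)
print(difference(block, hole))   # -> (2.0, 3.0); the entry point 2.0 is visible
```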
A lightmap is a data structure used in lightmapping, a form of surface caching in which the brightness of surfaces in a virtual scene is pre-calculated and stored in texture maps for later use. Lightmaps are most commonly applied to static objects in applications that use real-time 3D computer graphics, such as video games, in order to provide lighting effects such as global illumination at a relatively low computational cost.
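A sketch of the idea in use: the indirect lighting is baked offline into a texture, here a hypothetical `lightmap` indexed as `lightmap[row][col]` by 0-1 UV coordinates, and applied at render time as a cheap multiply:

```python
def lit_color(albedo, u, v, lightmap):
    """Apply precomputed lighting from a baked texture to a surface color."""
    rows, cols = len(lightmap), len(lightmap[0])
    row = min(int(v * rows), rows - 1)
    col = min(int(u * cols), cols - 1)
    return [a * lightmap[row][col] for a in albedo]  # baked GI per sample
```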
In computer graphics, the rendering equation is an integral equation in which the equilibrium radiance leaving a point is given as the sum of emitted plus reflected radiance under a geometric optics approximation. It was simultaneously introduced into computer graphics by David Immel et al. and James Kajiya in 1986. The various realistic rendering techniques in computer graphics attempt to solve this equation.
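In its common hemispherical form the equation reads:

```latex
L_o(\mathbf{x}, \omega_o) = L_e(\mathbf{x}, \omega_o)
  + \int_{\Omega} f_r(\mathbf{x}, \omega_i, \omega_o)\,
    L_i(\mathbf{x}, \omega_i)\,(\omega_i \cdot \mathbf{n})\,\mathrm{d}\omega_i
```

where L_o is the outgoing radiance at point x in direction ω_o, L_e is the emitted radiance, f_r is the BRDF, L_i is the incoming radiance, and the integral runs over the hemisphere Ω around the surface normal n.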
In 3D computer graphics, modeling, and animation, ambient occlusion is a shading and rendering technique used to calculate how exposed each point in a scene is to ambient lighting. For example, the interior of a tube is typically more occluded than the exposed outer surfaces, and becomes darker the deeper inside the tube one goes.
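A sketch of how that exposure can be estimated by sampling; the `scene_hit(origin, direction, max_dist)` visibility query is hypothetical, not part of any particular API:

```python
import math, random

def ambient_occlusion(point, normal, scene_hit, samples=64, max_dist=1.0):
    """Fraction of hemisphere rays that escape the scene near `point`."""
    unoccluded = 0
    for _ in range(samples):
        # Rejection-sample a unit direction, then flip it into the
        # hemisphere around `normal`.
        while True:
            d = [random.uniform(-1.0, 1.0) for _ in range(3)]
            n2 = sum(c * c for c in d)
            if 1e-6 < n2 <= 1.0:
                break
        inv = 1.0 / math.sqrt(n2)
        d = [c * inv for c in d]
        if sum(c * n for c, n in zip(d, normal)) < 0.0:
            d = [-c for c in d]
        if not scene_hit(point, d, max_dist):
            unoccluded += 1
    return unoccluded / samples   # 1.0 = fully exposed, 0.0 = fully occluded
```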
Beam tracing is an algorithm to simulate wave propagation. It was developed in the context of computer graphics to render 3D scenes, but it has also been used in other areas such as acoustics and electromagnetism simulations.
Path tracing is a computer graphics Monte Carlo method of rendering images of three-dimensional scenes such that the global illumination is faithful to reality. Fundamentally, the algorithm integrates all the illuminance arriving at a single point on the surface of an object. This illuminance is then reduced by a surface reflectance function (BRDF) to determine how much of it travels toward the viewpoint camera. This integration procedure is repeated for every pixel in the output image. When combined with physically accurate models of surfaces, accurate models of real light sources, and optically correct cameras, path tracing can produce still images that are indistinguishable from photographs.
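A minimal sketch of that recursion for purely diffuse surfaces, where `trace(origin, direction)` is a hypothetical scene query returning a hit tuple `(point, normal, emission, albedo)` or `None`. Averaging many `radiance` samples per pixel yields the image:

```python
import math, random

MAX_DEPTH = 5

def sample_hemisphere(normal):
    """Cosine-weighted direction: uniform sphere sample + normal, renormalised."""
    while True:
        d = [random.gauss(0.0, 1.0) for _ in range(3)]
        n = math.sqrt(sum(c * c for c in d))
        if n > 1e-9:
            d = [c / n for c in d]
            break
    d = [c + nc for c, nc in zip(d, normal)]
    n = math.sqrt(sum(c * c for c in d))
    return [c / n for c in d] if n > 1e-9 else list(normal)

def radiance(origin, direction, trace, depth=0):
    """One Monte Carlo sample of the rendering equation, diffuse surfaces only."""
    hit = trace(origin, direction)
    if hit is None or depth >= MAX_DEPTH:
        return (0.0, 0.0, 0.0)           # ray escaped, or path cut off
    point, normal, emission, albedo = hit
    incoming = radiance(point, sample_hemisphere(normal), trace, depth + 1)
    # With cosine-weighted sampling, the cosine and pdf terms cancel,
    # leaving emitted light plus albedo-scaled incoming light.
    return tuple(e + a * li for e, a, li in zip(emission, albedo, incoming))
```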
In computer graphics, per-pixel lighting refers to any technique for lighting an image or scene that calculates illumination for each pixel on a rendered image. This is in contrast to other popular methods of lighting such as vertex lighting, which calculates illumination at each vertex of a 3D model and then interpolates the resulting values over the model's faces to calculate the final per-pixel color values.
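A sketch of the contrast, using barycentric weights across one triangle; `shade(normal)` stands for any per-normal lighting function, and all names are illustrative:

```python
def vertex_lit(bary, vertex_colors):
    """Vertex lighting: colors were shaded at the vertices; interpolate them."""
    return [sum(b * vc[i] for b, vc in zip(bary, vertex_colors))
            for i in range(3)]

def pixel_lit(bary, vertex_normals, shade):
    """Per-pixel lighting: interpolate the normal, then shade this pixel."""
    n = [sum(b * vn[i] for b, vn in zip(bary, vertex_normals))
         for i in range(3)]
    length = sum(c * c for c in n) ** 0.5
    return shade([c / length for c in n])  # renormalise before shading
```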
3D rendering is the 3D computer graphics process of converting 3D models into 2D images on a computer. 3D renders may include photorealistic effects or non-photorealistic styles.
Computer graphics lighting is the collection of techniques used to simulate light in computer graphics scenes. While lighting techniques offer flexibility in the level of detail and functionality available, they also operate at different levels of computational demand and complexity. Graphics artists can choose from a variety of light sources, models, shading techniques, and effects to suit the needs of each application.
Kerkythea is a standalone rendering system that supports raytracing and Metropolis light transport, uses physically accurate materials and lighting, and is distributed as freeware. Currently, the program can be integrated with any software that can export files in obj and 3ds formats, including 3ds Max, Blender, LightWave 3D, SketchUp, Silo and Wings3D.
In 3D computer graphics rendering, a hemicube is one way to represent a 180° view from a surface or point in space.
Unbiased rendering in computer graphics refers to techniques that avoid systematic errors, or biases, in the radiance approximation of an image. This term specifically relates to statistical bias, not subjective bias. Unbiased rendering aims to replicate real-world lighting and shading as accurately as possible without shortcuts. Path tracing and its derivatives are examples of unbiased techniques, whereas traditional ray tracing methods are typically biased.
False radiosity is a 3D computer graphics technique used to create texture mapping for objects that emulates patch-interaction algorithms in radiosity rendering. Though practiced in some form since the late 1990s, the term was coined only around 2002 by architect Andrew Hartness, then head of 3D and real-time design at Ateliers Jean Nouvel.
This is a glossary of terms relating to computer graphics.