Light transport theory

Light transport theory deals with the mathematics behind calculating the energy transfers between media that affect visibility. This article is specific to light transport in rendering processes such as global illumination and high-dynamic-range imaging (HDRI).

Global illumination

Global illumination, or indirect illumination, is a general name for a group of algorithms used in 3D computer graphics that are meant to add more realistic lighting to 3D scenes. Such algorithms take into account not only the light that comes directly from a light source, but also subsequent cases in which light rays from the same source are reflected by other surfaces in the scene, whether reflective or not.

Light

Light Transport

The amount of light transported is measured by flux density, that is, radiant flux per unit area.
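As a standard radiometric relationship (not stated explicitly in the article), the flux density E at a point p is the differential radiant flux per differential area, measured in watts per square metre:

```latex
E(p) = \frac{\mathrm{d}\Phi}{\mathrm{d}A}
```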

Radiometry

Energy Transfer

Media

Models

Hemisphere

Given a surface S, a hemisphere H can be projected onto S to calculate the amount of incoming and outgoing light. If a point P is selected at random on the surface S, the incoming and outgoing light at P can be calculated from its projection onto the hemisphere.
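A common way to make this concrete (a standard formulation, not given in the original text) is to write the irradiance at P as an integral of the incoming radiance over the hemisphere of directions \Omega above the surface normal n, where \theta is the angle between the incoming direction \omega and n:

```latex
E(P) = \int_{\Omega} L_i(P, \omega)\,\cos\theta\,\mathrm{d}\omega
```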

Hemicube

The hemicube model works in the same way as the hemisphere model, except that a hemicube is projected instead of a hemisphere. The similarity is only conceptual: the integration is carried out with a different form factor.
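As a sketch of what that difference looks like in practice (these are the standard hemicube delta form factors from the radiosity literature, not derived here), a hemicube placed over P with half-width 1 and its top face at height 1 assigns each of its pixels a delta form factor. For a pixel at (x, y) on the top face, and for a pixel at lateral coordinate y and height z on a side face, with \Delta A the pixel area:

```latex
\Delta F_{\text{top}} = \frac{\Delta A}{\pi\,(x^{2} + y^{2} + 1)^{2}},
\qquad
\Delta F_{\text{side}} = \frac{z\,\Delta A}{\pi\,(y^{2} + z^{2} + 1)^{2}}
```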

Particle

Wave

Equations

Maxwell's Equations

Rendering

Rendering converts a model into an image either by simulating light transport to get physically based photorealistic images, or by applying some kind of style as in non-photorealistic rendering. The two basic operations in realistic rendering are transport (how much light gets from one place to another) and scattering (how surfaces interact with light).

Non-photorealistic rendering

Non-photorealistic rendering (NPR) is an area of computer graphics that focuses on enabling a wide variety of expressive styles for digital art. In contrast to traditional computer graphics, which has focused on photorealism, NPR is inspired by artistic styles such as painting, drawing, technical illustration, and animated cartoons. NPR has appeared in movies and video games in the form of "toon shading", as well as in scientific visualization, architectural illustration and experimental animation. An example of a modern use of this method is that of cel-shaded animation.

See also

Photon mapping

In computer graphics, photon mapping is a two-pass global illumination algorithm developed by Henrik Wann Jensen that approximately solves the rendering equation. Rays from the light source and rays from the camera are traced independently until some termination criterion is met, then they are connected in a second step to produce a radiance value. It is used to realistically simulate the interaction of light with different objects. Specifically, it is capable of simulating the refraction of light through a transparent substance such as glass or water, diffuse interreflection between illuminated objects, the subsurface scattering of light in translucent materials, and some of the effects caused by particulate matter such as smoke or water vapor. It can also be extended to more accurate simulations of light such as spectral rendering.
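The second pass estimates reflected radiance at a shading point by density estimation over nearby stored photons. The sketch below is a simplified illustration of that gather step, assuming a diffuse surface and a brute-force nearest-photon search; the Photon class and estimate_radiance function are hypothetical names, and a real implementation would use a kd-tree and the surface's actual BRDF.

```python
from dataclasses import dataclass
from math import pi, dist

@dataclass
class Photon:
    position: tuple   # (x, y, z) point where the photon was stored
    power: tuple      # (r, g, b) flux carried by the photon, in watts

def estimate_radiance(photons, x, albedo, k=50):
    """Estimate reflected radiance at point x from the k nearest photons."""
    nearest = sorted(photons, key=lambda p: dist(p.position, x))[:k]
    if not nearest:
        return (0.0, 0.0, 0.0)
    r2 = dist(nearest[-1].position, x) ** 2          # squared gather radius
    if r2 == 0.0:
        return (0.0, 0.0, 0.0)
    # Diffuse BRDF f_r = albedo / pi; photons are spread over a disc of area pi * r2.
    return tuple((albedo[c] / pi) * sum(p.power[c] for p in nearest) / (pi * r2)
                 for c in range(3))
```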

Radiosity (computer graphics)

In 3D computer graphics, radiosity is an application of the finite element method to solving the rendering equation for scenes with surfaces that reflect light diffusely. Unlike rendering methods that use Monte Carlo algorithms, which handle all types of light paths, typical radiosity accounts only for paths which leave a light source and are reflected diffusely some number of times before hitting the eye. Radiosity is a global illumination algorithm in the sense that the illumination arriving on a surface comes not just directly from the light sources, but also from other surfaces reflecting light. Radiosity is viewpoint independent, which increases the calculations involved, but makes the results useful for all viewpoints.
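The discrete system that the finite element method solves is the classical radiosity equation (standard notation, not specific to this article: B_i is the radiosity of patch i, E_i its emitted energy, \rho_i its diffuse reflectivity, and F_{ij} the form factor from patch i to patch j):

```latex
B_i = E_i + \rho_i \sum_{j} F_{ij}\, B_j
```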

Ray tracing (graphics)

In computer graphics, ray tracing is a rendering technique for generating an image by tracing the path of light through pixels in an image plane and simulating the effects of its encounters with virtual objects. The technique is capable of producing a very high degree of visual realism, usually higher than that of typical scanline rendering methods, but at a greater computational cost. This makes ray tracing best suited for applications where taking a relatively long time to render a frame can be tolerated, such as still images and film and television visual effects, and less well suited for real-time applications such as video games where speed is critical. Ray tracing is capable of simulating a wide variety of optical effects, such as reflection and refraction, scattering, and dispersion phenomena.
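At the core of any ray tracer is a ray-primitive intersection test. The sketch below (a standalone, hypothetical function, not taken from any particular renderer) intersects a ray with a sphere by solving the quadratic |o + t d - c|^2 = R^2 for the ray parameter t:

```python
import math

def intersect_sphere(origin, direction, center, radius):
    """Return the smallest positive t at which origin + t * direction hits the
    sphere, or None if the ray misses. `direction` is assumed normalized."""
    oc = [o - c for o, c in zip(origin, center)]
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c              # a == 1 for a normalized direction
    if disc < 0.0:
        return None                     # no real roots: the ray misses
    sqrt_disc = math.sqrt(disc)
    for t in ((-b - sqrt_disc) / 2.0, (-b + sqrt_disc) / 2.0):
        if t > 1e-6:                    # ignore hits at or behind the origin
            return t
    return None
```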

Related Research Articles

Rendering (computer graphics)

Rendering or image synthesis is the automatic process of generating a photorealistic or non-photorealistic image from a 2D or 3D model by means of computer programs. Also, the results of displaying such a model can be called a render. A scene file contains objects in a strictly defined language or data structure; it would contain geometry, viewpoint, texture, lighting, and shading information as a description of the virtual scene. The data contained in the scene file is then passed to a rendering program to be processed and output to a digital image or raster graphics image file. The term "rendering" may be by analogy with an "artist's rendering" of a scene.

Shading

Shading refers to depicting depth perception in 3D models or illustrations by varying levels of darkness.

Ray casting

Ray casting is the use of ray–surface intersection tests to solve a variety of problems in computer graphics and computational geometry. The term was first used in computer graphics in a 1982 paper by Scott Roth to describe a method for rendering constructive solid geometry models.

Rendering equation

In computer graphics, the rendering equation is an integral equation in which the equilibrium radiance leaving a point is given as the sum of emitted plus reflected radiance under a geometric optics approximation. It was simultaneously introduced into computer graphics by David Immel et al. and James Kajiya in 1986. The various realistic rendering techniques in computer graphics attempt to solve this equation.
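In its usual form (standard notation: x is a surface point, n the surface normal at x, \omega_o the outgoing direction, \omega_i an incoming direction on the hemisphere \Omega, and f_r the BRDF), the equation reads:

```latex
L_o(x, \omega_o) = L_e(x, \omega_o)
  + \int_{\Omega} f_r(x, \omega_i, \omega_o)\, L_i(x, \omega_i)\,(\omega_i \cdot n)\,\mathrm{d}\omega_i
```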

Ambient occlusion

In computer graphics, ambient occlusion is a shading and rendering technique used to calculate how exposed each point in a scene is to ambient lighting. For example, the interior of a tube is typically more occluded than the exposed outer surfaces, and the deeper you go inside the tube, the more occluded the lighting becomes. Ambient occlusion can be seen as an accessibility value that is calculated for each surface point. In scenes with open sky this is done by estimating the amount of visible sky for each point, while in indoor environments only objects within a certain radius are taken into account and the walls are assumed to be the origin of the ambient light. The result is a diffuse, non-directional shading effect that casts no clear shadows but that darkens enclosed and sheltered areas and can affect the rendered image's overall tone. It is often used as a post-processing effect.
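One common way to define the accessibility value mentioned above (a standard formulation, with V(p, \omega) a visibility term that is 1 when the ray from p in direction \omega is unoccluded within some radius and 0 otherwise) is:

```latex
A(p) = \frac{1}{\pi} \int_{\Omega} V(p, \omega)\,(\omega \cdot n)\,\mathrm{d}\omega
```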

Beam tracing is an algorithm to simulate wave propagation. It was developed in the context of computer graphics to render 3D scenes, but it has been also used in other similar areas such as acoustics and electromagnetism simulations.

Path tracing

Path tracing is a computer graphics Monte Carlo method of rendering images of three-dimensional scenes such that the global illumination is faithful to reality. Fundamentally, the algorithm integrates over all the illuminance arriving at a single point on the surface of an object. This illuminance is then reduced by a surface reflectance function (BRDF) to determine how much of it will go towards the viewpoint camera. This integration procedure is repeated for every pixel in the output image. When combined with physically accurate models of surfaces, accurate models of real light sources, and optically correct cameras, path tracing can produce still images that are indistinguishable from photographs.
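The sketch below illustrates the basic recursive estimator with Russian roulette termination. The `scene` object and its `intersect` and `brdf_sample` methods are hypothetical placeholders (no real library is assumed); they only give the sketch a shape.

```python
import random

def radiance(scene, origin, direction, depth=0, rr_start=3, rr_prob=0.8):
    """One-sample path-tracing estimate of radiance along a ray.
    Assumes scene.intersect(origin, direction) returns None or a hit with
    .position, .emitted (RGB tuple) and .brdf_sample(direction) yielding
    (new_direction, throughput) for one importance sample."""
    hit = scene.intersect(origin, direction)
    if hit is None:
        return (0.0, 0.0, 0.0)                  # ray escaped the scene
    # Russian roulette: probabilistically terminate long paths without bias.
    scale = 1.0
    if depth >= rr_start:
        if random.random() > rr_prob:
            return hit.emitted
        scale = 1.0 / rr_prob
    # Continue the path in one sampled direction, weighted by the BRDF throughput.
    new_dir, throughput = hit.brdf_sample(direction)
    incoming = radiance(scene, hit.position, new_dir, depth + 1, rr_start, rr_prob)
    return tuple(e + scale * t * li
                 for e, t, li in zip(hit.emitted, throughput, incoming))
```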

Caustic (optics)

In optics, a caustic or caustic network is the envelope of light rays reflected or refracted by a curved surface or object, or the projection of that envelope of rays on another surface. The caustic is a curve or surface to which each of the light rays is tangent, defining a boundary of an envelope of rays as a curve of concentrated light. In a rendered or photographed scene, caustics appear as patches of light or as their bright edges. These shapes often have cusp singularities.

3D rendering

3D rendering is the 3D computer graphics process of automatically converting 3D wire frame models into 2D images on a computer. 3D renders may include photorealistic effects or non-photorealistic rendering.

Bidirectional scattering distribution function

The definition of the BSDF is not well standardized. The term was probably introduced in 1980 by Bartell, Dereniak, and Wolfe. Most often it is used to name the general mathematical function which describes the way in which the light is scattered by a surface. However, in practice this phenomenon is usually split into the reflected and transmitted components, which are then treated separately as BRDF and BTDF.
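The split mentioned above is often written (standard radiometric notation, with \theta_i the angle between the incoming direction and the surface normal) as a sum of a reflected and a transmitted part, each defined as the ratio of outgoing differential radiance to incoming differential irradiance; for the reflected part:

```latex
\mathrm{BSDF} = \mathrm{BRDF} + \mathrm{BTDF},
\qquad
f_r(\omega_i, \omega_o) = \frac{\mathrm{d}L_o(\omega_o)}{L_i(\omega_i)\,\cos\theta_i\,\mathrm{d}\omega_i}
```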

Computer graphics lighting refers to the simulation of light in computer graphics. This simulation can be extremely accurate, as in an application like Radiance, which attempts to track the energy flow of light interacting with materials using radiosity techniques; alternatively, the simulation can simply be inspired by the physics of light, as is the case with non-photorealistic rendering. In both cases, a shading model is used to describe how surfaces respond to light. Between these two extremes, there are many different rendering approaches which can be employed to achieve almost any desired visual result.

Reflection (computer graphics)

Reflection in computer graphics is used to emulate reflective objects like mirrors and shiny surfaces.

3D computer graphics

3D computer graphics, or three-dimensional computer graphics, are graphics that use a three-dimensional representation of geometric data that is stored in the computer for the purposes of performing calculations and rendering 2D images. Such images may be stored for viewing later or displayed in real time.

Unbiased rendering

In computer graphics, unbiased rendering refers to a rendering technique that does not introduce any systematic error, or bias, into the radiance approximation. Because of this, it is often used to generate the reference image to which other rendering techniques are compared. Mathematically speaking, the expected value of the unbiased estimator will always be the population mean, for any number of observations. Error found in an unbiased rendering will be due to variance, which manifests itself as high-frequency noise in the resultant image. Variance is reduced by a factor of 1/N and standard deviation by a factor of 1/√N for N data points, meaning that four times as many data points are needed to halve the standard deviation of the error. This makes unbiased rendering techniques less attractive for realtime or interactive rate applications. Conversely, an image produced by an unbiased renderer that appears smooth and noiseless is probabilistically correct.
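For a Monte Carlo estimator that averages N independent samples X_k drawn from a probability density p (standard notation, not specific to any particular renderer), unbiasedness and the error behaviour quoted above can be written as:

```latex
F_N = \frac{1}{N}\sum_{k=1}^{N} \frac{f(X_k)}{p(X_k)},
\qquad
\mathbb{E}[F_N] = \int f(x)\,\mathrm{d}x,
\qquad
\operatorname{Var}[F_N] = \frac{\sigma^2}{N},
\qquad
\sigma[F_N] = \frac{\sigma}{\sqrt{N}}
```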

Computer graphics (computer science)

Computer graphics is a sub-field of computer science which studies methods for digitally synthesizing and manipulating visual content. Although the term often refers to the study of three-dimensional computer graphics, it also encompasses two-dimensional graphics and image processing.

Volumetric path tracing is a method for rendering images in computer graphics which was first introduced by Lafortune and Willems. This method enhances the rendering of the lighting in a scene by extending the path tracing method with the effect of light scattering. It is used for photorealistic effects of participating media like fire, explosions, smoke, clouds, fog or soft shadows.
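A key quantity in rendering such participating media (a standard definition, not specific to the method described above) is the transmittance along a ray, which attenuates radiance between two points x and y according to the Beer-Lambert law with extinction coefficient \sigma_t:

```latex
T(x, y) = \exp\!\left(-\int_{0}^{\lVert y - x\rVert} \sigma_t\bigl(x + s\,\omega\bigr)\,\mathrm{d}s\right),
\qquad
\omega = \frac{y - x}{\lVert y - x \rVert}
```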