Precomputed Radiance Transfer

Precomputed Radiance Transfer (PRT) is a computer graphics technique for rendering a scene in real time, in which the scene's complex light interactions are precomputed offline to save time at run time. Radiosity methods can also be used to determine the diffuse lighting of a scene; PRT, however, offers a way to change the lighting environment dynamically. [1]

In essence, PRT computes the illumination of a point as a linear combination of the incident lighting, with the coefficients of that combination precomputed and stored per point. An efficient basis must be used to encode this data, such as spherical harmonics.
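
A minimal sketch of the runtime step, assuming the offline pass has already projected both the per-point transfer function and the environment lighting into the same nine spherical-harmonic coefficients (the names `prt_shade`, `transfer_coeffs`, and `light_coeffs` are illustrative, not from any particular implementation):

```python
# Runtime part of diffuse PRT: outgoing radiance at a surface point is
# the dot product of the precomputed transfer vector with the light
# vector, both expressed in the same spherical-harmonic basis.

def prt_shade(transfer_coeffs, light_coeffs):
    """Return outgoing radiance for one color channel at one point.

    transfer_coeffs -- SH projection of the point's transfer function
                       (cosine term, visibility, interreflections),
                       baked offline.
    light_coeffs    -- SH projection of the current lighting,
                       recomputed whenever the environment changes.
    """
    return sum(t * l for t, l in zip(transfer_coeffs, light_coeffs))
```

Relighting the whole scene only requires new `light_coeffs`; the expensive transport (shadows and interreflections) stays baked inside `transfer_coeffs`.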

When spherical harmonics are used to approximate the light transport function, only low-frequency effects can be handled with a reasonable number of parameters. Ren Ng et al. [2] extended this work to handle higher-frequency shadows by replacing spherical harmonics with non-linear wavelets.

Teemu Mäki-Patola gives a clear introduction to the topic based on the work of Peter-Pike Sloan et al. [3] At SIGGRAPH 2005, a detailed course on PRT was given. [4]

Related Research Articles

Rendering (computer graphics): Process of generating an image from a model

Rendering or image synthesis is the process of generating a photorealistic or non-photorealistic image from a 2D or 3D model by means of a computer program. The resulting image is referred to as the render. Multiple models can be defined in a scene file containing objects in a strictly defined language or data structure. The scene file contains geometry, viewpoint, texture, lighting, and shading information describing the virtual scene. The data contained in the scene file is then passed to a rendering program to be processed and output to a digital image or raster graphics image file. The term "rendering" is analogous to the concept of an artist's impression of a scene. The term "rendering" is also used to describe the process of calculating effects in a video editing program to produce the final video output.

Global illumination: Group of rendering algorithms used in 3D computer graphics

Global illumination (GI), or indirect illumination, is a group of algorithms used in 3D computer graphics that are meant to add more realistic lighting to 3D scenes. Such algorithms take into account not only the light that comes directly from a light source, but also subsequent cases in which light rays from the same source are reflected by other surfaces in the scene, whether reflective or not.

Radiosity (computer graphics): Computer graphics rendering method using diffuse reflection

In 3D computer graphics, radiosity is an application of the finite element method to solving the rendering equation for scenes with surfaces that reflect light diffusely. Unlike rendering methods that use Monte Carlo algorithms, which handle all types of light paths, typical radiosity methods only account for paths which leave a light source and are reflected diffusely some number of times before hitting the eye. Radiosity is a global illumination algorithm in the sense that the illumination arriving on a surface comes not just directly from the light sources, but also from other surfaces reflecting light. Radiosity is viewpoint independent, which increases the calculations involved, but makes the results useful for all viewpoints.
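
The discrete form of this finite-element formulation is the classical radiosity equation; a standard statement of it, for reference:

```latex
% Radiosity of patch i: its own emission plus the reflected fraction
% of the radiosity arriving from every other patch j.
B_i = E_i + \rho_i \sum_j F_{ij} B_j
```

Here \(B_i\) is the radiosity of patch \(i\), \(E_i\) its emission, \(\rho_i\) its diffuse reflectance, and \(F_{ij}\) the form factor coupling patches \(i\) and \(j\).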

Bump mapping: Texturing technique for bumps/wrinkles in computer graphics

Bump mapping is a texture mapping technique in computer graphics for simulating bumps and wrinkles on the surface of an object. This is achieved by perturbing the surface normals of the object and using the perturbed normal during lighting calculations. The result is an apparently bumpy surface rather than a smooth surface, although the surface of the underlying object is not changed. Bump mapping was introduced by James Blinn in 1978.
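
A minimal sketch of the idea in Python, assuming a scalar height map sampled on an integer grid (`height` is a caller-supplied function; all names are illustrative):

```python
# Bump mapping in miniature: derive a perturbed normal from a height
# map by finite differences and use it for Lambertian shading instead
# of the flat geometric normal (+Z here). Vectors are (x, y, z) tuples.
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def perturbed_normal(height, x, y, scale=1.0):
    """Tilt a flat +Z normal by the local gradient of the height map."""
    dhdx = (height(x + 1, y) - height(x - 1, y)) * 0.5
    dhdy = (height(x, y + 1) - height(x, y - 1)) * 0.5
    return normalize((-scale * dhdx, -scale * dhdy, 1.0))

def lambert(normal, light_dir):
    """Diffuse intensity; clamped so unlit areas never go negative."""
    return max(0.0, sum(a * b for a, b in zip(normal, normalize(light_dir))))
```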

Shading: Depicting depth through varying levels of darkness

Shading refers to the depiction of depth perception in 3D models or illustrations by varying the level of darkness. Shading tries to approximate local behavior of light on the object's surface and is not to be confused with techniques of adding shadows, such as shadow mapping or shadow volumes, which fall under global behavior of light.

Frequency domain: Signal representation

In mathematics, physics, electronics, control systems engineering, and statistics, the frequency domain refers to the analysis of mathematical functions or signals with respect to frequency, rather than time, as in time series. Put simply, a time-domain graph shows how a signal changes over time, whereas a frequency-domain graph shows how the signal is distributed within different frequency bands over a range of frequencies. A complex-valued frequency-domain representation consists of both the magnitude and the phase of a set of sinusoids at the frequency components of the signal. Although it is common to refer to the magnitude portion as the frequency response of a signal, the phase portion is required to uniquely define the signal.
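
As a concrete illustration, here is a minimal (unoptimized) discrete Fourier transform, which re-expresses time-domain samples as per-frequency complex coefficients:

```python
# Naive discrete Fourier transform: re-expresses N time-domain samples
# as N complex coefficients, one per frequency bin.
import cmath

def dft(samples):
    n = len(samples)
    return [sum(samples[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n))
            for k in range(n)]

# abs(c) gives the magnitude of a coefficient c and cmath.phase(c) its
# phase; as noted above, both are needed to uniquely define the signal.
```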

A light field is a vector function that describes the amount of light flowing in every direction through every point in a space. The space of all possible light rays is given by the five-dimensional plenoptic function, and the magnitude of each ray is given by its radiance. Michael Faraday was the first to propose that light should be interpreted as a field, much like the magnetic fields on which he had been working. The term light field was coined by Andrey Gershun in a classic 1936 paper on the radiometric properties of light in three-dimensional space.

In computing, D3DX is a high-level API library written to supplement Microsoft's Direct3D graphics API. The D3DX library was introduced in Direct3D 7 and subsequently improved in Direct3D 9. It provides classes for common calculations on vectors, matrices and colors, calculating look-at and projection matrices, spline interpolations, and several more complicated tasks, such as compiling or assembling shaders used for 3D graphics programming, compressed skeletal-animation storage and matrix stacks. There are several functions that provide complex operations over 3D meshes like tangent-space computation, mesh simplification, precomputed radiance transfer, optimizing for vertex cache friendliness and strip reordering, and generators for 3D text meshes. 2D features include classes for drawing screen-space lines and text, and sprite-based particle systems. Spatial functions include various intersection routines, conversion from/to barycentric coordinates, and bounding box and sphere generators.

Lightmap: Data structure used in lightmapping

A lightmap is a data structure used in lightmapping, a form of surface caching in which the brightness of surfaces in a virtual scene is pre-calculated and stored in texture maps for later use. Lightmaps are most commonly applied to static objects in applications that use real-time 3D computer graphics, such as video games, in order to provide lighting effects such as global illumination at a relatively low computational cost.
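
A toy sketch of the offline baking step, assuming caller-supplied functions for mapping texels to surface points and for the expensive lighting evaluation (all names are illustrative):

```python
# Offline lightmap bake: evaluate lighting once per texel and store the
# result; at run time the renderer just samples this texture, so static
# global illumination costs little more than a texture fetch.

def bake_lightmap(width, height, texel_to_point, irradiance_at):
    """texel_to_point(u, v) -> (world position, normal) for a texel;
    irradiance_at(position, normal) does the expensive lighting
    evaluation (e.g. by path tracing or radiosity)."""
    lightmap = [[0.0] * width for _ in range(height)]
    for v in range(height):
        for u in range(width):
            position, normal = texel_to_point(u, v)
            lightmap[v][u] = irradiance_at(position, normal)
    return lightmap
```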

Reflection mapping

In computer graphics, environment mapping, or reflection mapping, is an efficient image-based lighting technique for approximating the appearance of a reflective surface by means of a precomputed texture. The texture is used to store the image of the distant environment surrounding the rendered object.
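
A minimal sketch of the runtime lookup for a latitude-longitude environment map, assuming unit-length input vectors and one common texture-coordinate convention:

```python
# Environment-map lookup: reflect the view direction about the surface
# normal, then convert the reflected direction (y up) into (u, v)
# coordinates of a latitude-longitude environment texture.
import math

def reflect(incident, normal):
    """Reflect an incident direction (pointing toward the surface)."""
    d = sum(i * n for i, n in zip(incident, normal))
    return tuple(i - 2.0 * d * n for i, n in zip(incident, normal))

def latlong_uv(direction):
    x, y, z = direction
    u = 0.5 + math.atan2(x, -z) / (2.0 * math.pi)
    v = math.acos(max(-1.0, min(1.0, y))) / math.pi
    return u, v
```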

High-dynamic-range rendering: Rendering of computer graphics scenes by using lighting calculations done in high dynamic range

High-dynamic-range rendering, also known as high-dynamic-range lighting, is the rendering of computer graphics scenes by using lighting calculations done in high dynamic range (HDR). This allows preservation of details that may be lost due to limiting contrast ratios. Video games and computer-generated movies and special effects benefit from this, as it produces more realistic scenes than simpler lighting models.

Tone mapping: Image processing technique

Tone mapping is a technique used in image processing and computer graphics to map one set of colors to another to approximate the appearance of high-dynamic-range (HDR) images in a medium that has a more limited dynamic range. Print-outs, CRT or LCD monitors, and projectors all have a limited dynamic range that is inadequate to reproduce the full range of light intensities present in natural scenes. Tone mapping addresses the problem of strong contrast reduction from the scene radiance to the displayable range while preserving the image details and color appearance important to appreciate the original scene content.
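
One of the simplest global tone-mapping operators, due to Reinhard et al., illustrates the contrast-compression idea:

```python
# Reinhard's simple global operator: compresses unbounded scene
# luminance into [0, 1) while leaving dark values almost untouched.

def reinhard(luminance):
    return luminance / (1.0 + luminance)

# A luminance of 0.05 maps to ~0.048 (nearly unchanged), while 50.0
# maps to ~0.98 (heavily compressed into the displayable range).
```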

In computer graphics, per-pixel lighting refers to any technique for lighting an image or scene that calculates illumination for each pixel on a rendered image. This is in contrast to other popular methods of lighting such as vertex lighting, which calculates illumination at each vertex of a 3D model and then interpolates the resulting values over the model's faces to calculate the final per-pixel color values.
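
A minimal sketch of the contrast for a single Lambertian triangle, using barycentric weights (`bary`) to stand in for the hardware interpolator; normals and the light direction are assumed unit length, and all names are illustrative:

```python
# Per-vertex lighting shades the three corners and interpolates the
# resulting colors; per-pixel lighting interpolates the inputs (here,
# the normals) and runs the shading model at every pixel.
import math

def shade(normal, light_dir):
    """Any local lighting model; Lambertian diffuse for simplicity."""
    return max(0.0, sum(a * b for a, b in zip(normal, light_dir)))

def per_vertex(bary, vertex_normals, light_dir):
    colors = [shade(n, light_dir) for n in vertex_normals]
    return sum(w * c for w, c in zip(bary, colors))

def per_pixel(bary, vertex_normals, light_dir):
    # Interpolate the normal, renormalize, then shade at this pixel.
    n = [sum(w * vn[i] for w, vn in zip(bary, vertex_normals))
         for i in range(3)]
    length = math.sqrt(sum(c * c for c in n))
    return shade([c / length for c in n], light_dir)
```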

Spherical harmonic (SH) lighting is a family of real-time rendering techniques that can produce highly realistic shading and shadowing with comparatively little overhead. All SH lighting techniques involve replacing parts of standard lighting equations with spherical functions that have been projected into frequency space using the spherical harmonics as a basis. To take a simple example, a cube map used for environment mapping might be reduced to just nine SH coefficients if preserving high-frequency detail is not a concern.
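
A sketch of the reduction mentioned above: projecting a directional environment function onto nine coefficients by Monte Carlo integration. The constants are the standard real spherical-harmonic basis for bands 0–2; `env` is a caller-supplied function:

```python
# Project an environment (a function of direction) onto the first nine
# real spherical harmonics by Monte Carlo integration over the sphere.
import math, random

def sh9(x, y, z):
    """Real SH basis, bands 0-2, evaluated at a unit direction."""
    return [
        0.282095,                     # Y(0, 0)
        0.488603 * y,                 # Y(1,-1)
        0.488603 * z,                 # Y(1, 0)
        0.488603 * x,                 # Y(1, 1)
        1.092548 * x * y,             # Y(2,-2)
        1.092548 * y * z,             # Y(2,-1)
        0.315392 * (3 * z * z - 1),   # Y(2, 0)
        1.092548 * x * z,             # Y(2, 1)
        0.546274 * (x * x - y * y),   # Y(2, 2)
    ]

def project_sh9(env, samples=4096):
    """env(x, y, z) -> radiance from that direction; returns 9 coefficients."""
    coeffs = [0.0] * 9
    for _ in range(samples):
        # Uniform random direction on the sphere.
        z = random.uniform(-1.0, 1.0)
        phi = random.uniform(0.0, 2.0 * math.pi)
        r = math.sqrt(1.0 - z * z)
        x, y = r * math.cos(phi), r * math.sin(phi)
        value = env(x, y, z)
        for i, basis in enumerate(sh9(x, y, z)):
            coeffs[i] += value * basis
    # Each sample accounts for 4*pi/samples steradians.
    weight = 4.0 * math.pi / samples
    return [c * weight for c in coeffs]
```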

Interactive skeleton-driven simulation is a scientific computer simulation technique used to approximate realistic physical deformations of dynamic bodies in real time. It involves using elastic dynamics and mathematical optimizations to determine body shapes during motion and interaction with forces. It has various applications within realistic simulations for medicine, 3D computer animation and virtual reality.

In functional analysis, compactly supported wavelets derived from Legendre polynomials are termed Legendre wavelets or spherical harmonic wavelets. Legendre functions have widespread applications in which a spherical coordinate system is appropriate. As with many wavelets, there is no nice analytical formula describing these harmonic spherical wavelets. The low-pass filter associated with Legendre multiresolution analysis is a finite impulse response (FIR) filter.

Scientific Computing and Imaging Institute: Research institute at the University of Utah

The Scientific Computing and Imaging (SCI) Institute is a permanent research institute at the University of Utah that focuses on the development of new scientific computing and visualization techniques, tools, and systems with primary applications to biomedical engineering. The SCI Institute is noted worldwide in the visualization community for contributions by faculty, alumni, and staff. Faculty are associated primarily with the School of Computing, Department of Bioengineering, Department of Mathematics, and Department of Electrical and Computer Engineering, with auxiliary faculty in the Medical School and School of Architecture.

PICA200 is a graphics processing unit (GPU) designed by Digital Media Professionals Inc. (DMP), a Japanese GPU design startup company, for use in embedded devices such as vehicle systems, mobile phones, cameras, and game consoles. The PICA200 is an IP core which can be licensed to other companies to incorporate into their SoCs. It was most notably licensed for use in the Nintendo 3DS.

Michael F. Cohen: American computer scientist

Michael F. Cohen is an American computer scientist and researcher in computer graphics. He is currently a Senior Fellow at Meta in their Generative AI Group. He was a senior research scientist at Microsoft Research for 21 years until he joined Facebook in 2015. In 1998, he received the ACM SIGGRAPH CG Achievement Award for his work in developing radiosity methods for realistic image synthesis. He was elected a Fellow of the Association for Computing Machinery in 2007 for his "contributions to computer graphics and computer vision." In 2019, he received the ACM SIGGRAPH Steven A. Coons Award for Outstanding Creative Contributions to Computer Graphics for "his groundbreaking work in numerous areas of research—radiosity, motion simulation & editing, light field rendering, matting & compositing, and computational photography".

A neural radiance field (NeRF) is a method based on deep learning for reconstructing a three-dimensional representation of a scene from sparse two-dimensional images. The NeRF model enables learning of novel view synthesis, scene geometry, and the reflectance properties of the scene. Additional scene properties such as camera poses may also be jointly learned. NeRF enables rendering of photorealistic views from novel viewpoints. First introduced in 2020, it has since gained significant attention for its potential applications in computer graphics and content creation.

References

  1. "Precomputed Radiance Transfer for Real-Time Rendering in Dynamic, Low-Frequency Lighting Environments" (PDF), harvard.edu: Peter-Pike Sloan, Jan Kautz, John Snyder , retrieved 22 September 2019
  2. Ng, Ren; Ramamoorthi, Ravi; Hanrahan, Pat (2003). "All-Frequency Shadows Using Non-Linear Wavelet Lighting Approximation" (PDF). ACM Transactions on Graphics. 22 (3): 376–381. doi:10.1145/882262.882280 . Retrieved 2023-08-23.
  3. Teemu Mäki-Patola (2003-05-05). "Precomputed Radiance Transfer". CiteSeerX   10.1.1.131.6778 .
  4. Jan Kautz; Peter-Pike Sloan; Jaakko Lehtinen. "Precomputed Radiance Transfer: Theory and Practice". SIGGRAPH 2005 Courses. Retrieved 2009-02-25.