Subsurface scattering

Real-world subsurface scattering of light in a photograph of a human hand
Computer-generated subsurface scattering in Blender

Subsurface scattering (SSS), also known as subsurface light transport (SSLT), [1] is a mechanism of light transport in which light that penetrates the surface of a translucent object is scattered by interacting with the material and exits the surface at a different point. The light generally penetrates the surface and is reflected a number of times at irregular angles inside the material before passing back out at a different angle than it would have had if it had been reflected directly off the surface.

Subsurface scattering is important for realistic 3D computer graphics, being necessary for the rendering of materials such as marble, skin, leaves, wax and milk. If subsurface scattering is not implemented, the material may look unnatural, like plastic or metal.

Rendering techniques

Direct surface scattering (left) plus subsurface scattering (middle) creates the final image on the right.

To improve rendering efficiency, many real-time computer graphics algorithms compute reflectance only at the *surface* of an object. In reality, many materials are slightly translucent: light enters the surface and is absorbed, scattered and re-emitted, potentially at a different point. Skin is a good example: only about 6% of reflectance is direct, while 94% comes from subsurface scattering. [2] An inherent property of semitransparent materials is absorption: the further light travels through the material, the greater the proportion absorbed. To simulate this effect, a measure of the distance the light has traveled through the material must be obtained.
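As a rough illustration (a Beer–Lambert-style model assuming a homogeneous medium with a constant absorption coefficient, which is a simplification rather than the model used by any particular renderer), the fraction of light transmitted after travelling a distance d through a material with absorption coefficient σ_a is T(d) = e^{−σ_a d}, so longer paths transmit exponentially less light; this is why the travelled distance is the key quantity to estimate.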

Depth map based SSS

Depth estimation using depth maps

One method of estimating this distance is to use depth maps, [3] in a manner similar to shadow mapping. The scene is first rendered from the light's point of view into a depth map, so that the distance to the nearest surface is stored. The depth map is then projected onto the scene using standard projective texture mapping and the scene re-rendered. In this pass, when shading a given point, the distance from the light to the point where the ray entered the surface can be obtained by a simple texture lookup. Subtracting this value from the distance from the light to the point where the ray exited the object gives an estimate of the distance the light has traveled through the object.[citation needed]
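A minimal sketch of this estimate, written here in Python/NumPy rather than shader code; all names (estimate_thickness, light_view_proj, and so on) and conventions are illustrative assumptions rather than part of any published implementation:

```python
import numpy as np

def estimate_thickness(depth_map, light_pos, light_view_proj, world_pos):
    """Estimate how far light travelled through the object to reach world_pos.

    Assumptions (illustrative only): depth_map stores the linear distance from
    the light to the nearest surface, rendered from the light's point of view,
    and light_view_proj is a 4x4 matrix mapping world space to the light's
    clip space.
    """
    # Project the shaded (exit) point into the light's clip space, as in shadow mapping.
    p = light_view_proj @ np.append(world_pos, 1.0)
    p = p / p[3]                                   # perspective divide -> NDC in [-1, 1]

    # Standard projective texture lookup: NDC x/y -> texel coordinates of the depth map.
    h, w = depth_map.shape
    u = int(np.clip((p[0] * 0.5 + 0.5) * (w - 1), 0, w - 1))
    v = int(np.clip((p[1] * 0.5 + 0.5) * (h - 1), 0, h - 1))
    dist_in = depth_map[v, u]                      # light -> point where the ray entered

    # light -> point where the ray exits (the point currently being shaded)
    dist_out = np.linalg.norm(np.asarray(world_pos) - np.asarray(light_pos))

    # The difference approximates the distance the light travelled inside the object.
    return max(dist_out - dist_in, 0.0)
```

In a real renderer the same arithmetic would live in a pixel shader, with the depth map bound as a texture and the projection handled by the projective texturing path.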

The measure of distance obtained by this method can be used in several ways. One is to use it to index directly into an artist-created 1D texture that falls off exponentially with distance. Combined with other, more traditional lighting models, this approach allows the creation of different materials such as marble, jade and wax.[citation needed]
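Continuing the sketch above, such a 1D falloff texture could be generated and applied as follows; the absorption values and ramp length are invented for illustration, since in practice the ramp would be painted or fitted per material:

```python
import numpy as np

# Hypothetical artist-authored 1D ramp: transmitted colour as a function of thickness.
# Here it is built procedurally with a per-channel exponential falloff.
RAMP_SIZE = 256
MAX_THICKNESS = 0.05                          # world-space range covered by the ramp (assumption)
sigma = np.array([8.0, 40.0, 90.0])           # absorption per channel: red penetrates deepest
t = np.linspace(0.0, MAX_THICKNESS, RAMP_SIZE)[:, None]
falloff_ramp = np.exp(-sigma[None, :] * t)    # shape (RAMP_SIZE, 3)

def transmitted_light(thickness, light_color):
    """Scale incoming light by the ramp entry selected by the estimated thickness."""
    i = int(np.clip(thickness / MAX_THICKNESS, 0.0, 1.0) * (RAMP_SIZE - 1))
    return np.asarray(light_color) * falloff_ramp[i]
```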

Problems can arise if models are not convex, but depth peeling [4] can be used to avoid the issue. Similarly, depth peeling can be used to account for varying densities beneath the surface, such as bone or muscle, giving a more accurate scattering model.

As can be seen in renderings of a wax head produced with this technique, light is not diffused when passing through the object; features on the back show through clearly. One solution is to take multiple samples at different points on the surface of the depth map. Alternatively, a different approach to approximation can be used, known as texture-space diffusion.[citation needed]
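One way to realize the multi-sample suggestion, again as an illustrative Python sketch building on the earlier estimate_thickness example (the sample count and jitter radius are arbitrary choices, not recommended values):

```python
import numpy as np

def blurred_thickness(depth_map, u, v, dist_out, radius=2, num_samples=8,
                      rng=np.random.default_rng(0)):
    """Average several thickness estimates taken from nearby texels of the depth map.

    (u, v) is the projected entry texel (as in the earlier sketch) and dist_out
    the light-to-exit-point distance; all parameter names are illustrative.
    """
    h, w = depth_map.shape
    total = 0.0
    for _ in range(num_samples):
        du, dv = rng.integers(-radius, radius + 1, size=2)   # jitter the lookup position
        iu = int(np.clip(u + du, 0, w - 1))
        iv = int(np.clip(v + dv, 0, h - 1))
        total += max(dist_out - depth_map[iv, iu], 0.0)
    return total / num_samples
```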

Texture space diffusion

One of the more obvious effects of subsurface scattering is a general blurring of the diffuse lighting. Rather than arbitrarily modifying the diffuse function, diffusion can be modeled more accurately by simulating it in texture space. This technique was pioneered for rendering faces in The Matrix Reloaded, [5] but is also used in real-time rendering techniques.

The method unwraps the mesh of an object using a vertex shader, first calculating the lighting based on the original vertex coordinates. The vertices are then remapped using the UV texture coordinates as the screen position of the vertex, suitably transformed from the [0, 1] range of texture coordinates to the [-1, 1] range of normalized device coordinates. By lighting the unwrapped mesh in this manner, we obtain a 2D image representing the lighting on the object, which can then be processed and reapplied to the model as a light map.

To simulate diffusion, the light map texture can simply be blurred. Rendering the lighting to a lower-resolution texture in itself provides a certain amount of blurring. The amount of blurring required to accurately model subsurface scattering in skin is still under active research, but performing only a single blur poorly models the true effects. [6] To emulate the wavelength-dependent nature of diffusion, the samples used during the (Gaussian) blur can be weighted by channel. This is somewhat of an artistic process; for human skin, the broadest scattering is in red, then green, with very little scattering in blue.[citation needed]
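A per-channel blur of the light map is straightforward to sketch. The version below uses SciPy's gaussian_filter, and the per-channel radii are illustrative values only, not measured skin parameters:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def diffuse_lightmap(lightmap, blur_radii=(6.0, 3.0, 1.5)):
    """Blur a texture-space light map with a different Gaussian width per channel.

    lightmap   : (H, W, 3) array holding the diffuse lighting rendered into the
                 object's UV layout.
    blur_radii : per-channel standard deviations in texels (red widest, blue narrowest);
                 placeholder values for illustration.
    """
    out = np.empty_like(lightmap)
    for c, sigma in enumerate(blur_radii):
        out[..., c] = gaussian_filter(lightmap[..., c], sigma=sigma)
    return out
```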

A major benefit of this method is its independence from screen resolution: shading is performed only once per texel in the texture map, rather than for every pixel on the object. An obvious requirement is therefore that the object have a good UV mapping, in that each point on the texture must map to only one point of the object. Additionally, texture-space diffusion contributes to soft shadows, alleviating one cause of the lack of realism in shadow mapping.[citation needed]

References

  1. "Finish: Subsurface Light Transport". POV-Ray wiki . August 8, 2012.
  2. Krishnaswamy, A; Baronoski, GVG (2004). "A Biophysically-based Spectral Model of Light Interaction with Human Skin" (PDF). Computer Graphics Forum. Blackwell Publishing. 23 (3): 331. doi:10.1111/j.1467-8659.2004.00764.x. S2CID   5746906.
  3. Green, Simon (2004). "Real-time Approximations to Subsurface Scattering". GPU Gems. Addison-Wesley Professional: 263–278.
  4. Nagy, Z; Klein, R (2003). Depth-Peeling for Texture-based Volume Rendering (PDF). 11th Pacific Conference on Computer Graphics and Applications. pp. 429–433. doi:10.1109/PCCGA.2003.1238289. ISBN   0-7695-2028-6.
  5. Borshukov, G; Lewis, J. P. (2005). "Realistic human face rendering for "The Matrix Reloaded"" (PDF). Computer Graphics. ACM Press.
  6. d’Eon, E (2007). "Advanced Skin Rendering" (PDF). GDC 2007.