Reflection (computer graphics)

Ray traced model demonstrating specular reflection

Reflection in computer graphics is used to render reflective objects like mirrors and shiny surfaces.

Accurate reflections are commonly computed using ray tracing, whereas approximate reflections can usually be computed faster using simpler methods such as environment mapping. Reflections on shiny surfaces like wood or tile can add to the photorealistic effects of a 3D rendering.

Approaches to reflection rendering

Comparison of accurate reflections computed with path tracing (left), approximate reflections with environment mapping (middle), and screen space reflections (right)

Many techniques exist for rendering environment reflections, differing in precision as well as computational and implementation complexity. Combinations of these techniques are also possible.

Image-order rendering algorithms based on tracing rays of light, such as ray tracing or path tracing, typically compute accurate reflections on general surfaces, including multiple reflections and self-reflections. However, these algorithms are generally still too computationally expensive for real-time rendering (even though specialized hardware such as Nvidia RTX exists) and require a different rendering approach from the commonly used rasterization.
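The core step in tracing a reflection is mirroring the incoming ray direction about the surface normal at the hit point. The following C sketch shows that step; the vec3 type and helper functions are illustrative and not taken from any particular renderer.

```c
typedef struct { double x, y, z; } vec3;

static vec3   vsub(vec3 a, vec3 b)     { return (vec3){a.x - b.x, a.y - b.y, a.z - b.z}; }
static vec3   vscale(vec3 a, double s) { return (vec3){a.x * s, a.y * s, a.z * s}; }
static double vdot(vec3 a, vec3 b)     { return a.x * b.x + a.y * b.y + a.z * b.z; }

/* Mirror an incoming ray direction d about the unit surface normal n:
   r = d - 2 (d . n) n. This is the direction of the reflection ray that a
   ray tracer spawns when it hits a specular surface. */
static vec3 reflect_dir(vec3 d, vec3 n)
{
    return vsub(d, vscale(n, 2.0 * vdot(d, n)));
}
```

A recursive ray tracer would then trace a new ray from the hit point along reflect_dir() and add the returned color, scaled by the surface's reflectivity, to the local shading, stopping at some maximum recursion depth.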

Reflections on planar surfaces, such as planar mirrors or water surfaces, can be computed simply and accurately in real time with two-pass rendering: one pass for the viewer's view and one for the view in the mirror, usually with the help of the stencil buffer.[1] Some older video games achieved this effect with single-pass rendering by placing a whole mirrored copy of the scene behind a transparent plane representing the mirror.[2]
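A minimal sketch of the stencil-buffer approach in legacy OpenGL is shown below. The functions drawScene() and drawMirrorQuad() are hypothetical application callbacks, the mirror is assumed to lie in the z = 0 plane, and a stencil buffer is assumed to be allocated; a real implementation also has to clip geometry to the correct side of the mirror and handle the depth and blending of the mirror surface itself.

```c
#include <GL/gl.h>

void drawScene(void);       /* hypothetical: draws the world, excluding the mirror */
void drawMirrorQuad(void);  /* hypothetical: draws the mirror polygon at z = 0 */

void draw_with_planar_mirror(void)
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);

    /* Pass 1: mark the pixels covered by the mirror in the stencil buffer,
       without touching color or depth. */
    glEnable(GL_STENCIL_TEST);
    glStencilFunc(GL_ALWAYS, 1, 0xFF);
    glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
    glDepthMask(GL_FALSE);
    drawMirrorQuad();
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glDepthMask(GL_TRUE);

    /* Pass 2: draw the mirrored scene only where the stencil was marked.
       Reflecting the modelview matrix flips triangle winding, so the
       front-face convention is reversed for this pass. */
    glStencilFunc(GL_EQUAL, 1, 0xFF);
    glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
    glPushMatrix();
    glScalef(1.0f, 1.0f, -1.0f);   /* reflect about the z = 0 mirror plane */
    glFrontFace(GL_CW);
    drawScene();
    glFrontFace(GL_CCW);
    glPopMatrix();
    glDisable(GL_STENCIL_TEST);

    /* Pass 3: draw the unreflected scene for the viewer; the mirror quad
       itself would then typically be drawn semi-transparently on top. */
    drawScene();
}
```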

Reflections on non-planar (curved) surfaces are more challenging for real-time rendering. The main approaches in use include environment mapping, typically with static or dynamically updated cube maps, [3] [4] and screen space reflections. [5]
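As an illustration of how environment mapping turns a reflection direction into a texture lookup, the C sketch below selects the cube-map face and the (u, v) coordinate on that face for a given reflection vector, following the usual OpenGL cube-map convention; the function name and the 0..5 face numbering (+X, -X, +Y, -Y, +Z, -Z) are illustrative.

```c
#include <math.h>

typedef struct { double x, y, z; } vec3;

/* Given a nonzero reflection direction r, return the index of the cube-map
   face it points into (0..5 for +X, -X, +Y, -Y, +Z, -Z) and write the
   texture coordinate on that face to (*u, *v). */
int cube_face_uv(vec3 r, double *u, double *v)
{
    double ax = fabs(r.x), ay = fabs(r.y), az = fabs(r.z);
    double ma, sc, tc;
    int face;

    if (ax >= ay && ax >= az) {          /* X is the dominant axis */
        ma = ax;
        face = r.x > 0 ? 0 : 1;
        sc = r.x > 0 ? -r.z : r.z;
        tc = -r.y;
    } else if (ay >= az) {               /* Y is the dominant axis */
        ma = ay;
        face = r.y > 0 ? 2 : 3;
        sc = r.x;
        tc = r.y > 0 ? r.z : -r.z;
    } else {                             /* Z is the dominant axis */
        ma = az;
        face = r.z > 0 ? 4 : 5;
        sc = r.z > 0 ? r.x : -r.x;
        tc = -r.y;
    }
    *u = 0.5 * (sc / ma + 1.0);          /* map [-1, 1] to [0, 1] */
    *v = 0.5 * (tc / ma + 1.0);
    return face;
}
```

The reflected color is then read from the chosen face's texture at (u, v); on a GPU this lookup is performed directly by the cube-map sampling hardware.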

Types of reflection

Polished

- A polished reflection is an undisturbed reflection, like a mirror or chrome surface.

Blurry

- A blurry reflection is one in which tiny random bumps on the surface of the material scatter the reflected light, making the reflection blurry.

Metallic

- A reflection is metallic if the highlights and reflections retain the color of the reflective object.

Glossy

- This term is used inconsistently: in some renderers "glossiness" is a setting that is the opposite of blurriness (a low glossiness value gives a blurry reflection), while elsewhere "glossy" is used as a synonym for "blurred reflection". In this article, a glossy reflection means a reflection that is blurred.

Polished or mirror reflection

Mirror on wall rendered with 100% reflection

Mirrors are usually almost 100% reflective.

Metallic reflection

The large sphere on the left is blue with its reflection marked as metallic. The large sphere on the right is the same color but does not have the metallic property selected.

Normal (nonmetallic) objects reflect light and colors in the original color of the objects being reflected. Metallic objects reflect light and colors tinted by the color of the metallic object itself.
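A minimal sketch of this shading difference, assuming a simple RGB color struct: a nonmetallic surface passes the color arriving along the reflection ray through unchanged, while a metallic one filters it through its own base color. The names are illustrative, not any particular renderer's shading model.

```c
typedef struct { double r, g, b; } color;

/* Tint the color arriving along the reflection ray. A nonmetallic surface
   returns it unchanged; a metallic surface multiplies it component-wise by
   its own base color, so reflections take on the metal's hue. */
color shade_reflection(color reflected, color base, int metallic)
{
    if (!metallic)
        return reflected;
    return (color){ reflected.r * base.r,
                    reflected.g * base.g,
                    reflected.b * base.b };
}
```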

Blurry reflection

The large sphere on the left has sharpness set to 100%. The sphere on the right has sharpness set to 50% which creates a blurry reflection.

Many materials are imperfect reflectors: their reflections are blurred to varying degrees by surface roughness that scatters the reflected rays.
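One common way to obtain this blur in a ray tracer is distributed ray tracing: instead of a single mirror ray, several rays are scattered around the mirror direction and their results averaged. The C sketch below perturbs the mirror direction with a random offset scaled by a roughness parameter; the names and the simple perturbation scheme are illustrative assumptions, and a real renderer would reject or re-sample directions that fall below the surface.

```c
#include <math.h>
#include <stdlib.h>

typedef struct { double x, y, z; } vec3;

static vec3 vadd(vec3 a, vec3 b)     { return (vec3){a.x + b.x, a.y + b.y, a.z + b.z}; }
static vec3 vscale(vec3 a, double s) { return (vec3){a.x * s, a.y * s, a.z * s}; }
static vec3 vnormalize(vec3 a)
{
    double len = sqrt(a.x * a.x + a.y * a.y + a.z * a.z);
    return vscale(a, 1.0 / len);
}
static double rnd(void) { return 2.0 * rand() / (double)RAND_MAX - 1.0; }  /* uniform in [-1, 1] */

/* Scatter the perfect mirror direction: a larger roughness spreads the
   sample rays further from it, which blurs the reflection once many such
   rays are traced and their colors averaged. */
vec3 jitter_reflection(vec3 mirror_dir, double roughness)
{
    vec3 offset = { roughness * rnd(), roughness * rnd(), roughness * rnd() };
    return vnormalize(vadd(mirror_dir, offset));
}
```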

Glossy reflection

The sphere on the left has normal, metallic reflection. The sphere on the right has the same parameters, except that the reflection is marked as "glossy".

A fully glossy reflection shows highlights from light sources, but does not show a clear reflection of other objects.
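Highlights without mirrored surroundings are what a plain specular term provides, since it considers only the light sources and never traces rays toward other objects. The sketch below uses the Blinn-Phong specular term as one such example; the function name and shininess parameter are illustrative, not the shading model of any particular renderer.

```c
#include <math.h>

typedef struct { double x, y, z; } vec3;

static double vdot(vec3 a, vec3 b)     { return a.x * b.x + a.y * b.y + a.z * b.z; }
static vec3   vadd(vec3 a, vec3 b)     { return (vec3){a.x + b.x, a.y + b.y, a.z + b.z}; }
static vec3   vscale(vec3 a, double s) { return (vec3){a.x * s, a.y * s, a.z * s}; }
static vec3   vnormalize(vec3 a)
{
    double len = sqrt(a.x * a.x + a.y * a.y + a.z * a.z);
    return vscale(a, 1.0 / len);
}

/* Blinn-Phong specular term: a bright highlight where the half-vector
   between the light and view directions lines up with the surface normal.
   No rays are traced toward other objects, so light sources appear as
   highlights but the surroundings are not mirrored. All inputs are unit
   vectors pointing away from the shaded point. */
double specular_highlight(vec3 n, vec3 to_light, vec3 to_eye, double shininess)
{
    vec3 h = vnormalize(vadd(to_light, to_eye));   /* half-vector */
    double nh = vdot(n, h);
    return nh > 0.0 ? pow(nh, shininess) : 0.0;
}
```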

Examples of reflections

Wet floor reflections

The wet floor effect [6][better source needed] is a graphics effect popular in conjunction with Web 2.0-style pages, particularly in logos. The effect can be created manually or with an auxiliary tool that applies it automatically. Unlike a standard computer reflection (and the Java water effect popular in first-generation web graphics), the wet floor effect involves a gradient and often a slant in the reflection, so that the mirrored image appears to be hovering over or resting on a wet floor.
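The effect amounts to flipping the logo vertically and fading the copy out with a gradient before placing it beneath the original. The C sketch below shows that per pixel on a hypothetical RGBA8 image buffer; the image struct, function name, and the linear fade are illustrative assumptions.

```c
#include <stdlib.h>

/* Hypothetical RGBA8 image: row-major pixels, 4 bytes per pixel. */
typedef struct {
    int width, height;
    unsigned char *rgba;
} image;

/* Build a "wet floor" reflection: flip the source vertically and fade its
   alpha from max_opacity (next to the original image) down to zero, so the
   copy placed underneath appears to dissolve into the floor. The caller
   owns and must free the returned pixel buffer. */
image make_wet_floor_reflection(const image *src, double max_opacity)
{
    image out = { src->width, src->height,
                  malloc((size_t)src->width * src->height * 4) };
    for (int y = 0; y < out.height; y++) {
        /* Row 0 of the reflection mirrors the bottom row of the source. */
        const unsigned char *in_row  = src->rgba + (size_t)(src->height - 1 - y) * src->width * 4;
        unsigned char       *out_row = out.rgba  + (size_t)y * src->width * 4;
        double fade = max_opacity * (1.0 - (double)y / out.height);
        for (int x = 0; x < out.width; x++) {
            out_row[4 * x + 0] = in_row[4 * x + 0];
            out_row[4 * x + 1] = in_row[4 * x + 1];
            out_row[4 * x + 2] = in_row[4 * x + 2];
            out_row[4 * x + 3] = (unsigned char)(in_row[4 * x + 3] * fade);
        }
    }
    return out;
}
```

A slant, as mentioned above, would additionally shear the reflected rows horizontally before compositing.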

See also

Related Research Articles

<span class="mw-page-title-main">Rendering (computer graphics)</span> Process of generating an image from a model

Rendering or image synthesis is the process of generating a photorealistic or non-photorealistic image from a 2D or 3D model by means of a computer program. The resulting image is referred to as the render. Multiple models can be defined in a scene file containing objects in a strictly defined language or data structure. The scene file contains geometry, viewpoint, texture, lighting, and shading information describing the virtual scene. The data contained in the scene file is then passed to a rendering program to be processed and output to a digital image or raster graphics image file. The term "rendering" is analogous to the concept of an artist's impression of a scene. The term "rendering" is also used to describe the process of calculating effects in a video editing program to produce the final video output.

<span class="mw-page-title-main">Global illumination</span> Group of rendering algorithms used in 3D computer graphics

Global illumination (GI), or indirect illumination, is a group of algorithms used in 3D computer graphics that are meant to add more realistic lighting to 3D scenes. Such algorithms take into account not only the light that comes directly from a light source, but also subsequent cases in which light rays from the same source are reflected by other surfaces in the scene, whether reflective or not.

<span class="mw-page-title-main">Radiosity (computer graphics)</span> Computer graphics rendering method using diffuse reflection

In 3D computer graphics, radiosity is an application of the finite element method to solving the rendering equation for scenes with surfaces that reflect light diffusely. Unlike rendering methods that use Monte Carlo algorithms, which handle all types of light paths, typical radiosity methods only account for paths that leave a light source and are reflected diffusely some number of times before hitting the eye. Radiosity is a global illumination algorithm in the sense that the illumination arriving on a surface comes not just directly from the light sources, but also from other surfaces reflecting light. Radiosity is viewpoint independent, which increases the computation involved, but makes the results useful for all viewpoints.

<span class="mw-page-title-main">Ray tracing (graphics)</span> Rendering method

In 3-D computer graphics, ray tracing is a technique for modeling light transport for use in a wide variety of rendering algorithms for generating digital images.

In computer graphics, photon mapping is a two-pass global illumination rendering algorithm developed by Henrik Wann Jensen between 1995 and 2001 that approximately solves the rendering equation for integrating light radiance at a given point in space. Rays from the light source and rays from the camera are traced independently until some termination criterion is met, then they are connected in a second step to produce a radiance value. The algorithm is used to realistically simulate the interaction of light with different types of objects. Specifically, it is capable of simulating the refraction of light through a transparent substance such as glass or water, diffuse interreflection between illuminated objects, the subsurface scattering of light in translucent materials, and some of the effects caused by particulate matter such as smoke or water vapor. Photon mapping can also be extended to more accurate simulations of light, such as spectral rendering. Progressive photon mapping (PPM) starts with ray tracing and then adds more and more photon mapping passes to provide a progressively more accurate render.

<span class="mw-page-title-main">Ray casting</span> Methodological basis for 3D CAD/CAM solid modeling and image rendering

Ray casting is the methodological basis for 3D CAD/CAM solid modeling and image rendering. It is essentially the same as ray tracing for computer graphics, where virtual light rays are "cast" or "traced" on their path from the focal point of a camera through each pixel in the camera sensor to determine what is visible along the ray in the 3D scene. The term "ray casting" was introduced by Scott Roth while at the General Motors Research Labs from 1978 to 1980. His paper, "Ray Casting for Modeling Solids", describes how to model solid objects by combining primitive solids, such as blocks and cylinders, using the set operators union (+), intersection (&), and difference (-). The general idea of using these binary operators for solid modeling is largely due to Voelcker and Requicha's geometric modelling group at the University of Rochester. See solid modeling for a broad overview of solid modeling methods. Roth's 1979 system was demonstrated with a U-joint modeled from cylinders and blocks in a binary tree.

<span class="mw-page-title-main">Specular reflection</span> Mirror-like wave reflection

Specular reflection, or regular reflection, is the mirror-like reflection of waves, such as light, from a surface.

<span class="mw-page-title-main">Real-time computer graphics</span> Sub-field of computer graphics

Real-time computer graphics or real-time rendering is the sub-field of computer graphics focused on producing and analyzing images in real time. The term can refer to anything from rendering an application's graphical user interface (GUI) to real-time image analysis, but is most often used in reference to interactive 3D computer graphics, typically using a graphics processing unit (GPU). One example of this concept is a video game that rapidly renders changing 3D environments to produce an illusion of motion.

Distributed ray tracing, also called distribution ray tracing and stochastic ray tracing, is a refinement of ray tracing that allows for the rendering of "soft" phenomena.

Beam tracing is an algorithm to simulate wave propagation. It was developed in the context of computer graphics to render 3D scenes, but it has been also used in other similar areas such as acoustics and electromagnetism simulations.

<span class="mw-page-title-main">Subsurface scattering</span>

Subsurface scattering (SSS), also known as subsurface light transport (SSLT), is a mechanism of light transport in which light that penetrates the surface of a translucent object is scattered by interacting with the material and exits the surface potentially at a different point. Light generally penetrates the surface and gets scattered a number of times at irregular angles inside the material before passing back out of the material at a different angle than it would have had if it had been reflected directly off the surface.

<span class="mw-page-title-main">Reflection mapping</span>

In computer graphics, environment mapping, or reflection mapping, is an efficient image-based lighting technique for approximating the appearance of a reflective surface by means of a precomputed texture. The texture is used to store the image of the distant environment surrounding the rendered object.

<span class="mw-page-title-main">Path tracing</span> Computer graphics method

Path tracing is a computer graphics Monte Carlo method of rendering images of three-dimensional scenes such that the global illumination is faithful to reality. Fundamentally, the algorithm integrates over all the illuminance arriving at a single point on the surface of an object. This illuminance is then reduced by a surface reflectance function (BRDF) to determine how much of it will go towards the viewpoint camera. This integration procedure is repeated for every pixel in the output image. When combined with physically accurate models of surfaces, accurate models of real light sources, and optically correct cameras, path tracing can produce still images that are indistinguishable from photographs.

<span class="mw-page-title-main">Cube mapping</span> Method of environment mapping in computer graphics

In computer graphics, cube mapping is a method of environment mapping that uses the six faces of a cube as the map shape. The environment is projected onto the sides of a cube and stored as six square textures, or unfolded into six regions of a single texture.

<span class="mw-page-title-main">3D rendering</span> Process of converting 3D scenes into 2D images

3D rendering is the 3D computer graphics process of converting 3D models into 2D images on a computer. 3D renders may include photorealistic effects or non-photorealistic styles.

<span class="mw-page-title-main">Demo effect</span>

The demo effect is a name for computer-based real-time visual effects found in demos created by the demoscene.

Computer graphics lighting is the collection of techniques used to simulate light in computer graphics scenes. While lighting techniques offer flexibility in the level of detail and functionality available, they also operate at different levels of computational demand and complexity. Graphics artists can choose from a variety of light sources, models, shading techniques, and effects to suit the needs of each application.

<span class="mw-page-title-main">Kerkythea</span> Standalone rendering system

Kerkythea is a standalone rendering system that supports raytracing and Metropolis light transport, uses physically accurate materials and lighting, and is distributed as freeware. Currently, the program can be integrated with any software that can export files in obj and 3ds formats, including 3ds Max, Blender, LightWave 3D, SketchUp, Silo and Wings3D.

This is a glossary of terms relating to computer graphics.

<span class="mw-page-title-main">Physically based rendering</span> Computer graphics technique

Physically based rendering (PBR) is a computer graphics approach that seeks to render images in a way that models light and surfaces according to real-world optics. It is often referred to as "Physically Based Lighting" or "Physically Based Shading". Many PBR pipelines aim to achieve photorealism. Feasible and quick approximations of the bidirectional reflectance distribution function and rendering equation are of mathematical importance in this field. Photogrammetry may be used to help discover and encode accurate optical properties of materials. PBR principles may be implemented in real-time applications using shaders, or in offline applications using ray tracing or path tracing.

References

  1. Kilgard, Mark (1999). "Improving Shadows and Reflections via the Stencil Buffer". ResearchGate: 7. Retrieved 25 April 2020.
  2. Off Camera Secrets, Metal Gear Solid: Twin Snakes - Boundary Break. 2016-11-28. Event occurs at 4:32. Retrieved 25 April 2020.
  3. Fernando, Randima; Kilgard, Mark (2003). The Cg Tutorial: The Definitive Guide to Programmable Real-Time Graphics. Addison-Wesley Professional. ISBN 9780321194961.
  4. Hongtongsak, Kevin. "Dynamic Cubemapping". people.engr.tamu.edu. Retrieved 2024-03-09.
  5. Kasyan, Nickolay; Schulz, Nicolas; Sousa, Tiago (18 August 2011). "Secrets of CryENGINE 3 Graphics Technology" (PDF). Retrieved 27 November 2022.
  6. Nate. "WetFloor". Archived from the original on 2008-05-31.