Physically based rendering

A diamond plate texture rendered close-up using physically based rendering principles. Microfacet abrasions cover the material, giving it a rough, realistic look even though the material is a metal. Specular highlights are high and realistically modeled at the appropriate edge of the tread using a normal map.

Physically based rendering (PBR) is a computer graphics approach that seeks to render images in a way that models the behavior of light and surfaces according to real-world optics. It is often referred to as "physically based lighting" or "physically based shading". Many PBR pipelines aim to achieve photorealism. Feasible and fast approximations of the bidirectional reflectance distribution function and of the rendering equation are of central mathematical importance in this field. Photogrammetry may be used to help discover and encode accurate optical properties of materials. PBR principles may be implemented in real-time applications using shaders, or in offline applications using ray tracing or path tracing.
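The rendering equation that these approximations target can be stated as follows; this is a standard formulation, and notation varies between texts:

L_o(\mathbf{x}, \omega_o) = L_e(\mathbf{x}, \omega_o) + \int_{\Omega} f_r(\mathbf{x}, \omega_i, \omega_o)\, L_i(\mathbf{x}, \omega_i)\, (\omega_i \cdot \mathbf{n})\, d\omega_i

where L_o is the outgoing radiance at point \mathbf{x}, L_e is emitted radiance, f_r is the BRDF, L_i is incoming radiance, and \mathbf{n} is the surface normal.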


History

Starting in the 1980s, a number of rendering researchers worked on establishing a solid theoretical basis for rendering, including physical correctness. Much of this work was done at the Cornell University Program of Computer Graphics; a 1997 paper from that lab [1] describes the work done at Cornell in this area to that point.

"Physically Based Shading" was introduced by Yoshiharu Gotanda during the course Physically-Based Shading Models in Film and Game Production at the SIGGRAPH 2010. And followed by the course Physically Based Shading in Theory and Practice organised by Stephen Hill and Stephen McAuley between 2012 and 2020.

The phrase "Physically Based Rendering" was more widely popularized by Matt Pharr, Greg Humphreys, and Pat Hanrahan in their book of the same name from 2004, a seminal work in modern computer graphics that won its authors a Technical Achievement Academy Award for special effects. [2] The book is now in its fourth edition. [3]

The first successful, though partial, implementation of physically based rendering in a video game can be found in the 2013 title Remember Me, which, despite being built on a game engine that did not natively support the technology (Unreal Engine 3), was modified to accommodate it. [4] Though a moderate approach to PBR, its accuracy was further refined in later titles released the same year, such as Ryse: Son of Rome and Killzone Shadow Fall, continuing up to the current state of PBR advancements in the 2020s. [5] [6]

Process

Bricks rendered using PBR. Even though this is a rough, opaque surface, more than just diffuse light is reflected from the brighter side of the material, creating small highlights, because "everything is shiny" in the physically based rendering model of the real world. Tessellation is used to generate an object mesh from a heightmap and normal map, creating greater detail.

PBR is, as Joe Wilson puts it, "more of a concept than a strict set of rules" [4] – but the concept contains several distinctive points of note. One of these is that – unlike many previous models that sought to differentiate surfaces between non-reflective and reflective – PBR recognizes that, in the real world, as John Hable puts it, "everything is shiny". [7] Even "flat" or "matte" surfaces such as concrete reflect a small degree of light, and many metals and liquids reflect a great deal of it. Another thing PBR models attempt to do is to integrate photogrammetry (measurements from photographs of real-world materials) to study and replicate real physical ranges of values, in order to accurately simulate albedo, gloss, reflectivity, and other physical properties. Finally, PBR puts a great deal of emphasis on microfacets, and often uses additional textures and mathematical models to capture small-scale specular highlights and cavities resulting from smoothness or roughness, in addition to traditional specular or reflectivity maps.
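As a concrete illustration of these ideas, the following is a minimal sketch in plain C++ of the single-light shading computation a PBR pipeline typically performs under the common metallic-roughness parameterization: a Lambertian diffuse lobe plus a microfacet (Cook-Torrance style) specular lobe with a GGX distribution, Schlick Fresnel, and Smith geometry term. All names and conventions here are illustrative, not taken from any particular engine.

#include <cmath>
#include <algorithm>

struct Vec3 {
    float x, y, z;
    Vec3 operator+(Vec3 b) const { return {x + b.x, y + b.y, z + b.z}; }
    Vec3 operator-(Vec3 b) const { return {x - b.x, y - b.y, z - b.z}; }
    Vec3 operator*(float s) const { return {x * s, y * s, z * s}; }
    Vec3 operator*(Vec3 b) const { return {x * b.x, y * b.y, z * b.z}; }
};

float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
Vec3 normalize(Vec3 v) { float l = std::sqrt(dot(v, v)); return v * (1.0f / l); }
Vec3 lerp(Vec3 a, Vec3 b, float t) { return a * (1.0f - t) + b * t; }

// GGX/Trowbridge-Reitz normal distribution: how many microfacets are
// aligned with the half-vector; larger roughness widens the highlight.
float distributionGGX(float NdotH, float roughness) {
    float a2 = roughness * roughness * roughness * roughness;
    float d = NdotH * NdotH * (a2 - 1.0f) + 1.0f;
    return a2 / (3.14159265f * d * d);
}

// Schlick's approximation of Fresnel reflectance.
Vec3 fresnelSchlick(float cosTheta, Vec3 F0) {
    return F0 + (Vec3{1, 1, 1} - F0) * std::pow(1.0f - cosTheta, 5.0f);
}

// Smith geometry term (Schlick-GGX): masking/shadowing of microfacets.
float geometrySmith(float NdotV, float NdotL, float roughness) {
    float r = roughness + 1.0f;
    float k = r * r / 8.0f;
    float gv = NdotV / (NdotV * (1.0f - k) + k);
    float gl = NdotL / (NdotL * (1.0f - k) + k);
    return gv * gl;
}

// Outgoing radiance at a point lit by a single light, combining a
// Lambertian diffuse lobe and a Cook-Torrance specular lobe.
Vec3 shade(Vec3 N, Vec3 V, Vec3 L, Vec3 albedo,
           float metallic, float roughness, Vec3 lightColor) {
    Vec3 H = normalize(V + L);
    float NdotL = std::max(dot(N, L), 0.0f);
    float NdotV = std::max(dot(N, V), 0.0f);
    float NdotH = std::max(dot(N, H), 0.0f);

    // Dielectrics reflect roughly 4% at normal incidence;
    // metals tint the base reflectance F0 by their albedo.
    Vec3 F0 = lerp(Vec3{0.04f, 0.04f, 0.04f}, albedo, metallic);

    float D = distributionGGX(NdotH, roughness);
    Vec3  F = fresnelSchlick(std::max(dot(H, V), 0.0f), F0);
    float G = geometrySmith(NdotV, NdotL, roughness);
    Vec3 specular = F * (D * G / std::max(4.0f * NdotV * NdotL, 1e-4f));

    // Energy conservation: light reflected specularly is not also
    // diffused, and pure metals have no diffuse lobe at all.
    Vec3 kd = (Vec3{1, 1, 1} - F) * (1.0f - metallic);
    Vec3 diffuse = kd * albedo * (1.0f / 3.14159265f);

    return (diffuse + specular) * lightColor * NdotL;
}

Note how the sketch encodes the points above: even a fully rough surface retains a specular lobe ("everything is shiny"), and roughness controls the microfacet distribution rather than switching reflection off.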

Surfaces

PBR pipelines often use bidirectional scattering distribution functions (BSDFs) to calculate the visible light reflected at a given point on a surface. Common techniques rely on approximations and simplified models, fitted to more accurate data from more time-consuming methods or from laboratory measurements (such as those of a gonioreflectometer).
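One widely used family of such approximations is the microfacet model (often in Cook-Torrance form), which sums a diffuse lobe and a specular lobe assembled from a normal distribution term D, a Fresnel term F, and a geometric masking-shadowing term G. A common statement is:

f_r(\omega_i, \omega_o) = \frac{\rho_d}{\pi} + \frac{D(\mathbf{h})\, F(\omega_o, \mathbf{h})\, G(\omega_i, \omega_o)}{4\, (\mathbf{n} \cdot \omega_i)\, (\mathbf{n} \cdot \omega_o)}

where \rho_d is the diffuse albedo and \mathbf{h} is the half-vector between the incident and outgoing directions.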

As described by researcher Jeff Russell of Marmoset, a surface-focused physically based rendering pipeline may also focus on areas of research such as reflection, diffusion, translucency and transparency, energy conservation, metallicity, Fresnel reflectivity, and microsurface scattering. [6]

Volumes

PBR is also often extended to volume rendering, with areas of research such as light scattering in participating media (for example fog, smoke, and clouds) and subsurface scattering in translucent materials.
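A building block shared by many of these volumetric models is the Beer-Lambert transmittance, which gives the fraction of light that survives travelling a distance s through a medium with extinction coefficient \sigma_t:

T(s) = \exp\!\left( -\int_0^s \sigma_t(t)\, dt \right)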

Application

Thanks to the high performance and low cost of modern hardware, [8] it has become feasible to use PBR not only for industrial but also for entertainment purposes wherever photorealistic images are desired, including video games and movie making. [2] Today's mid- to high-end hardware is capable of producing and rendering PBR content, and there is a market of easy-to-use software that allows designers of all experience levels to take advantage of physically based rendering methods.

A typical application provides an intuitive graphical user interface that allows artists to define and layer materials with arbitrary properties and to assign them to a given 2D or 3D object to recreate the appearance of any synthetic or organic material. Environments can be defined with procedural shaders or textures as well as procedural geometry, meshes, or point clouds. [5] Where possible, all changes are made visible in real time, allowing for quick iteration. Sophisticated applications allow savvy users to write custom shaders in a shading language such as HLSL or GLSL. Increasingly, however, node-based material editors are replacing hand-written shaders for all but the most complex applications; these offer a graph-based workflow with native support for important concepts such as light position, levels of reflection and emission, metallicity, and a wide range of other math and optics functions.
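As a rough sketch of the parameters such editors typically expose per material layer, consider the following hypothetical C++ record; the field names are illustrative and do not correspond to any particular application's API:

#include <array>

// One layer of a layered PBR material, in the common
// metallic-roughness workflow (hypothetical example).
struct PbrMaterialLayer {
    std::array<float, 3> baseColor {0.5f, 0.5f, 0.5f};  // albedo (dielectrics) or reflectance tint (metals)
    float metallic = 0.0f;                              // 0 = dielectric, 1 = metal
    std::array<float, 3> emission {0.0f, 0.0f, 0.0f};   // self-emitted radiance
    float roughness = 0.5f;                             // microfacet roughness; larger means wider highlights
    float opacity = 1.0f;                               // blend weight when layering materials
};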


Related Research Articles

Rendering (computer graphics): Process of generating an image from a model

Rendering or image synthesis is the process of generating a photorealistic or non-photorealistic image from a 2D or 3D model by means of a computer program. The resulting image is referred to as the render. Multiple models can be defined in a scene file containing objects in a strictly defined language or data structure. The scene file contains geometry, viewpoint, texture, lighting, and shading information describing the virtual scene. The data contained in the scene file is then passed to a rendering program to be processed and output to a digital image or raster graphics image file. The term "rendering" is analogous to the concept of an artist's impression of a scene. The term "rendering" is also used to describe the process of calculating effects in a video editing program to produce the final video output.

The RenderMan Interface Specification, or RISpec in short, is an open API developed by Pixar Animation Studios to describe three-dimensional scenes and turn them into digital photorealistic images. It includes the RenderMan Shading Language.

The Phong reflection model is an empirical model of the local illumination of points on a surface designed by the computer graphics researcher Bui Tuong Phong. In 3D computer graphics, it is sometimes referred to as "Phong shading", particularly if the model is used with the interpolation method of the same name and in the context of pixel shaders or other places where a lighting calculation can be referred to as “shading”.
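The model is commonly written as the following sum over the lights in the scene, combining ambient, diffuse, and specular terms:

I_p = k_a i_a + \sum_{m \in \text{lights}} \left( k_d\, (\hat{L}_m \cdot \hat{N})\, i_{m,d} + k_s\, (\hat{R}_m \cdot \hat{V})^{\alpha}\, i_{m,s} \right)

where k_a, k_d, k_s are the ambient, diffuse, and specular reflection constants, \alpha is the shininess exponent, \hat{L}_m, \hat{N}, \hat{R}_m, \hat{V} are the light, normal, reflection, and viewer directions, and the i terms are the light intensities.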

Phong shading: Interpolation technique for surface shading

In 3D computer graphics, Phong shading, Phong interpolation, or normal-vector interpolation shading is an interpolation technique for surface shading invented by computer graphics pioneer Bui Tuong Phong. Phong shading interpolates surface normals across rasterized polygons and computes pixel colors based on the interpolated normals and a reflection model. Phong shading may also refer to the specific combination of Phong interpolation and the Phong reflection model.

Shading: Depicting depth through varying levels of darkness

Shading refers to the depiction of depth perception in 3D models or illustrations by varying the level of darkness. Shading tries to approximate local behavior of light on the object's surface and is not to be confused with techniques of adding shadows, such as shadow mapping or shadow volumes, which fall under global behavior of light.

Diffuse reflection: Reflection with light scattered at random angles

Diffuse reflection is the reflection of light or other waves or particles from a surface such that a ray incident on the surface is scattered at many angles rather than at just one angle as in the case of specular reflection. An ideal diffuse reflecting surface is said to exhibit Lambertian reflection, meaning that there is equal luminance when viewed from all directions lying in the half-space adjacent to the surface.
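For an ideal Lambertian surface this view-independence can be stated compactly: the outgoing radiance is the same in every direction and depends only on the albedo \rho and the irradiance E:

L_o = \frac{\rho}{\pi}\, E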

A shading language is a graphics programming language adapted to programming shader effects. Shading languages usually consist of special data types like "vector", "matrix", "color" and "normal".

Subsurface scattering

Subsurface scattering (SSS), also known as subsurface light transport (SSLT), is a mechanism of light transport in which light that penetrates the surface of a translucent object is scattered by interacting with the material and exits the surface at a different point. The light will generally penetrate the surface and be reflected a number of times at irregular angles inside the material before passing back out of the material at a different angle than it would have had if it had been reflected directly off the surface.
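Formally, subsurface transport generalizes the BRDF to a BSSRDF S, which relates light entering the surface at one point x_i to light leaving it at another point x_o:

L_o(x_o, \omega_o) = \int_A \int_{\Omega} S(x_i, \omega_i; x_o, \omega_o)\, L_i(x_i, \omega_i)\, (\omega_i \cdot \mathbf{n})\, d\omega_i\, dA(x_i)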

Bidirectional reflectance distribution function: Function of four real variables that defines how light is reflected at an opaque surface

The bidirectional reflectance distribution function (BRDF), symbol f_r(\omega_i, \omega_o), is a function of four real variables that defines how light is reflected at an opaque surface. It is employed in the optics of real-world light, in computer graphics algorithms, and in computer vision algorithms. The function takes an incoming light direction \omega_i and an outgoing direction \omega_o, and returns the ratio of reflected radiance exiting along \omega_o to the irradiance incident on the surface from direction \omega_i. Each direction is itself parameterized by an azimuth angle \phi and a zenith angle \theta, therefore the BRDF as a whole is a function of 4 variables. The BRDF has units \mathrm{sr}^{-1}, with steradians (sr) being a unit of solid angle.
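Written out, the definition is the ratio of differential reflected radiance to differential incident irradiance:

f_r(\omega_i, \omega_o) = \frac{dL_o(\omega_o)}{L_i(\omega_i)\, \cos\theta_i\, d\omega_i}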

Path tracing: Computer graphics method

Path tracing is a computer graphics Monte Carlo method of rendering images of three-dimensional scenes such that the global illumination is faithful to reality. Fundamentally, the algorithm integrates over all the illuminance arriving at a single point on the surface of an object. This illuminance is then reduced by a surface reflectance function (BRDF) to determine how much of it will go towards the viewpoint camera. This integration procedure is repeated for every pixel in the output image. When combined with physically accurate models of surfaces, accurate models of real light sources, and optically correct cameras, path tracing can produce still images that are indistinguishable from photographs.
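In practice the integral is estimated with Monte Carlo sampling: directions \omega_k are drawn from a probability density p, and the estimator averages the sampled contributions:

L_o \approx \frac{1}{N} \sum_{k=1}^{N} \frac{f_r(\omega_k, \omega_o)\, L_i(\omega_k)\, (\omega_k \cdot \mathbf{n})}{p(\omega_k)}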

In computer graphics, per-pixel lighting refers to any technique for lighting an image or scene that calculates illumination for each pixel on a rendered image. This is in contrast to other popular methods of lighting such as vertex lighting, which calculates illumination at each vertex of a 3D model and then interpolates the resulting values over the model's faces to calculate the final per-pixel color values.

3D rendering: Process of converting 3D scenes into 2D images

3D rendering is the 3D computer graphics process of converting 3D models into 2D images on a computer. 3D renders may include photorealistic effects or non-photorealistic styles.

Pat Hanrahan: American computer graphics researcher

Patrick M. Hanrahan is an American computer graphics researcher, the Canon USA Professor of Computer Science and Electrical Engineering in the Computer Graphics Laboratory at Stanford University. His research focuses on rendering algorithms, graphics processing units, as well as scientific illustration and visualization. He has received numerous awards, including the 2019 Turing Award.

Computer graphics lighting is the collection of techniques used to simulate light in computer graphics scenes. While lighting techniques offer flexibility in the level of detail and functionality available, they also operate at different levels of computational demand and complexity. Graphics artists can choose from a variety of light sources, models, shading techniques, and effects to suit the needs of each application.

Reflection (computer graphics): Simulation of reflective surfaces

Reflection in computer graphics is used to render reflective objects like mirrors and shiny surfaces.

Computer graphics (computer science): Sub-field of computer science

Computer graphics is a sub-field of computer science which studies methods for digitally synthesizing and manipulating visual content. Although the term often refers to the study of three-dimensional computer graphics, it also encompasses two-dimensional graphics and image processing.

Panta Rhei is a video game engine developed by Capcom for use with the eighth-generation consoles PlayStation 4 and Xbox One, as a replacement for its previous MT Framework engine.

Matt Pharr is an American computer graphics researcher and writer, and one of the primary originators of the physically based rendering process. His research focuses on rendering algorithms, graphics processing units, as well as scientific illustration and visualization.

This is a glossary of terms relating to computer graphics.

References

  1. Greenberg, Donald P. (1 August 1999). "A framework for realistic image synthesis" (PDF). Communications of the ACM. 42 (8): 44–53. doi:10.1145/310930.310970. Archived (PDF) from the original on 24 September 2018. Retrieved 27 November 2017.
  2. Pharr, Matt; Humphreys, Greg; Hanrahan, Pat (2004). Physically Based Rendering: From Theory to Implementation (1st ed.). Morgan Kaufmann. ISBN 9780080538969.
  3. Pharr, Matt; Jakob, Wenzel; Humphreys, Greg (2023). Physically Based Rendering: From Theory to Implementation (4th ed.). The MIT Press. ISBN 9780262048026.
  4. Wilson, Joe. "Physically Based Rendering – And You Can Too!". Retrieved 12 January 2017.
  5. "Point Clouds". Sketchfab Help Center. Retrieved 29 May 2018.
  6. Russell, Jeff. "PBR Theory". Marmoset. Retrieved 20 August 2019.
  7. Hable, John. "Everything Is Shiny". Archived 2016-12-05 at the Wayback Machine. Retrieved 14 November 2016.
  8. Kam, Ken. "How Moore's Law Now Favors Nvidia Over Intel". Forbes. Retrieved 29 May 2018.