Shading

Flat shading describes a number of simple lighting techniques. In this case, the lighting value is determined once for each face. The color value can also be determined per object or per vertex.
Gouraud shading (1971) improved the appearance of curved objects.
Phong shading interpolation is a more realistic shading technique developed by Bui Tuong Phong in 1973.

Shading refers to the depiction of depth perception in 3D models (within the field of 3D computer graphics) or illustrations (in visual art) by varying the level of darkness. [1] Shading tries to approximate local behavior of light on the object's surface and is not to be confused with techniques of adding shadows, such as shadow mapping or shadow volumes, which fall under global behavior of light.

In drawing

Shading is traditionally used in drawing to depict a range of darkness by applying media more densely or with a darker shade for darker areas, and less densely or with a lighter shade for lighter areas. Light patterns, such as objects having light and shaded areas, help to create the illusion of depth on paper. [2] [3]

There are various techniques of shading, including cross hatching, where perpendicular lines of varying closeness are drawn in a grid pattern to shade an area. The closer the lines are together, the darker the area appears. Likewise, the farther apart the lines are, the lighter the area appears.

Powder shading is a sketching shading method. In this style, stumping powder and paper stumps are used to draw a picture. (The technique can also be used in color.) The stumping powder is smooth and does not contain any shiny particles. The paper used should have a slight grain so that the powder remains on it.

In computer graphics

Gouraud shading, developed by Henri Gouraud in 1971, was one of the first shading techniques developed for 3D computer graphics.
A knot shaded with different materials, including aluminum, brass, bronze, copper, electrum, gold, iron, pewter, silver, clay, foil, glaze, plastic, rubber, satin, and velvet, created in Mathematica 13.1.

In computer graphics, shading refers to the process of altering the color of an object, surface, or polygon in the 3D scene based on factors such as (but not limited to) the surface's angle to lights, its distance from lights, its angle to the camera, and material properties (e.g. the bidirectional reflectance distribution function), in order to create a photorealistic effect.

Shading is performed during the rendering process by a program called a shader.

Surface angle to a light source

Shading alters the colors of faces in a 3D model based on the angle of the surface to a light source or light sources.
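
As an illustration, here is a minimal sketch in Python of this idea: the brightness of a face is taken to be proportional to the cosine of the angle between its surface normal and the direction to the light (a Lambertian diffuse term). The function names and the single-light setup are illustrative assumptions, not any particular renderer's API.

```python
import math

def lambert_intensity(normal, to_light, light_intensity=1.0):
    """Diffuse (Lambertian) brightness of a face: proportional to the cosine
    of the angle between the surface normal and the direction to the light."""
    def normalize(v):
        length = math.sqrt(sum(c * c for c in v))
        return tuple(c / length for c in v)
    n = normalize(normal)
    l = normalize(to_light)
    cos_theta = sum(a * b for a, b in zip(n, l))
    # A face turned away from the light receives no direct light.
    return light_intensity * max(0.0, cos_theta)

# A face whose normal points straight up, lit from 45 degrees above:
print(lambert_intensity((0.0, 0.0, 1.0), (0.0, 1.0, 1.0)))  # about 0.707
```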

The first image below has the faces of the box rendered, but all in the same color. Edge lines have been rendered here as well, which makes the image easier to see.

The second image is the same model rendered without edge lines. It is difficult to tell where one face of the box ends and the next begins.

The third image has shading enabled, which makes the image more realistic and makes it easier to see which face is which.

Rendered image of a box. This image has no shading on its faces, but instead uses edge lines (also known as a wireframe) to separate the faces and a bolder outline to separate the object from the background.
The same object with the lines removed; the only indication of the interior geometry is the object's silhouette.
The same object rendered with flat shading. The color of the three visible front faces has been set based on their angle (determined by the normal vector) to the light sources.

Types of lighting

Shading effects from a floodlight using a ray tracer.

When a shader computes the resulting color, it uses a lighting model to determine the amount of light reflected at specific points on the surface. Different lighting models can be combined with different shading techniques: the lighting model determines how much light is reflected, while the shading technique determines how that information is used to compute the final result. For example, lighting may be evaluated only at specific points, with interpolation filling in the rest. The shader also decides how many light sources to take into account, among other things.
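
A minimal sketch of this division of labour, with illustrative names and unit-length vectors assumed: the lighting model is a plain function that returns the reflected light for one point and one light, while the shading code decides how many lights to consider.

```python
def lambert(normal, to_light):
    """Lighting model (illustrative): diffuse reflection for unit vectors."""
    return max(0.0, sum(a * b for a, b in zip(normal, to_light)))

def shade_point(normal, light_dirs, lighting_model=lambert, max_lights=4):
    """Shading code: chooses how many light sources to consider and sums
    the lighting model's contributions for one surface point."""
    total = sum(lighting_model(normal, l) for l in light_dirs[:max_lights])
    return min(1.0, total)

# One light overhead and one grazing light from the side:
print(shade_point((0.0, 0.0, 1.0), [(0.0, 0.0, 1.0), (1.0, 0.0, 0.0)]))  # 1.0
```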

Ambient lighting

An ambient light source represents an omnidirectional, fixed-intensity, fixed-color light source that affects all objects in the scene equally; it is omnipresent. During rendering, all objects in the scene are brightened with the specified intensity and color. This type of light source is mainly used to provide the scene with a basic view of its different objects. It is the simplest type of lighting to implement, and it models how light can be scattered or reflected many times, thereby producing a uniform effect.

Ambient lighting can be combined with ambient occlusion to represent how exposed each point of the scene is, affecting the amount of ambient light it can reflect. This produces diffused, non-directional lighting throughout the scene, casting no clear shadows, but with enclosed and sheltered areas darkened. The result is usually visually similar to an overcast day.
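
A minimal sketch of an ambient term with an optional ambient-occlusion factor; the color representation (RGB tuples in [0, 1]) is an assumption made for the example.

```python
def ambient_term(albedo, ambient_color, occlusion=1.0):
    """Ambient contribution: a fixed-intensity, fixed-color light applied to
    every surface equally, scaled by an ambient-occlusion factor in [0, 1]
    (1 = fully exposed, 0 = fully enclosed)."""
    return tuple(a * c * occlusion for a, c in zip(albedo, ambient_color))

# A mid-grey surface under dim white ambient light, half occluded:
print(ambient_term((0.5, 0.5, 0.5), (0.2, 0.2, 0.2), occlusion=0.5))
```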

Point lighting

Light originates from a single point and spreads outward in all directions.

Spotlighting

Models a spotlight: light originates from a single point and spreads outward in a cone.

Area lighting

Light originates from a small area on a single plane. (A more realistic model than a point light source.)

Directional lighting

A directional light source illuminates all objects equally from a given direction, like an area light of infinite size at an infinite distance from the scene; there is shading, but there can be no distance falloff. This is like sunlight.
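
The light types above differ mainly in the direction from which light arrives at a point and in whether intensity falls off with distance. A minimal sketch follows, using plain dictionaries whose keys ("type", "position", "direction", "axis", "cone_angle") are assumptions made for the example; area lights, which require integrating over the emitting surface, are omitted.

```python
import math

def vsub(a, b): return tuple(x - y for x, y in zip(a, b))
def vlen(v): return math.sqrt(sum(x * x for x in v))
def vdot(a, b): return sum(x * y for x, y in zip(a, b))
def vnorm(v):
    length = vlen(v)
    return tuple(x / length for x in v)

def light_sample(light, point):
    """Return (direction_to_light, attenuation) at 'point' for one light.
    The 'direction' of a directional light is taken to point toward the source."""
    if light["type"] == "directional":
        return vnorm(light["direction"]), 1.0      # no distance falloff
    to_light = vsub(light["position"], point)
    distance = vlen(to_light)
    direction = vnorm(to_light)
    attenuation = 1.0 / (distance * distance)      # simple quadratic falloff
    if light["type"] == "spot":
        # Points outside the cone receive nothing from a spotlight.
        cos_angle = vdot(vnorm(light["axis"]), tuple(-c for c in direction))
        if cos_angle < math.cos(light["cone_angle"]):
            attenuation = 0.0
    return direction, attenuation

point_light = {"type": "point", "position": (0.0, 0.0, 2.0)}
print(light_sample(point_light, (0.0, 0.0, 0.0)))  # ((0.0, 0.0, 1.0), 0.25)
```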

Distance falloff

Two boxes rendered with OpenGL. (Note that the color of the two front faces is the same even though one is farther away.)
The same model rendered using ARRIS CAD, which implements distance falloff to make surfaces that are closer to the eye brighter.

In theory, two parallel surfaces are illuminated virtually the same amount by a distant, unblocked light source such as the sun. The distance falloff effect produces images with more varied shading and is therefore more realistic for nearby light sources.

The left image doesn't use distance falloff. Notice that the colors on the front faces of the two boxes are exactly the same. It may appear that there is a slight difference where the two faces directly overlap, but this is an optical illusion caused by the vertical edge below where the two faces meet.

The right image does use distance falloff. Notice that the front face of the closer box is brighter than the front face of the back box. Also, the floor goes from light to dark as it gets farther away.

Calculation

Distance falloff can be calculated in a number of ways:

  • Power of the distance – For a given point at a distance x from the light source, the light intensity received is proportional to 1/x^n.
    • None (n = 0) – The light intensity received is the same regardless of the distance between the point and the light source.
    • Linear (n = 1) – For a given point at a distance x from the light source, the light intensity received is proportional to 1/x.
    • Quadratic (n = 2) – This is how light intensity decreases in reality if the light has a free path (i.e. no fog or anything else in the air that can absorb or scatter the light). For a given point at a distance x from the light source, the light intensity received is proportional to 1/x^2.
  • Any number of other mathematical functions may also be used.
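
A minimal sketch of the power-of-distance falloff above; distances are in arbitrary scene units and the proportionality constant is taken to be 1.

```python
def falloff(distance, n=2):
    """Received intensity proportional to 1/distance**n:
    n = 0 (none), n = 1 (linear), n = 2 (quadratic, physically based for an
    unobstructed light). Any other function of distance could be substituted."""
    return 1.0 / (distance ** n)

for n in (0, 1, 2):
    # Prints 1/d**n for d = 1, 2, 4 at each exponent.
    print(n, [falloff(d, n) for d in (1.0, 2.0, 4.0)])
```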

Shading techniques

During shading, a surface normal is often needed for the lighting computation. The normals can be precomputed and stored for each vertex of the model.
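
A minimal sketch of that precomputation for a triangle mesh, assuming NumPy and an indexed list of triangles; each vertex normal is the (implicitly area-weighted) average of the normals of the faces that share it.

```python
import numpy as np

def vertex_normals(vertices, faces):
    """One unit normal per vertex, averaged from the adjacent face normals."""
    vertices = np.asarray(vertices, dtype=float)
    normals = np.zeros_like(vertices)
    for i0, i1, i2 in faces:
        # The cross product of two edges points along the face normal and has
        # length proportional to the face area, so summing it area-weights.
        face_n = np.cross(vertices[i1] - vertices[i0], vertices[i2] - vertices[i0])
        for i in (i0, i1, i2):
            normals[i] += face_n
    lengths = np.linalg.norm(normals, axis=1, keepdims=True)
    return normals / np.where(lengths == 0.0, 1.0, lengths)
```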

Flat shading

Flat shading a textured cuboid.
Graphics complex of a seashell with flat shading, modeled in Mathematica.

Here, the lighting is evaluated only once for each polygon (usually for the first vertex in the polygon, but sometimes for the centroid of triangle meshes), based on the polygon's surface normal and on the assumption that all polygons are flat. The computed color is used for the whole polygon, making the corners look sharp. This is usually used when more advanced shading techniques are too computationally expensive. Specular highlights are rendered poorly with flat shading: if there happens to be a large specular component at the representative vertex, that brightness is drawn uniformly over the entire face; if a specular highlight doesn't fall on the representative point, it is missed entirely. Consequently, the specular reflection component is usually not included in the flat shading computation.
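
A minimal sketch of flat shading for a triangle mesh, assuming NumPy and a single directional light; the specular term is omitted, as noted above, so only one diffuse color per face is produced.

```python
import numpy as np

def flat_shade(vertices, faces, light_dir, base_color):
    """Evaluate lighting once per face (from its own normal) and reuse the
    resulting constant color for every pixel of that face."""
    vertices = np.asarray(vertices, dtype=float)
    light_dir = np.asarray(light_dir, dtype=float)
    light_dir = light_dir / np.linalg.norm(light_dir)
    colors = []
    for i0, i1, i2 in faces:
        n = np.cross(vertices[i1] - vertices[i0], vertices[i2] - vertices[i0])
        n = n / np.linalg.norm(n)
        diffuse = max(0.0, float(np.dot(n, light_dir)))   # angle to the light
        colors.append(tuple(diffuse * c for c in base_color))
    return colors   # one constant color per polygon
```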

Smooth shading

In contrast to flat shading, where the colors change discontinuously at polygon borders, with smooth shading the color changes from pixel to pixel, resulting in a smooth color transition between two adjacent polygons. Usually, values are first calculated at the vertices, and bilinear interpolation is used to calculate the values of the pixels between the vertices of the polygons. Types of smooth shading include Gouraud shading [4] and Phong shading. [5]

Gouraud shading
  1. Determine the normal at each polygon vertex.
  2. Apply an illumination model to each vertex to calculate the light intensity from the vertex normal.
  3. Interpolate the vertex intensities using bilinear interpolation over the surface polygon.

Problems:

  • Because lighting is computed only at vertices, inaccuracies (especially in specular highlights on large triangles) can become too apparent.
  • T-junctions with adjoining polygons can sometimes result in visual anomalies. In general, T-junctions should be avoided.
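
A minimal sketch of these steps for a single triangle, assuming NumPy, precomputed vertex normals, one directional light and a simple diffuse lighting model; barycentric coordinates stand in for the scanline bilinear interpolation a rasterizer would use, which is equivalent for a triangle.

```python
import numpy as np

def gouraud_triangle(vertex_normals, light_dir, barycentric):
    """Steps 1-2: evaluate the illumination model at each vertex.
    Step 3: interpolate the resulting intensities (not the normals) to the
    pixel given by the barycentric coordinates."""
    light_dir = np.asarray(light_dir, dtype=float)
    light_dir = light_dir / np.linalg.norm(light_dir)
    normals = np.asarray(vertex_normals, dtype=float)
    normals = normals / np.linalg.norm(normals, axis=1, keepdims=True)
    vertex_intensities = np.clip(normals @ light_dir, 0.0, None)
    return float(np.dot(barycentric, vertex_intensities))

# Intensity at the centroid of a triangle with three different vertex normals:
print(gouraud_triangle([(0, 0, 1), (0, 1, 1), (1, 0, 1)],
                       light_dir=(0, 0, 1),
                       barycentric=(1/3, 1/3, 1/3)))   # about 0.80
```
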
Phong shading

Phong shading is similar to Gouraud shading, except that instead of interpolating the light intensities the normals are interpolated between the vertices and the lighting is evaluated per-pixel. Thus, the specular highlights are computed much more precisely than in the Gouraud shading model.

  1. Compute a normal N for each vertex of the polygon.
  2. Using bilinear interpolation compute a normal, Ni for each pixel. (Normal has to be renormalized each time.)
  3. Apply an illumination model to each pixel to calculate the light intensity from Ni.
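
The same triangle and light as in the Gouraud sketch above, but now the normal itself is interpolated and renormalized before the lighting model is applied (again a sketch under the same assumptions).

```python
import numpy as np

def phong_triangle(vertex_normals, light_dir, barycentric):
    """Interpolate the normal to the pixel, renormalize it, and only then
    evaluate the illumination model."""
    light_dir = np.asarray(light_dir, dtype=float)
    light_dir = light_dir / np.linalg.norm(light_dir)
    normals = np.asarray(vertex_normals, dtype=float)
    normals = normals / np.linalg.norm(normals, axis=1, keepdims=True)
    n_pixel = np.asarray(barycentric, dtype=float) @ normals   # step 2
    n_pixel = n_pixel / np.linalg.norm(n_pixel)                # renormalize
    return max(0.0, float(np.dot(n_pixel, light_dir)))         # step 3

print(phong_triangle([(0, 0, 1), (0, 1, 1), (1, 0, 1)],
                     light_dir=(0, 0, 1),
                     barycentric=(1/3, 1/3, 1/3)))   # about 0.92
```

Even with a purely diffuse model, the two sketches give different values for the same inputs, which illustrates why highlights are placed more precisely when the normal, rather than the intensity, is interpolated.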

Deferred shading

Deferred shading is a shading technique in which the computation of shading is deferred to a later stage by rendering in two passes, potentially increasing performance by avoiding expensive shading work on pixels that would ultimately be discarded. The first pass only captures surface parameters (such as depth, normals and material parameters); the second performs the actual shading and computes the final colors. [6] [7] [8]: 884
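
A minimal sketch of the two passes, assuming NumPy; the G-buffer here stores only a normal and an albedo per pixel, and the "fragments" list is a stand-in for the depth-tested output of a real rasterizer.

```python
import numpy as np

def geometry_pass(width, height, fragments):
    """First pass: store surface parameters (normal, albedo) per pixel."""
    gbuffer = {"normal": np.zeros((height, width, 3)),
               "albedo": np.zeros((height, width, 3))}
    for y, x, normal, albedo in fragments:    # assumed depth-tested already
        gbuffer["normal"][y, x] = normal
        gbuffer["albedo"][y, x] = albedo
    return gbuffer

def lighting_pass(gbuffer, light_dir):
    """Second pass: shade each stored pixel exactly once from the G-buffer,
    so no lighting work is spent on fragments that were overdrawn."""
    light_dir = np.asarray(light_dir, dtype=float)
    light_dir = light_dir / np.linalg.norm(light_dir)
    n_dot_l = np.clip(gbuffer["normal"] @ light_dir, 0.0, None)[..., np.newaxis]
    return gbuffer["albedo"] * n_dot_l

gbuf = geometry_pass(4, 4, [(1, 2, (0.0, 0.0, 1.0), (1.0, 0.5, 0.2))])
print(lighting_pass(gbuf, (0.0, 0.0, 1.0))[1, 2])   # [1.  0.5 0.2]
```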

Other approaches

Both Gouraud shading and Phong shading can be implemented using bilinear interpolation. Bishop and Weimer [9] proposed to use a Taylor series expansion of the resulting expression from applying an illumination model and bilinear interpolation of the normals. Hence, second-degree polynomial interpolation was used. This type of biquadratic interpolation was further elaborated by Barrera et al., [10] where one second-order polynomial was used to interpolate the diffuse light of the Phong reflection model and another second-order polynomial was used for the specular light.

Spherical linear interpolation (Slerp) was used by Kuij and Blake [11] for computing both the normal over the polygon, as well as the vector in the direction to the light source. A similar approach was proposed by Hast, [12] which uses quaternion interpolation of the normals with the advantage that the normal will always have unit length and the computationally heavy normalization is avoided.
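
A minimal sketch of spherical linear interpolation between two unit normals, the idea behind the angular-interpolation approach; the quaternion formulation of Hast and the biquadratic schemes above are not reproduced here.

```python
import math

def slerp(n0, n1, t):
    """Spherical linear interpolation between two unit vectors; the result
    stays unit length, so no per-pixel renormalization is needed."""
    dot = max(-1.0, min(1.0, sum(a * b for a, b in zip(n0, n1))))
    theta = math.acos(dot)
    if theta < 1e-6:                      # nearly parallel: just return n0
        return n0
    s = math.sin(theta)
    w0 = math.sin((1.0 - t) * theta) / s
    w1 = math.sin(t * theta) / s
    return tuple(w0 * a + w1 * b for a, b in zip(n0, n1))

print(slerp((0.0, 0.0, 1.0), (1.0, 0.0, 0.0), 0.5))   # a 45-degree unit normal
```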

Flat vs. smooth shading

  • Color per face: flat shading uses the same color for every pixel in a face (usually the color of the first vertex), whereas smooth shading gives each point of the face its own color by linearly interpolating either colors or normals between the vertices.
  • Edges: with flat shading, edges appear more pronounced than they would on a real object, because in reality almost all edges are somewhat rounded; with smooth shading, the edges disappear.
  • What is visualized: flat shading visualizes the individual faces; smooth shading visualizes the underlying surface.
  • Suitability: flat shading is not suitable for smooth objects; smooth shading is suitable for any object.
  • Cost: flat shading is less computationally expensive; smooth shading is more computationally expensive.

Computer vision

"Shape from shading" reconstruction Photometric stereo.png
"Shape from shading" reconstruction

In computer vision, some methods for 3D reconstruction are based on shading, or shape-from-shading. Based on an image's shading, a three-dimensional model can be reconstructed from a single photograph. [13]
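
The figure above shows a photometric-stereo result, a multi-image variant of shading-based reconstruction. A minimal sketch of that variant follows, assuming NumPy, a Lambertian surface and known light directions; recovering shape from a single photograph, as in Horn's work, requires considerably more machinery and is not shown.

```python
import numpy as np

def photometric_stereo(images, light_dirs):
    """Recover per-pixel unit normals and albedo from several images of a
    Lambertian surface: each pixel satisfies I_k = albedo * (L_k . n), which
    is solved per pixel in the least-squares sense."""
    images = np.asarray(images, dtype=float)          # shape (k, h, w)
    light_dirs = np.asarray(light_dirs, dtype=float)  # shape (k, 3)
    k, h, w = images.shape
    pixels = images.reshape(k, -1)                    # one column per pixel
    b, *_ = np.linalg.lstsq(light_dirs, pixels, rcond=None)   # shape (3, h*w)
    albedo = np.linalg.norm(b, axis=0)
    normals = b / np.where(albedo == 0.0, 1.0, albedo)
    return normals.reshape(3, h, w), albedo.reshape(h, w)
```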

Related Research Articles

Rendering (computer graphics)

Rendering or image synthesis is the process of generating a photorealistic or non-photorealistic image from a 2D or 3D model by means of a computer program. The resulting image is referred to as a rendering. Multiple models can be defined in a scene file containing objects in a strictly defined language or data structure. The scene file contains geometry, viewpoint, textures, lighting, and shading information describing the virtual scene. The data contained in the scene file is then passed to a rendering program to be processed and output to a digital image or raster graphics image file. The term "rendering" is analogous to the concept of an artist's impression of a scene. The term "rendering" is also used to describe the process of calculating effects in a video editing program to produce the final video output.

Gouraud shading

Gouraud shading, named after Henri Gouraud, is an interpolation method used in computer graphics to produce continuous shading of surfaces represented by polygon meshes. In practice, Gouraud shading is most often used to achieve continuous lighting on triangle meshes by computing the lighting at the corners of each triangle and linearly interpolating the resulting colours for each pixel covered by the triangle. Gouraud first published the technique in 1971. However, enhanced hardware support for superior shading models has rendered Gouraud shading largely obsolete in modern rendering.

Texture mapping

Texture mapping is a method for mapping a texture on a computer-generated graphic. "Texture" in this context can be high frequency detail, surface texture, or color.

The Phong reflection model is an empirical model of the local illumination of points on a surface designed by the computer graphics researcher Bui Tuong Phong. In 3D computer graphics, it is sometimes referred to as "Phong shading", particularly if the model is used with the interpolation method of the same name and in the context of pixel shaders or other places where a lighting calculation can be referred to as “shading”.

Phong shading

In 3D computer graphics, Phong shading, Phong interpolation, or normal-vector interpolation shading is an interpolation technique for surface shading invented by computer graphics pioneer Bui Tuong Phong. Phong shading interpolates surface normals across rasterized polygons and computes pixel colors based on the interpolated normals and a reflection model. Phong shading may also refer to the specific combination of Phong interpolation and the Phong reflection model.

Normal mapping

In 3D computer graphics, normal mapping, or Dot3 bump mapping, is a texture mapping technique used for faking the lighting of bumps and dents – an implementation of bump mapping. It is used to add details without using more polygons. A common use of this technique is to greatly enhance the appearance and details of a low polygon model by generating a normal map from a high polygon model or height map.

Ray casting

Ray casting is the methodological basis for 3D CAD/CAM solid modeling and image rendering. It is essentially the same as ray tracing for computer graphics, where virtual light rays are "cast" or "traced" on their path from the focal point of a camera through each pixel in the camera sensor to determine what is visible along the ray in the 3D scene. The term "ray casting" was introduced by Scott Roth while at the General Motors Research Labs from 1978–1980. His paper, "Ray Casting for Modeling Solids", describes modeling solid objects by combining primitive solids, such as blocks and cylinders, using the set operators union (+), intersection (&), and difference (-). The general idea of using these binary operators for solid modeling is largely due to Voelcker and Requicha's geometric modelling group at the University of Rochester. See solid modeling for a broad overview of solid modeling methods. Roth's 1979 ray casting system was used, for example, to model a U-joint from cylinders and blocks arranged in a binary tree.

Shader

In computer graphics, a shader is a computer program that calculates the appropriate levels of light, darkness, and color during the rendering of a 3D scene—a process known as shading. Shaders have evolved to perform a variety of specialized functions in computer graphics special effects and video post-processing, as well as general-purpose computing on graphics processing units.

Lightmap

A lightmap is a data structure used in lightmapping, a form of surface caching in which the brightness of surfaces in a virtual scene is pre-calculated and stored in texture maps for later use. Lightmaps are most commonly applied to static objects in applications that use real-time 3D computer graphics, such as video games, in order to provide lighting effects such as global illumination at a relatively low computational cost.

Reflection mapping

In computer graphics, reflection mapping or environment mapping is an efficient image-based lighting technique for approximating the appearance of a reflective surface by means of a precomputed texture. The texture is used to store the image of the distant environment surrounding the rendered object.

In computer graphics, per-pixel lighting refers to any technique for lighting an image or scene that calculates illumination for each pixel on a rendered image. This is in contrast to other popular methods of lighting such as vertex lighting, which calculates illumination at each vertex of a 3D model and then interpolates the resulting values over the model's faces to calculate the final per-pixel color values.

3D rendering

3D rendering is the 3D computer graphics process of converting 3D models into 2D images on a computer. 3D renders may include photorealistic effects or non-photorealistic styles.

The Blinn–Phong reflection model, also called the modified Phong reflection model, is a modification developed by Jim Blinn to the Phong reflection model.

Computer graphics lighting is the collection of techniques used to simulate light in computer graphics scenes. While lighting techniques offer flexibility in the level of detail and functionality available, they also operate at different levels of computational demand and complexity. Graphics artists can choose from a variety of light sources, models, shading techniques, and effects to suit the needs of each application.

Vertex normal

In the geometry of computer graphics, a vertex normal at a vertex of a polyhedron is a directional vector associated with a vertex, intended as a replacement for the true geometric normal of the surface. Commonly, it is computed as the normalized average of the surface normals of the faces that contain that vertex. The average can be weighted, for example by the area of the face, or it can be unweighted. Vertex normals can also be computed for polygonal approximations to surfaces such as NURBS, or specified explicitly for artistic purposes. Vertex normals are used in Gouraud shading, Phong shading and other lighting models. Using vertex normals, much smoother shading than flat shading can be achieved; however, without some modifications to topology, such as support loops, it cannot produce a sharper edge.

Vertex (computer graphics)

A vertex in computer graphics is a data structure that describes certain attributes, like the position of a point in 2D or 3D space, or multiple points on a surface.

In the field of 3D computer graphics, Multiple Render Targets, or MRT, is a feature of modern graphics processing units (GPUs) that allows the programmable rendering pipeline to render images to multiple render target textures at once. These textures can then be used as inputs to other shaders or as texture maps applied to 3D models. Introduced by OpenGL 2.0 and Direct3D 9, MRT can be invaluable to real-time 3D applications such as video games. Before the advent of MRT, a programmer would have to issue a command to the GPU to draw the 3D scene once for each render target texture, resulting in redundant vertex transformations which, in a real-time program expected to run as fast as possible, can be quite time-consuming. With MRT, a programmer creates a pixel shader that returns an output value for each render target. This pixel shader then renders to all render targets with a single draw command.

PICA200 is a graphics processing unit (GPU) designed by Digital Media Professionals Inc. (DMP), a Japanese GPU design startup company, for use in embedded devices such as vehicle systems, mobile phones, cameras, and game consoles. The PICA200 is an IP Core which can be licensed to other companies to incorporate into their SOCs. It was most notably licensed for use in the Nintendo 3DS.

This is a glossary of terms relating to computer graphics.

References

  1. "Graphics: Shading". hexianghu.com. Retrieved 2019-09-10.
  2. "Drawing Techniques". Drawing With Confidence. Archived from the original on November 24, 2012. Retrieved 19 September 2012.
  3. "Shading Tutorial, How to Shade in Drawing". Dueysdrawings.com. 2007-06-21. Retrieved 2012-02-11.
  4. Gouraud, Henri (1971). "Continuous shading of curved surfaces". IEEE Transactions on Computers. C-20 (6): 623–629. doi:10.1109/T-C.1971.223313. S2CID 123827991.
  5. Phong, Bui Tuong (1975). "Illumination for computer generated pictures". Communications of the ACM. 18 (6): 311–317.
  6. "Forward Rendering vs. Deferred Rendering".
  7. "LearnOpenGL - Deferred Shading".
  8. Akenine-Möller, Tomas; Haines, Eric; Hoffman, Naty (2018). Real-Time Rendering (Fourth ed.). ISBN 978-1-1386-2700-0.
  9. Gary Bishop and David M. Weimer. 1986. Fast Phong shading. SIGGRAPH Comput. Graph. 20, 4 (August 1986), 103–106.
  10. T. Barrera, A. Hast, E. Bengtsson. Fast Near Phong-Quality Software Shading. WSCG'06, pp. 109–116. 2006
  11. Kuijk, A. A. M. and E. H. Blake, Faster Phong shading via angular interpolation. Computer Graphics Forum 8(4):315–324. 1989 (PDF)
  12. A. Hast. Shading by Quaternion Interpolation. WSCG'05. pp. 53–56. 2005.
  13. Horn, Berthold K.P. "Shape from shading: A method for obtaining the shape of a smooth opaque object from one view." (1970). (PDF)

Further reading