3D rendering

3D rendering is the 3D computer graphics process of converting 3D models into 2D images on a computer. 3D renders may include photorealistic effects or non-photorealistic styles.

Rendering methods

A photorealistic 3D render of six computer fans using radiosity rendering, depth of field (DOF) and procedural materials

Rendering is the final process of creating the actual 2D image or animation from the prepared scene. This can be compared to taking a photo or filming the scene after the setup is finished in real life.[1] Several different, and often specialized, rendering methods have been developed, ranging from the distinctly non-realistic wireframe rendering through polygon-based rendering to more advanced techniques such as scanline rendering, ray tracing, and radiosity. Rendering may take from fractions of a second to days for a single image or frame. In general, different methods are better suited for either photorealistic rendering or real-time rendering.[2]

Real-time

A screenshot from Second Life, a 2003 online virtual world which renders frames in real time

Rendering for interactive media, such as games and simulations, is calculated and displayed in real time, at rates of approximately 20 to 120 frames per second. In real-time rendering, the goal is to show as much information as the eye can process in a fraction of a second, i.e. in one frame: in the case of a 30 frame-per-second animation, a frame encompasses one 30th of a second.

The primary goal is to achieve the highest possible degree of photorealism at an acceptable minimum rendering speed (usually 24 frames per second, the minimum the human eye needs in order to perceive the illusion of movement). In fact, the way the eye perceives the world can be exploited, so the final image presented is not necessarily an image of the real world, but one close enough for the human eye to tolerate.
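
To make the frame-rate arithmetic above concrete, the short sketch below computes the time budget a renderer has for a single frame at several common targets; the frame rates shown are illustrative values, not fixed requirements.

```python
# A minimal sketch of the real-time frame budget described above.
# The frame rates are illustrative values, not fixed requirements.

def frame_budget_ms(fps: float) -> float:
    """Time available to render one frame, in milliseconds."""
    return 1000.0 / fps

for fps in (24, 30, 60, 120):
    print(f"{fps:>3} fps -> {frame_budget_ms(fps):6.2f} ms per frame")

# 24 fps leaves ~41.67 ms per frame; at 120 fps the renderer has
# only ~8.33 ms to shade, project, and display the entire scene.
```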

Rendering software may simulate such visual effects as lens flares, depth of field or motion blur. These are attempts to simulate visual phenomena resulting from the optical characteristics of cameras and of the human eye. These effects can lend an element of realism to a scene, even if the effect is merely a simulated artifact of a camera. This is the basic method employed in games, interactive worlds and VRML.

The rapid increase in computer processing power has allowed a progressively higher degree of realism even for real-time rendering, including techniques such as HDR rendering. Real-time rendering is often polygonal and aided by the computer's GPU.[3]

Non-real-time

Computer-generated image (CGI) created by Gilles Tran

Animations for non-interactive media, such as feature films and video, can take much more time to render.[4] Non-real-time rendering makes it possible to leverage limited processing power to obtain higher image quality. Rendering times for individual frames may vary from a few seconds to several days for complex scenes. Rendered frames are stored on a hard disk and can then be transferred to other media such as motion picture film or optical disc. These frames are then displayed sequentially at high frame rates, typically 24, 25, or 30 frames per second (fps), to achieve the illusion of movement.

When the goal is photo-realism, techniques such as ray tracing, path tracing, photon mapping or radiosity are employed. This is the basic method employed in digital media and artistic works. Techniques have been developed for the purpose of simulating other naturally occurring effects, such as the interaction of light with various forms of matter. Examples of such techniques include particle systems (which can simulate rain, smoke, or fire), volumetric sampling (to simulate fog, dust and other spatial atmospheric effects), caustics (to simulate light focusing by uneven light-refracting surfaces, such as the light ripples seen on the bottom of a swimming pool), and subsurface scattering (to simulate light reflecting inside the volumes of solid objects, such as human skin).

The rendering process is computationally expensive, given the complex variety of physical processes being simulated. Computer processing power has increased rapidly over the years, allowing for a progressively higher degree of realistic rendering. Film studios that produce computer-generated animations typically make use of a render farm to generate images in a timely manner. However, falling hardware costs mean that it is entirely possible to create small amounts of 3D animation on a home computer system, avoiding the costs involved in using a render farm.[5] The output of the renderer is often used as only one small part of a completed motion-picture scene. Many layers of material may be rendered separately and integrated into the final shot using compositing software.
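
As a rough illustration of the scale involved, the sketch below estimates the total render time for a feature-length animation; the film length, per-frame cost, and farm size are all assumed values for illustration, not figures from the text.

```python
# A back-of-the-envelope sketch of why studios use render farms.
# All numbers here are assumed for illustration.

film_minutes = 90          # assumed feature length
fps = 24                   # playback rate mentioned above
hours_per_frame = 2.0      # assumed render cost of a complex frame
machines = 1000            # assumed size of a render farm

frames = film_minutes * 60 * fps
total_hours = frames * hours_per_frame

print(f"{frames} frames, {total_hours:,.0f} machine-hours")
print(f"one machine : {total_hours / 24 / 365:,.1f} years")
print(f"{machines} machines: {total_hours / machines / 24:,.1f} days")
```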

Reflection and shading models

Models of reflection/scattering and shading are used to describe the appearance of a surface. Although these issues may seem like problems all on their own, they are studied almost exclusively within the context of rendering. Modern 3D computer graphics rely heavily on a simplified reflection model called the Phong reflection model (not to be confused with Phong shading). In the refraction of light, an important concept is the refractive index; in most 3D programming implementations, the term for this value is "index of refraction" (usually shortened to IOR).
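
A minimal sketch of the Phong reflection model named above, combining ambient, diffuse, and specular terms; the material coefficients, shininess exponent, and vectors are assumed illustrative values.

```python
# A minimal sketch of the Phong reflection model:
# intensity = ambient + diffuse + specular. Vectors are plain tuples;
# all material and light values are assumed for illustration.

import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    n = math.sqrt(dot(v, v))
    return tuple(x / n for x in v)

def reflect(l, n):
    # Mirror the light direction l about the surface normal n.
    d = 2.0 * dot(l, n)
    return tuple(d * nx - lx for lx, nx in zip(l, n))

def phong(normal, to_light, to_viewer, ka=0.1, kd=0.7, ks=0.4, shininess=32):
    n = normalize(normal)
    l = normalize(to_light)
    v = normalize(to_viewer)
    diffuse = kd * max(dot(n, l), 0.0)
    specular = ks * max(dot(reflect(l, n), v), 0.0) ** shininess
    return ka + diffuse + specular

# Light and viewer placed symmetrically above the surface:
print(phong((0, 0, 1), (0, 1, 1), (0, -1, 1)))
```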

Shading can be broken down into two different techniques, which are often studied independently:

  - Surface shading – how light spreads across a surface (used mostly in scanline rendering for real-time 3D rendering)
  - Reflection/scattering – how light interacts with a surface at a given point (used mostly in ray-traced renders for non-real-time photorealistic rendering)

Surface shading algorithms

Popular surface shading algorithms in 3D computer graphics include:

  - Flat shading – shades each polygon of an object based on the polygon's normal and the position and intensity of a light source
  - Gouraud shading – a fast, resource-conscious vertex shading technique used to simulate smoothly shaded surfaces[6] (see the sketch below)
  - Phong shading – interpolates surface normals across polygons and computes a color per pixel, used to simulate specular highlights and smooth shaded surfaces[7]
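
As a concrete illustration, the sketch below performs the interpolation at the heart of Gouraud shading: intensities computed at a triangle's vertices are blended across its interior using barycentric coordinates. The vertex intensities and sample point are assumed values.

```python
# A minimal sketch of Gouraud shading's core idea: light each vertex,
# then linearly interpolate the resulting intensities across the
# triangle. Vertex positions and intensities are assumed values.

def barycentric(p, a, b, c):
    """Barycentric coordinates of 2D point p in triangle (a, b, c)."""
    (px, py), (ax, ay), (bx, by), (cx, cy) = p, a, b, c
    den = (by - cy) * (ax - cx) + (cx - bx) * (ay - cy)
    u = ((by - cy) * (px - cx) + (cx - bx) * (py - cy)) / den
    v = ((cy - ay) * (px - cx) + (ax - cx) * (py - cy)) / den
    return u, v, 1.0 - u - v

# Intensities computed at the three vertices (e.g. by a Phong model):
ia, ib, ic = 0.9, 0.4, 0.1

u, v, w = barycentric((0.3, 0.3), (0.0, 0.0), (1.0, 0.0), (0.0, 1.0))
pixel_intensity = u * ia + v * ib + w * ic
print(pixel_intensity)
```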

Reflection

The Utah teapot with green lighting

Reflection or scattering is the relationship between the incoming and outgoing illumination at a given point. Descriptions of scattering are usually given in terms of a bidirectional scattering distribution function, or BSDF.[8]
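
A minimal sketch of evaluating one of the simplest such functions, the Lambertian (perfectly diffuse) BRDF; the albedo and radiance values are assumed for illustration.

```python
# A minimal sketch of evaluating a Lambertian (perfectly diffuse)
# BRDF, one of the simplest scattering functions a BSDF can describe.
# Outgoing radiance = BRDF * incoming radiance * cos(theta_incident).

import math

def lambertian_brdf(albedo: float) -> float:
    # A Lambertian surface scatters incoming light equally in all
    # directions; energy conservation gives the 1/pi factor.
    return albedo / math.pi

def outgoing_radiance(albedo, incoming_radiance, cos_theta):
    return lambertian_brdf(albedo) * incoming_radiance * max(cos_theta, 0.0)

# Light arriving at 60 degrees from the normal (cos = 0.5):
print(outgoing_radiance(albedo=0.8, incoming_radiance=10.0, cos_theta=0.5))
```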

Shading

Shading addresses how different types of scattering are distributed across the surface (i.e., which scattering function applies where). Descriptions of this kind are typically expressed with a program called a shader.[9] A simple example of shading is texture mapping, which uses an image to specify the diffuse color at each point on a surface, giving it more apparent detail.
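
A minimal sketch of the texture mapping just described: a tiny in-memory "image" supplies the diffuse color at a surface point given its (u, v) coordinates. The texture contents are assumed values.

```python
# A minimal sketch of texture mapping: a 2x2 "image" supplies the
# diffuse color at a surface point from its (u, v) coordinates.

texture = [
    [(255, 0, 0), (0, 255, 0)],    # row 0: red, green
    [(0, 0, 255), (255, 255, 0)],  # row 1: blue, yellow
]

def sample_nearest(tex, u, v):
    """Nearest-neighbor lookup; u and v are in [0, 1]."""
    h, w = len(tex), len(tex[0])
    x = min(int(u * w), w - 1)
    y = min(int(v * h), h - 1)
    return tex[y][x]

# A shader would use this color as the diffuse term at the point:
print(sample_nearest(texture, 0.75, 0.25))  # -> (0, 255, 0)
```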

Some shading techniques include:

  - Bump mapping – invented by Jim Blinn, a normal-perturbation technique used to simulate wrinkled surfaces[10] (see the sketch below)
  - Cel shading – a technique used to imitate the look of hand-drawn animation
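
The sketch below illustrates bump mapping's core idea: perturb the surface normal from a height map before lighting, without changing the underlying geometry. The procedural height field and step size are assumed values.

```python
# A minimal sketch of bump mapping: tilt the surface normal using
# the slopes of a height field, leaving the geometry untouched.

import math

def height(u, v):
    # Assumed procedural height field standing in for a bump texture.
    return 0.05 * math.sin(20 * u) * math.sin(20 * v)

def perturbed_normal(u, v, eps=1e-4):
    # Finite-difference slopes of the height field tilt the normal
    # of an otherwise flat surface whose true normal is (0, 0, 1).
    du = (height(u + eps, v) - height(u - eps, v)) / (2 * eps)
    dv = (height(u, v + eps) - height(u, v - eps)) / (2 * eps)
    n = (-du, -dv, 1.0)
    length = math.sqrt(sum(c * c for c in n))
    return tuple(c / length for c in n)

print(perturbed_normal(0.3, 0.7))  # fed to the usual lighting model
```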

Transport

Transport describes how illumination in a scene gets from one place to another. Visibility is a major component of light transport.

Projection

Perspective projection

The shaded three-dimensional objects must be flattened so that the display device (namely a monitor) can display them in only two dimensions; this process is called 3D projection. This is done using projection and, for most applications, perspective projection. The basic idea behind perspective projection is that objects farther away are made smaller in relation to those that are closer to the eye. Programs produce perspective by scaling each point's horizontal and vertical coordinates in inverse proportion to its distance from the viewer, a step known as the perspective divide. Very wide fields of view produce a "fish-eye" effect in which noticeable image distortion begins to occur. Orthographic projection is used mainly in CAD or CAM applications where scientific modeling requires precise measurements and preservation of the third dimension.
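
A minimal sketch of the perspective divide described above: a camera-space point is scaled in inverse proportion to its depth. The focal length is an assumed value.

```python
# A minimal sketch of perspective projection via the perspective
# divide: image-plane coordinates shrink with distance from the eye.

def project(point, focal_length=1.0):
    """Project a camera-space (x, y, z) point onto the image plane."""
    x, y, z = point
    if z <= 0:
        raise ValueError("point must be in front of the camera")
    return (focal_length * x / z, focal_length * y / z)

# The same offset appears half as large at twice the distance:
print(project((1.0, 1.0, 2.0)))  # (0.5, 0.5)
print(project((1.0, 1.0, 4.0)))  # (0.25, 0.25)
```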

Rendering engines

Render engines may be bundled with or integrated into 3D modeling software, but standalone renderers exist as well. Some render engines are compatible with multiple 3D packages, while others are exclusive to one. The render engine is responsible for transforming the prepared 3D scene into a 2D image or animation. Render engines can be based on different methods, such as rasterization, ray tracing, or path tracing, and, depending on the speed and the outcome expected, they come in the two types described above: real-time and non-real-time.

Assets

CAD libraries provide assets such as 3D models, textures, bump maps, HDRIs, and various computer graphics lighting sources ready to be rendered.[11]

Notes and references

  1. Badler, Norman I. "3D Object Modeling Lecture Series" (PDF). University of North Carolina at Chapel Hill. Archived (PDF) from the original on 2013-03-19.
  2. "Non-Photorealistic Rendering". Duke University. Retrieved 2018-07-23.
  3. "The Science of 3D Rendering". The Institute for Digital Archaeology. Retrieved 2019-01-19.
  4. Christensen, Per H.; Jarosz, Wojciech. "The Path to Path-Traced Movies" (PDF). Archived (PDF) from the original on 2019-06-26.
  5. "How render farm pricing actually works". GarageFarm. 2021-10-24. Retrieved 2021-10-24.
  6. "Gouraud shading". PCMag.
  7. "Phong Shading". Techopedia.
  8. "Fundamentals of Rendering - Reflectance Functions" (PDF). Ohio State University. Archived (PDF) from the original on 2017-06-11.
  9. The word shader is sometimes also used for programs that describe local geometric variation.
  10. "Bump Mapping". web.cs.wpi.edu. Retrieved 2018-07-23.
  11. lephare (2023-05-09). "12 Best SketchUp Rendering Plugins and Software for 2023". Cedreo. Retrieved 2024-08-19.
