Rendering (computer graphics)

A variety of rendering techniques applied to a single 3D scene

An image created by using POV-Ray 3.6

Rendering or image synthesis is the automatic process of generating a photorealistic or non-photorealistic image from a 2D or 3D model (or models in what collectively could be called a scene file) by means of computer programs. The result of displaying such a model can also be called a render. A scene file contains objects in a strictly defined language or data structure; it holds geometry, viewpoint, texture, lighting, and shading information as a description of the virtual scene. The data contained in the scene file is then passed to a rendering program to be processed and output to a digital image or raster graphics image file. The term "rendering" may be by analogy with an "artist's rendering" of a scene.

Non-photorealistic rendering

Non-photorealistic rendering (NPR) is an area of computer graphics that focuses on enabling a wide variety of expressive styles for digital art. In contrast to traditional computer graphics, which has focused on photorealism, NPR is inspired by artistic styles such as painting, drawing, technical illustration, and animated cartoons. NPR has appeared in movies and video games in the form of "toon shading", as well as in scientific visualization, architectural illustration and experimental animation. An example of a modern use of this method is that of cel-shaded animation.

Computer program: instructions to be executed by a computer

A computer program is a collection of instructions that performs a specific task when executed by a computer. A computer requires programs to function.

Data structure: a particular way of storing and organizing data in a computer

In computer science, a data structure is a data organization, management and storage format that enables efficient access and modification. More precisely, a data structure is a collection of data values, the relationships among them, and the functions or operations that can be applied to the data.


Though the technical details of rendering methods vary, the general challenges to overcome in producing a 2D image from a 3D representation stored in a scene file are handled by the graphics pipeline on a rendering device such as a GPU. A GPU is a purpose-built device able to assist a CPU in performing complex rendering calculations. If a scene is to look relatively realistic and predictable under virtual lighting, the rendering software should solve the rendering equation. The rendering equation does not account for all lighting phenomena, but is a general lighting model for computer-generated imagery. 'Rendering' is also used to describe the process of calculating effects in a video editing program to produce final video output.

In computer graphics, a computer graphics pipeline, rendering pipeline or simply graphics pipeline, is a conceptual model that describes what steps a graphics system needs to perform to render a 3D scene to a 2D screen. Once a 3D model has been created, for instance in a video game or any other 3D computer animation, the graphics pipeline is the process of turning that 3D model into what the computer displays. Because the steps required for this operation depend on the software and hardware used and the desired display characteristics, there is no universal graphics pipeline suitable for all cases. However, graphics application programming interfaces (APIs) such as Direct3D and OpenGL were created to unify similar steps and to control the graphics pipeline of a given hardware accelerator. These APIs abstract the underlying hardware and relieve the programmer of writing code to manipulate the graphics hardware accelerators directly.
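
The stage order can be illustrated with a minimal sketch, assuming a toy software pipeline in Python with NumPy; all names here are illustrative, and real pipelines controlled by APIs such as Direct3D or OpenGL contain many more stages (clipping, depth testing, blending and so on):

    # Minimal sketch of the conceptual pipeline stages; real pipelines add
    # many more (clipping, depth test, blending, ...). Names are illustrative.
    import numpy as np

    def vertex_stage(vertices, mvp):
        """Transform model-space vertices to clip space with a 4x4 matrix."""
        homogeneous = np.hstack([vertices, np.ones((len(vertices), 1))])
        clip = homogeneous @ mvp.T
        return clip[:, :3] / clip[:, 3:4]      # perspective divide -> NDC

    def raster_stage(ndc, width, height):
        """Map normalized device coordinates to integer pixel positions."""
        xs = ((ndc[:, 0] + 1) * 0.5 * (width - 1)).astype(int)
        ys = ((1 - ndc[:, 1]) * 0.5 * (height - 1)).astype(int)
        return list(zip(xs, ys))

    def fragment_stage(framebuffer, pixels, color):
        """Shade the pixels produced by the rasterizer (here, only the
        projected vertices, for brevity) with a constant color."""
        for x, y in pixels:
            if 0 <= y < framebuffer.shape[0] and 0 <= x < framebuffer.shape[1]:
                framebuffer[y, x] = color

    framebuffer = np.zeros((64, 64, 3))
    triangle = np.array([[0.0, 0.5, 0.0], [-0.5, -0.5, 0.0], [0.5, -0.5, 0.0]])
    ndc = vertex_stage(triangle, np.eye(4))    # identity MVP for simplicity
    fragment_stage(framebuffer, raster_stage(ndc, 64, 64), (1.0, 1.0, 1.0))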

Graphics processing unit: specialized electronic circuit; graphics accelerator

A graphics processing unit (GPU) is a specialized electronic circuit designed to rapidly manipulate and alter memory to accelerate the creation of images in a frame buffer intended for output to a display device. GPUs are used in embedded systems, mobile phones, personal computers, workstations, and game consoles. Modern GPUs are very efficient at manipulating computer graphics and image processing. Their highly parallel structure makes them more efficient than general-purpose CPUs for algorithms that process large blocks of data in parallel. In a personal computer, a GPU can be present on a video card or embedded on the motherboard. In certain CPUs, they are embedded on the CPU die.

Central processing unit: electronic circuitry within a computer that carries out the instructions of a computer program by performing the basic arithmetic, logical, control and input/output (I/O) operations specified by the instructions

A central processing unit (CPU), also called a central processor or main processor, is the electronic circuitry within a computer that carries out the instructions of a computer program by performing the basic arithmetic, logic, controlling, and input/output (I/O) operations specified by the instructions. The computer industry has used the term "central processing unit" at least since the early 1960s. Traditionally, the term "CPU" refers to a processor, more specifically to its processing unit and control unit (CU), distinguishing these core elements of a computer from external components such as main memory and I/O circuitry.

Rendering is one of the major sub-topics of 3D computer graphics, and in practice is always connected to the others. In the graphics pipeline, it is the last major step, giving the final appearance to the models and animation. With the increasing sophistication of computer graphics since the 1970s, it has become a more distinct subject.

3D computer graphics: graphics that use a three-dimensional representation of geometric data

3D computer graphics, or three-dimensional computer graphics, are graphics that use a three-dimensional representation of geometric data stored in the computer for the purposes of performing calculations and rendering 2D images. Such images may be stored for viewing later or displayed in real time.

Rendering has uses in architecture, video games, simulators, movie or TV visual effects, and design visualization, each employing a different balance of features and techniques. A wide variety of renderers are available as products. Some are integrated into larger modeling and animation packages, some are stand-alone, and some are free open-source projects. On the inside, a renderer is a carefully engineered program based on a selective mixture of disciplines related to light physics, visual perception, mathematics, and software development.

Architectural rendering

Architectural rendering, or architectural illustration, is the art of creating two-dimensional images or animations showing the attributes of a proposed architectural design.

Video game: electronic game that involves interaction with a user interface to generate visual feedback on a video device such as a TV screen or computer monitor

A video game is an electronic game that involves interaction with a user interface to generate visual feedback on a two- or three-dimensional video display device such as a TV screen, virtual reality headset or computer monitor. Since the 1980s, video games have become an increasingly important part of the entertainment industry, and whether they are also a form of art is a matter of dispute.

A simulation is an approximate imitation of the operation of a process or system; the act of simulating first requires that a model be developed. This model is a well-defined description of the simulated subject and represents its key characteristics, such as its behaviour, functions and abstract or physical properties. The model represents the system itself, whereas the simulation represents its operation over time.

In the case of 3D graphics, rendering may be done slowly, as in pre-rendering, or in real time. Pre-rendering is a computationally intensive process that is typically used for movie creation, while real-time rendering is often done for 3D video games, which rely on graphics cards with 3D hardware accelerators.

Real-time computer graphics

Real-time computer graphics or real-time rendering is the sub-field of computer graphics focused on producing and analyzing images in real time. The term can refer to anything from rendering an application's graphical user interface (GUI) to real-time image analysis, but is most often used in reference to interactive 3D computer graphics, typically using a graphics processing unit (GPU). One example of this concept is a video game that rapidly renders changing 3D environments to produce an illusion of motion.

Usage

When the pre-image (usually a wireframe sketch) is complete, rendering is used, which adds bitmap textures or procedural textures, lights, bump mapping and relative position to other objects. The result is a completed image that the consumer or intended viewer sees.

Bitmap textures are digital images representing a surface, a material, a pattern or even a picture, generated by an artist or designer using bitmap editing software such as Adobe Photoshop or GIMP, or simply by scanning an image and, if necessary, retouching it on a personal computer.

Bump mapping: demo effect

Bump mapping is a technique in computer graphics for simulating bumps and wrinkles on the surface of an object. This is achieved by perturbing the surface normals of the object and using the perturbed normal during lighting calculations. The result is an apparently bumpy surface rather than a smooth surface although the surface of the underlying object is not changed. Bump mapping was introduced by James Blinn in 1978.
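
A minimal sketch of the idea, assuming a height map stored as a 2D array and a simple Lambertian lighting term (all names here are illustrative):

    # Minimal sketch of bump mapping: perturb the surface normal using the
    # local gradient of a height map, then light with the perturbed normal.
    # The underlying geometry is never changed. Names are illustrative.
    import numpy as np

    def perturbed_normal(height_map, x, y, normal, strength=1.0):
        """Tilt the geometric normal by the local slope of the height map."""
        dh_dx = (height_map[y, x + 1] - height_map[y, x - 1]) * 0.5
        dh_dy = (height_map[y + 1, x] - height_map[y - 1, x]) * 0.5
        n = normal - strength * np.array([dh_dx, dh_dy, 0.0])
        return n / np.linalg.norm(n)

    height_map = np.random.rand(8, 8)          # stand-in for a bump texture
    n = perturbed_normal(height_map, 3, 3, np.array([0.0, 0.0, 1.0]))
    light_dir = np.array([0.0, 0.0, 1.0])
    diffuse = max(n @ light_dir, 0.0)          # Lambertian term, new normal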

For movie animations, several images (frames) must be rendered, and stitched together in a program capable of making an animation of this sort. Most 3D image editing programs can do this.

Features

A rendered image can be understood in terms of a number of visible features. Rendering research and development has been largely motivated by finding ways to simulate these efficiently. Some relate directly to particular algorithms and techniques, while others are produced by several techniques working together.

Techniques

Many rendering algorithms have been researched, and software used for rendering may employ a number of different techniques to obtain a final image.

Tracing every particle of light in a scene is nearly always completely impractical and would take a stupendous amount of time. Even tracing a portion large enough to produce an image takes an inordinate amount of time if the sampling is not intelligently restricted.

Therefore, a few loose families of more-efficient light transport modelling techniques have emerged: rasterization (including scanline rendering), ray casting, and ray tracing, each discussed in the sections below.

The fourth type of light transport technique, radiosity, is not usually implemented as a rendering technique, but instead calculates the passage of light as it leaves the light source and illuminates surfaces. These surfaces are usually rendered to the display using one of the other three techniques.

Most advanced software combines two or more of the techniques to obtain good-enough results at reasonable cost.

Another distinction is between image order algorithms, which iterate over pixels of the image plane, and object order algorithms, which iterate over objects in the scene. Generally object order is more efficient, as there are usually fewer objects in a scene than pixels.

Scanline rendering and rasterisation

Rendering of the European Extremely Large Telescope.

A high-level representation of an image necessarily contains elements in a different domain from pixels. These elements are referred to as primitives. In a schematic drawing, for instance, line segments and curves might be primitives. In a graphical user interface, windows and buttons might be the primitives. In rendering of 3D models, triangles and polygons in space might be primitives.

If a pixel-by-pixel (image order) approach to rendering is impractical or too slow for some task, then a primitive-by-primitive (object order) approach to rendering may prove useful. Here, one loops through each of the primitives, determines which pixels in the image it affects, and modifies those pixels accordingly. This is called rasterization, and is the rendering method used by all current graphics cards.
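
A minimal sketch of this primitive-by-primitive loop, assuming triangles in pixel coordinates and an edge-function coverage test (one common choice among several):

    # Minimal sketch of object-order rendering (rasterization): loop over
    # primitives, find the pixels each one covers, and write those pixels.
    # The edge-function coverage test is one common choice.
    import numpy as np

    def edge(a, b, p):
        """Signed area: positive if p lies to the left of edge a->b."""
        return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

    def rasterize_triangle(framebuffer, v0, v1, v2, color):
        # Only visit pixels inside the triangle's bounding box.
        xmin = max(int(min(v0[0], v1[0], v2[0])), 0)
        xmax = min(int(max(v0[0], v1[0], v2[0])) + 1, framebuffer.shape[1])
        ymin = max(int(min(v0[1], v1[1], v2[1])), 0)
        ymax = min(int(max(v0[1], v1[1], v2[1])) + 1, framebuffer.shape[0])
        for y in range(ymin, ymax):
            for x in range(xmin, xmax):
                p = (x + 0.5, y + 0.5)         # sample at the pixel center
                if (edge(v0, v1, p) >= 0 and edge(v1, v2, p) >= 0
                        and edge(v2, v0, p) >= 0):
                    framebuffer[y, x] = color

    framebuffer = np.zeros((64, 64, 3))
    rasterize_triangle(framebuffer, (10, 10), (50, 15), (30, 55), (1.0, 0.5, 0.2))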

Rasterization is frequently faster than pixel-by-pixel rendering. First, large areas of the image may be empty of primitives; rasterization will ignore these areas, but pixel-by-pixel rendering must pass through them. Second, rasterization can improve cache coherency and reduce redundant work by taking advantage of the fact that the pixels occupied by a single primitive tend to be contiguous in the image. For these reasons, rasterization is usually the approach of choice when interactive rendering is required; however, the pixel-by-pixel approach can often produce higher-quality images and is more versatile because it does not depend on as many assumptions about the image as rasterization.

The older form of rasterization renders an entire face (primitive) as a single color. Alternatively, rasterization can first compute colors at the vertices of a face and then render the pixels of that face as a blend of the vertex colors. This version has overtaken the old method because it lets surfaces appear smooth without complex textures (a face-by-face rasterized image tends to look blocky unless covered in complex textures, since there is no gradual color change from one primitive to the next). This newer method exercises the graphics card's more taxing shading functions yet still achieves better performance, because the simpler textures stored in memory use less space. Sometimes designers will use one rasterization method on some faces and the other on others, based on the angle at which a face meets other joined faces, thus increasing speed without hurting the overall effect.

Ray casting

In ray casting, the geometry which has been modeled is parsed pixel by pixel, line by line, from the point of view outward, as if casting rays out from the point of view. Where an object is intersected, the color value at the point may be evaluated using several methods. In the simplest, the color value of the object at the point of intersection becomes the value of that pixel. The color may be determined from a texture map. A more sophisticated method is to modify the color value by an illumination factor, but without calculating the relationship to a simulated light source. To reduce artifacts, a number of rays in slightly different directions may be averaged.

Ray casting involves calculating the view direction (from the camera position) and incrementally following the cast ray through solid 3D objects in the scene, while accumulating the resulting value from each point in 3D space. This is related and similar to ray tracing, except that the ray is usually not "bounced" off surfaces (whereas ray tracing traces out the light's path, including bounces). "Ray casting" implies that the light ray follows a straight path (which may include travelling through semi-transparent objects). The ray cast is a vector that can originate from the camera or from the scene endpoint ("back to front", or "front to back"). Sometimes the final light value is derived from a transfer function and sometimes it is used directly.
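
A minimal ray-casting sketch along these lines, assuming a scene of spheres and a crude illumination factor rather than any simulated light source (all names are illustrative):

    # Minimal sketch of ray casting: one ray per pixel, cast from the point
    # of view; take the nearest intersection and modulate the object's color
    # by a simple illumination factor (no simulated light source).
    import numpy as np

    def hit_sphere(origin, direction, center, radius):
        """Distance along a unit-length ray to the sphere, or None."""
        oc = origin - center
        b = 2.0 * (oc @ direction)
        disc = b * b - 4.0 * (oc @ oc - radius * radius)
        if disc < 0:
            return None
        t = (-b - np.sqrt(disc)) / 2.0
        return t if t > 0 else None

    width, height = 64, 48
    image = np.zeros((height, width, 3))
    eye = np.array([0.0, 0.0, 0.0])
    spheres = [(np.array([0.0, 0.0, -3.0]), 1.0, np.array([1.0, 0.2, 0.2]))]

    for y in range(height):
        for x in range(width):
            # Ray through the pixel on a virtual image plane at z = -1.
            u = (x + 0.5) / width * 2.0 - 1.0
            v = 1.0 - (y + 0.5) / height * 2.0
            d = np.array([u, v, -1.0])
            d = d / np.linalg.norm(d)
            nearest, color, hit_center = None, None, None
            for center, radius, col in spheres:
                t = hit_sphere(eye, d, center, radius)
                if t is not None and (nearest is None or t < nearest):
                    nearest, color, hit_center = t, col, center
            if nearest is not None:
                n = eye + nearest * d - hit_center        # surface normal
                n = n / np.linalg.norm(n)                 # at the hit point
                image[y, x] = color * max(-(n @ d), 0.0)  # illumination factor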

Rough simulations of optical properties may be additionally employed: a simple calculation of the ray from the object to the point of view is made. Another calculation is made of the angle of incidence of light rays from the light source(s), and from these as well as the specified intensities of the light sources, the value of the pixel is calculated. Another simulation uses illumination plotted from a radiosity algorithm, or a combination of these two.

Ray tracing

Spiral Sphere and Julia, Detail, a computer-generated image created by visual artist Robert W. McGregor using only POV-Ray 3.6 and its built-in scene description language.

Ray tracing aims to simulate the natural flow of light, interpreted as particles. Often, ray tracing methods are utilized to approximate the solution to the rendering equation by applying Monte Carlo methods to it. Some of the most used methods are path tracing, bidirectional path tracing and Metropolis light transport, but semi-realistic methods, such as Whitted-style ray tracing or hybrids, are also in use. While most implementations let light propagate on straight lines, applications exist to simulate relativistic spacetime effects. [1]

In a final, production quality rendering of a ray traced work, multiple rays are generally shot for each pixel, and traced not just to the first object of intersection, but rather, through a number of sequential 'bounces', using the known laws of optics such as "angle of incidence equals angle of reflection" and more advanced laws that deal with refraction and surface roughness.

Once the ray either encounters a light source, or more probably once a set limiting number of bounces has been evaluated, the surface illumination at that final point is evaluated using the techniques described above, and the changes along the way through the various bounces are evaluated to estimate a value observed at the point of view. This is all repeated for each sample, for each pixel.
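
A minimal sketch of such a bounce loop, assuming mirror-like spheres, a crude local shading term and a fixed bounce limit (all scene details are illustrative):

    # Minimal sketch of recursive (Whitted-style) ray tracing over
    # reflective spheres: each ray is followed through a limited number of
    # mirror bounces ("angle of incidence equals angle of reflection").
    import numpy as np

    MAX_BOUNCES = 4

    def nearest_hit(o, d, spheres):
        """Nearest sphere intersection as (t, center, color, reflectivity)."""
        best = None
        for center, radius, color, refl in spheres:
            oc = o - center
            b = 2.0 * (oc @ d)
            disc = b * b - 4.0 * (oc @ oc - radius * radius)
            if disc >= 0:
                t = (-b - np.sqrt(disc)) / 2.0
                if t > 1e-4 and (best is None or t < best[0]):
                    best = (t, center, color, refl)
        return best

    def trace(o, d, spheres, depth=0):
        hit = nearest_hit(o, d, spheres)
        if hit is None or depth >= MAX_BOUNCES:
            return np.zeros(3)                 # ray escaped, or bounce limit
        t, center, color, refl = hit
        p = o + t * d
        n = (p - center) / np.linalg.norm(p - center)
        local = color * max(-(n @ d), 0.0)     # crude local shading
        if refl > 0:
            r = d - 2.0 * (d @ n) * n          # mirror reflection direction
            return local + refl * trace(p, r, spheres, depth + 1)
        return local

    spheres = [(np.array([0.0, 0.0, -3.0]), 1.0, np.array([0.9, 0.9, 0.9]), 0.5),
               (np.array([1.5, 0.0, -4.0]), 1.0, np.array([0.9, 0.2, 0.2]), 0.0)]
    pixel = trace(np.zeros(3), np.array([0.0, 0.0, -1.0]), spheres)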

In distribution ray tracing, at each point of intersection, multiple rays may be spawned. In path tracing, however, only a single ray or none is fired at each intersection, utilizing the statistical nature of Monte Carlo experiments.

As a brute-force method, ray tracing has been too slow to consider for real-time, and until recently too slow even to consider for short films of any degree of quality, although it has been used for special effects sequences, and in advertising, where a short portion of high quality (perhaps even photorealistic) footage is required.

However, efforts at optimizing to reduce the number of calculations needed in portions of a work where detail is not high or does not depend on ray tracing features have led to a realistic possibility of wider use of ray tracing. There is now some hardware accelerated ray tracing equipment, at least in prototype phase, and some game demos which show use of real-time software or hardware ray tracing.

Radiosity

Radiosity is a method which attempts to simulate the way in which directly illuminated surfaces act as indirect light sources that illuminate other surfaces. This produces more realistic shading and seems to better capture the 'ambience' of an indoor scene. A classic example is the way that shadows 'hug' the corners of rooms.

The optical basis of the simulation is that some diffused light from a given point on a given surface is reflected in a large spectrum of directions and illuminates the area around it.

The simulation technique may vary in complexity. Many renderings have a very rough estimate of radiosity, simply illuminating an entire scene very slightly with a factor known as ambiance. However, when advanced radiosity estimation is coupled with a high quality ray tracing algorithm, images may exhibit convincing realism, particularly for indoor scenes.

In advanced radiosity simulation, recursive, finite-element algorithms 'bounce' light back and forth between surfaces in the model, until some recursion limit is reached. The colouring of one surface in this way influences the colouring of a neighbouring surface, and vice versa. The resulting values of illumination throughout the model (sometimes including for empty spaces) are stored and used as additional inputs when performing calculations in a ray-casting or ray-tracing model.
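
A minimal sketch of such an iterative solve, assuming a tiny two-patch scene with a precomputed form-factor matrix (all values are illustrative):

    # Minimal sketch of an iterative radiosity solve: starting from each
    # patch's own emission, light is repeatedly "bounced" between patches
    # through a precomputed form-factor matrix until it converges or a
    # bounce limit is reached.
    import numpy as np

    form_factors = np.array([[0.0, 0.3],       # F[i][j]: fraction of light
                             [0.3, 0.0]])      # leaving j that reaches i
    reflectance = np.array([0.8, 0.5])         # diffuse reflectivity per patch
    emission = np.array([1.0, 0.0])            # patch 0 is a light source

    radiosity = emission.copy()
    for bounce in range(50):                   # recursion/bounce limit
        gathered = form_factors @ radiosity    # light arriving at each patch
        updated = emission + reflectance * gathered
        if np.max(np.abs(updated - radiosity)) < 1e-6:
            break
        radiosity = updated
    # 'radiosity' now holds viewpoint-independent illumination per patch,
    # usable as an input to a ray-casting or ray-tracing pass.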

Due to the iterative/recursive nature of the technique, complex objects are particularly slow to emulate. Prior to the standardization of rapid radiosity calculation, some digital artists used a technique referred to loosely as false radiosity by darkening areas of texture maps corresponding to corners, joints and recesses, and applying them via self-illumination or diffuse mapping for scanline rendering. Even now, advanced radiosity calculations may be reserved for calculating the ambiance of the room, from the light reflecting off walls, floor and ceiling, without examining the contribution that complex objects make to the radiosity—or complex objects may be replaced in the radiosity calculation with simpler objects of similar size and texture.

Radiosity calculations are viewpoint independent, which increases the computations involved but makes them useful for all viewpoints. If there is little rearrangement of radiosity objects in the scene, the same radiosity data may be reused for a number of frames, making radiosity an effective way to improve on the flatness of ray casting without seriously impacting the overall rendering time per frame.

Because of this, radiosity is a prime component of leading real-time rendering methods, and has been used from beginning to end to create a large number of well-known feature-length animated 3D films.

Sampling and filtering

One problem that any rendering system must deal with, no matter which approach it takes, is the sampling problem. Essentially, the rendering process tries to depict a continuous function from image space to colors by using a finite number of pixels. As a consequence of the Nyquist–Shannon sampling theorem (or Kotelnikov theorem), any spatial waveform that can be displayed must consist of at least two pixels, which is proportional to image resolution. In simpler terms, this expresses the idea that an image cannot display details, peaks or troughs in color or intensity, that are smaller than one pixel.

If a naive rendering algorithm is used without any filtering, high frequencies in the image function will cause ugly aliasing to be present in the final image. Aliasing typically manifests itself as jaggies, or jagged edges on objects where the pixel grid is visible. In order to remove aliasing, all rendering algorithms (if they are to produce good-looking images) must use some kind of low-pass filter on the image function to remove high frequencies, a process called antialiasing.
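
A minimal sketch of one common antialiasing strategy, supersampling, in which several samples per pixel are averaged so that the average acts as a simple (box) low-pass filter; the image function here is an illustrative stand-in for any of the renderers above:

    # Minimal sketch of antialiasing by supersampling: take several samples
    # per pixel and average them, so the average acts as a box low-pass
    # filter on the image function. sample_scene() stands in for any of the
    # rendering approaches above.
    import numpy as np

    def sample_scene(u, v):
        """Illustrative image function with a hard (high-frequency) edge."""
        return 1.0 if v > 0.3 * u + 0.2 else 0.0

    def render(width, height, n=4):            # n*n samples per pixel
        image = np.zeros((height, width))
        for y in range(height):
            for x in range(width):
                total = 0.0
                for sy in range(n):
                    for sx in range(n):
                        u = (x + (sx + 0.5) / n) / width
                        v = (y + (sy + 0.5) / n) / height
                        total += sample_scene(u, v)
                image[y, x] = total / (n * n)  # box-filtered pixel value
        return image

    antialiased = render(64, 64)               # edge pixels become gray ramps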

Optimization

Due to the large number of calculations, a work in progress is usually only rendered in detail appropriate to the portion of the work being developed at a given time, so in the initial stages of modeling, wireframe and ray casting may be used, even where the target output is ray tracing with radiosity. It is also common to render only parts of the scene at high detail, and to remove objects that are not important to what is currently being developed.

For real-time rendering, it is appropriate to use one or more simplified approximations and tune them to the exact parameters of the scenery in question, to get the most 'bang for the buck'.

Academic core

The implementation of a realistic renderer always has some basic element of physical simulation or emulation: some computation which resembles or abstracts a real physical process.

The term " physically based " indicates the use of physical models and approximations that are more general and widely accepted outside rendering. A particular set of related techniques have gradually become established in the rendering community.

The basic concepts are moderately straightforward, but intractable to calculate; and a single elegant algorithm or approach has been elusive for more general purpose renderers. In order to meet demands of robustness, accuracy and practicality, an implementation will be a complex combination of different techniques.

Rendering research is concerned with both the adaptation of scientific models and their efficient application.

The rendering equation

This is the key academic/theoretical concept in rendering. It serves as the most abstract formal expression of the non-perceptual aspect of rendering. All more complete algorithms can be seen as solutions to particular formulations of this equation.
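
In its usual form, due to Kajiya (1986), it can be written as:

    L_o(x, \omega_o) = L_e(x, \omega_o) + \int_{\Omega} f_r(x, \omega_i, \omega_o) \, L_i(x, \omega_i) \, (\omega_i \cdot n) \, d\omega_i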

Meaning: at a particular position and direction, the outgoing light (Lo) is the sum of the emitted light (Le) and the reflected light. The reflected light is itself the sum of the incoming light (Li) from all directions, multiplied by the surface reflection and incoming angle. By connecting outward light to inward light via an interaction point, this equation stands for the whole 'light transport' (all the movement of light) in a scene.

The bidirectional reflectance distribution function

The bidirectional reflectance distribution function (BRDF) expresses a simple model of light interaction with a surface. In the notation of the rendering equation above, it can be defined as:
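
    f_r(x, \omega_i, \omega_o) = \frac{dL_o(x, \omega_o)}{L_i(x, \omega_i) \, (\omega_i \cdot n) \, d\omega_i}

where the denominator is the differential irradiance arriving at x from direction \omega_i (the incoming radiance weighted by the incident angle).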

Light interaction is often approximated by the even simpler models of diffuse reflection and specular reflection, although both can also be expressed as BRDFs.
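
For example, an ideal (Lambertian) diffuse surface has a constant BRDF,

    f_r(x, \omega_i, \omega_o) = \frac{\rho}{\pi}

where \rho is the surface albedo, while an ideal mirror concentrates all reflected light into the single mirror direction.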

Geometric optics

Rendering is practically exclusively concerned with the particle aspect of light physics, known as geometrical optics. Treating light, at its basic level, as particles bouncing around is a simplification, but an appropriate one: the wave aspects of light are negligible in most scenes and are significantly more difficult to simulate. Notable wave-aspect phenomena include diffraction (as seen in the colours of CDs and DVDs) and polarisation (as seen in LCDs). Both types of effect, if needed, are simulated by an appearance-oriented adjustment of the reflection model.

Visual perception

Though it receives less attention, an understanding of human visual perception is valuable to rendering. This is mainly because image displays and human perception have restricted ranges. A renderer can simulate an almost infinite range of light brightness and color, but current displays (movie screens, computer monitors, etc.) cannot handle so much, and something must be discarded or compressed. Human perception also has limits, and so does not need to be given large-range images to create realism. This can help solve the problem of fitting images into displays and, furthermore, suggest which short-cuts could be used in the rendering simulation, since certain subtleties will not be noticeable. The related subject is tone mapping.
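
A minimal sketch of the idea, using the simple global operator L/(1+L) popularized by Reinhard et al. (input values are illustrative):

    # Minimal sketch of global tone mapping: compress unbounded
    # high-dynamic-range luminances into display range with the simple
    # Reinhard operator L / (1 + L). Input values are illustrative.
    import numpy as np

    hdr_luminance = np.array([0.05, 0.8, 3.0, 40.0, 900.0])  # scene radiances
    display = hdr_luminance / (1.0 + hdr_luminance)          # all now in [0, 1)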

Mathematics used in rendering includes: linear algebra, calculus, numerical mathematics, signal processing, and Monte Carlo methods.

Rendering for movies often takes place on a network of tightly connected computers known as a render farm.

The current state of the art in 3-D image description for movie creation is the mental ray scene description language designed at mental images and the RenderMan Shading Language designed at Pixar. [2] (Compare with simpler 3D file formats such as VRML, or APIs such as OpenGL and DirectX tailored for 3D hardware accelerators.)

Other renderers (including proprietary ones) can be and sometimes are used, but most other renderers tend to miss one or more of the often-needed features like good texture filtering, texture caching, programmable shaders, high-end geometry types like hair, subdivision or NURBS surfaces with tessellation on demand, geometry caching, ray tracing with geometry caching, high-quality shadow mapping, speed, or patent-free implementations. Other highly sought features these days may include interactive photorealistic rendering (IPR) and hardware rendering/shading.

Some renderers execute on the GPU instead of the CPU (e.g. FurryBall, Redshift, Octane). The parallelized nature of GPUs can be used for shorter render times. However, GPU renderers are constrained by the amount of video memory available.

Chronology of important published ideas

Rendering of an ESTCube-1 satellite


Related Research Articles

Global illumination: group of rendering algorithms used in 3D computer graphics

Global illumination, or indirect illumination, is a general name for a group of algorithms used in 3D computer graphics that are meant to add more realistic lighting to 3D scenes. Such algorithms take into account not only the light that comes directly from a light source, but also subsequent cases in which light rays from the same source are reflected by other surfaces in the scene, whether reflective or not.

Radiosity (computer graphics): computer graphics rendering method using diffuse reflection

In 3D computer graphics, radiosity is an application of the finite element method to solving the rendering equation for scenes with surfaces that reflect light diffusely. Unlike rendering methods that use Monte Carlo algorithms, which handle all types of light paths, typical radiosity only accounts for paths which leave a light source and are reflected diffusely some number of times before hitting the eye. Radiosity is a global illumination algorithm in the sense that the illumination arriving on a surface comes not just directly from the light sources, but also from other surfaces reflecting light. Radiosity is viewpoint independent, which increases the calculations involved but makes them useful for all viewpoints.

Ray tracing (graphics): rendering method

In computer graphics, ray tracing is a rendering technique for generating an image by tracing the path of light as pixels in an image plane and simulating the effects of its encounters with virtual objects. The technique is capable of producing a very high degree of visual realism, usually higher than that of typical scanline rendering methods, but at a greater computational cost. This makes ray tracing best suited for applications where taking a relatively long time to render a frame can be tolerated, such as in still images and film and television visual effects, and more poorly suited for real-time applications such as video games where speed is critical. Ray tracing is capable of simulating a wide variety of optical effects, such as reflection and refraction, scattering, and dispersion phenomena.

Scanline rendering: rendering method

Scanline rendering is an algorithm for visible surface determination, in 3D computer graphics, that works on a row-by-row basis rather than a polygon-by-polygon or pixel-by-pixel basis. All of the polygons to be rendered are first sorted by the top y coordinate at which they first appear, then each row or scan line of the image is computed using the intersection of a scanline with the polygons on the front of the sorted list, while the sorted list is updated to discard no-longer-visible polygons as the active scan line is advanced down the picture.

Rasterisation is the task of taking an image described in a vector graphics format (shapes) and converting it into a raster image. The rasterised image may then be displayed on a computer display, video display or printer, or stored in a bitmap file format. Rasterisation may refer to either the conversion of models into raster files, or the conversion of 2D rendering primitives such as polygons or line segments into a rasterized format.

In computer graphics, photon mapping is a two-pass global illumination algorithm developed by Henrik Wann Jensen that approximately solves the rendering equation. Rays from the light source and rays from the camera are traced independently until some termination criterion is met, then they are connected in a second step to produce a radiance value. It is used to realistically simulate the interaction of light with different objects. Specifically, it is capable of simulating the refraction of light through a transparent substance such as glass or water, diffuse interreflection between illuminated objects, the subsurface scattering of light in translucent materials, and some of the effects caused by particulate matter such as smoke or water vapor. It can also be extended to more accurate simulations of light such as spectral rendering.

Shading: depicting depth through varying levels of darkness

Shading refers to depicting depth perception in 3D models or illustrations by varying levels of darkness.

Ray casting

Ray casting is the use of ray–surface intersection tests to solve a variety of problems in computer graphics and computational geometry. The term was first used in computer graphics in a 1982 paper by Scott Roth to describe a method for rendering constructive solid geometry models.

Volume rendering: set of techniques used to display a 2D projection of a 3D discretely sampled data set, typically a 3D scalar field

In scientific visualization and computer graphics, volume rendering is a set of techniques used to display a 2D projection of a 3D discretely sampled data set, typically a 3D scalar field.

Reyes rendering is a computer software architecture used in 3D computer graphics to render photo-realistic images. It was developed in the mid-1980s by Loren Carpenter and Robert L. Cook at Lucasfilm's Computer Graphics Research Group, which is now Pixar. It was first used in 1982 to render images for the Genesis effect sequence in the movie Star Trek II: The Wrath of Khan. Pixar's RenderMan is one implementation of the Reyes algorithm. According to the original paper describing the algorithm, the Reyes image rendering system is "An architecture ... for fast high-quality rendering of complex images." Reyes was proposed as a collection of algorithms and data processing systems. However, the terms "algorithm" and "architecture" have come to be used synonymously and are used interchangeably in this article.

Ambient occlusion: computer graphics shading and rendering technique

In computer graphics, ambient occlusion is a shading and rendering technique used to calculate how exposed each point in a scene is to ambient lighting. For example, the interior of a tube is typically more occluded than the exposed outer surfaces, and the deeper you go inside the tube, the more occluded the lighting becomes. Ambient occlusion can be seen as an accessibility value that is calculated for each surface point. In scenes with open sky this is done by estimating the amount of visible sky for each point, while in indoor environments only objects within a certain radius are taken into account and the walls are assumed to be the origin of the ambient light. The result is a diffuse, non-directional shading effect that casts no clear shadows but that darkens enclosed and sheltered areas and can affect the rendered image's overall tone. It is often used as a post-processing effect.

Cone tracing and beam tracing are derivatives of the ray tracing algorithm that replace rays, which have no thickness, with thick rays.

Path tracing

Path tracing is a computer graphics Monte Carlo method of rendering images of three-dimensional scenes such that the global illumination is faithful to reality. Fundamentally, the algorithm integrates over all the illuminance arriving at a single point on the surface of an object. This illuminance is then reduced by a surface reflectance function (BRDF) to determine how much of it will go towards the viewpoint camera. This integration procedure is repeated for every pixel in the output image. When combined with physically accurate models of surfaces, accurate models of real light sources, and optically correct cameras, path tracing can produce still images that are indistinguishable from photographs.

In computer graphics, per-pixel lighting refers to any technique for lighting an image or scene that calculates illumination for each pixel on a rendered image. This is in contrast to other popular methods of lighting such as vertex lighting, which calculates illumination at each vertex of a 3D model and then interpolates the resulting values over the model's faces to calculate the final per-pixel color values.

3D rendering

3D rendering is the 3D computer graphics process of automatically converting 3D wire frame models into 2D images on a computer. 3D renders may include photorealistic effects or non-photorealistic rendering.

Computer graphics lighting refers to the simulation of light in computer graphics. This simulation can either be extremely accurate, as is the case in an application like Radiance which attempts to track the energy flow of light interacting with materials using radiosity computational techniques. Alternatively, the simulation can simply be inspired by light physics, as is the case with non-photorealistic rendering. In both cases, a shading model is used to describe how surfaces respond to light. Between these two extremes, there are many different rendering approaches which can be employed to achieve almost any desired visual result.

False Radiosity is a 3D computer graphics technique used to create texture mapping for objects that emulates patch interaction algorithms in radiosity rendering. Though practiced in some form since the late 90s, this term was coined only around 2002 by architect Andrew Hartness, then head of 3D and real-time design at Ateliers Jean Nouvel.

Quake Wars: Ray Traced

Quake Wars: Ray Traced is a research project from Intel Corporation that applied a ray tracing renderer to the game content of Enemy Territory: Quake Wars. The possibility of using ray tracing for this game in real time was demonstrated first on servers, then, after further progress, on workstations, and finally on an early prototype of the Larrabee hardware. After Quake 3: Ray Traced and Quake 4: Ray Traced, this is the third large project that embeds this technique in a modern game for research into alternative rendering algorithms. A successor project, Wolfenstein: Ray Traced, was created in 2010.

Glossary of computer graphics

This is a glossary of terms relating to computer graphics.

References

  1. "Relativistic Ray-Tracing: Simulating the Visual Appearance of Rapidly Moving Objects". 1995. CiteSeerX   10.1.1.56.830 .
  2. Raghavachary, Saty (30 July 2006). "A brief introduction to RenderMan". ACM SIGGRAPH 2006 Courses on - SIGGRAPH '06. ACM. p. 2. doi:10.1145/1185657.1185817. ISBN   978-1595933645 . Retrieved 7 May 2018 via dl.acm.org.
  3. Appel, A. (1968). "Some techniques for shading machine renderings of solids" (PDF). Proceedings of the Spring Joint Computer Conference. 32. pp. 37–49. Archived (PDF) from the original on 2012-03-13.
  4. Bouknight, W. J. (1970). "A procedure for generation of three-dimensional half-tone computer graphics presentations". Communications of the ACM. 13 (9): 527–536. doi:10.1145/362736.362739.
  5. Gouraud, H. (1971). "Continuous shading of curved surfaces" (PDF). IEEE Transactions on Computers. 20 (6): 623–629. doi:10.1109/t-c.1971.223313. Archived from the original (PDF) on 2010-07-02.
  6. University of Utah School of Computing, http://www.cs.utah.edu/school/history/#phong-ref Archived 2013-09-03 at the Wayback Machine
  7. Phong, B-T (1975). "Illumination for computer generated pictures" (PDF). Communications of the ACM. 18 (6): 311–316. CiteSeerX 10.1.1.330.4718. doi:10.1145/360825.360839. Archived from the original (PDF) on 2012-03-27.
  8. Bui Tuong Phong, Illumination for computer generated pictures Archived 2016-03-20 at the Wayback Machine, Communications of ACM 18 (1975), no. 6, 311–317.
  9. Putas. "The way to home 3d". vintage3d.org. Archived from the original on 15 December 2017. Retrieved 7 May 2018.
  10. Catmull, E. (1974). A subdivision algorithm for computer display of curved surfaces (PDF) (PhD thesis). University of Utah. Archived (PDF) from the original on 2014-11-14.
  11. Blinn, J.F.; Newell, M.E. (1976). "Texture and reflection in computer generated images". Communications of the ACM. 19 (10): 542–546. CiteSeerX   10.1.1.87.8903 . doi:10.1145/360349.360353.
  12. Blinn, James F. (20 July 1977). "Models of light reflection for computer synthesized pictures". ACM SIGGRAPH Computer Graphics. 11 (2): 192–198. doi:10.1145/965141.563893 via dl.acm.org.
  13. "Bomber - Videogame by Sega". www.arcade-museum.com. Retrieved 7 May 2018.
  14. Crow, F.C. (1977). "Shadow algorithms for computer graphics" (PDF). Computer Graphics (Proceedings of SIGGRAPH 1977). 11. pp. 242–248. Archived from the original (PDF) on 2012-01-13. Retrieved 2011-07-15.
  15. Williams, L. (1978). "Casting curved shadows on curved surfaces". Computer Graphics (Proceedings of SIGGRAPH 1978). 12. pp. 270–274. CiteSeerX   10.1.1.134.8225 .
  16. Blinn, J.F. (1978). Simulation of wrinkled surfaces (PDF). Computer Graphics (Proceedings of SIGGRAPH 1978). 12. pp. 286–292. Archived (PDF) from the original on 2012-01-21.
  17. Wolf, Mark J. P. (15 June 2012). Before the Crash: Early Video Game History. Wayne State University Press. ISBN   978-0814337226 . Retrieved 7 May 2018 via Google Books.
  18. Fuchs, H.; Kedem, Z.M.; Naylor, B.F. (1980). On visible surface generation by a priori tree structures. Computer Graphics (Proceedings of SIGGRAPH 1980). 14. pp. 124–133. CiteSeerX   10.1.1.112.4406 .
  19. Whitted, T. (1980). "An improved illumination model for shaded display". Communications of the ACM. 23 (6): 343–349. CiteSeerX   10.1.1.114.7629 . doi:10.1145/358876.358882.
  20. Purcaru, Bogdan Ion (13 March 2014). "Games vs. Hardware. The History of PC video games: The 80's". Purcaru Ion Bogdan. Retrieved 7 May 2018 via Google Books.
  21. "System 16 - Sega VCO Object Hardware (Sega)". www.system16.com. Archived from the original on 5 April 2016. Retrieved 7 May 2018.
  22. Cook, R.L.; Torrance, K.E. (1981). A reflectance model for computer graphics. Computer Graphics (Proceedings of SIGGRAPH 1981). 15. pp. 307–316. CiteSeerX   10.1.1.88.7796 .
  23. Williams, L. (1983). Pyramidal parametrics. Computer Graphics (Proceedings of SIGGRAPH 1983). 17. pp. 1–11. CiteSeerX   10.1.1.163.6298 .
  24. Glassner, A.S. (1984). "Space subdivision for fast ray tracing". IEEE Computer Graphics & Applications. 4 (10): 15–22. doi:10.1109/mcg.1984.6429331.
  25. Porter, T.; Duff, T. (1984). Compositing digital images (PDF). Computer Graphics (Proceedings of SIGGRAPH 1984). 18. pp. 253–259. Archived (PDF) from the original on 2015-02-16.
  26. Cook, R.L.; Porter, T.; Carpenter, L. (1984). Distributed ray tracing (PDF). Computer Graphics (Proceedings of SIGGRAPH 1984). 18. pp. 137–145.[ permanent dead link ]
  27. Goral, C.; Torrance, K.E.; Greenberg, D.P.; Battaile, B. (1984). Modeling the interaction of light between diffuse surfaces. Computer Graphics (Proceedings of SIGGRAPH 1984). 18. pp. 213–222. CiteSeerX   10.1.1.112.356 .
  28. "Archived copy". Archived from the original on 2016-03-04. Retrieved 2016-08-08.CS1 maint: Archived copy as title (link)
  29. Cohen, M.F.; Greenberg, D.P. (1985). The hemi-cube: a radiosity solution for complex environments (PDF). Computer Graphics (Proceedings of SIGGRAPH 1985). 19. pp. 31–40. doi:10.1145/325165.325171. Archived (PDF) from the original on 2014-04-24.
  30. Arvo, J. (1986). Backward ray tracing. SIGGRAPH 1986 Developments in Ray Tracing course notes. CiteSeerX   10.1.1.31.581 .
  31. Kajiya, J. (1986). The rendering equation. Computer Graphics (Proceedings of SIGGRAPH 1986). 20. pp. 143–150. CiteSeerX   10.1.1.63.1402 .
  32. Cook, R.L.; Carpenter, L.; Catmull, E. (1987). The Reyes image rendering architecture (PDF). Computer Graphics (Proceedings of SIGGRAPH 1987). 21. pp. 95–102. Archived (PDF) from the original on 2011-07-15.
  33. 1 2 3 "Archived copy". Archived from the original on 2014-10-03. Retrieved 2014-10-02.CS1 maint: Archived copy as title (link)
  34. Wu, Xiaolin (July 1991). An efficient antialiasing technique. Computer Graphics . 25. pp. 143–152. doi:10.1145/127719.122734. ISBN   978-0-89791-436-9.
  35. Wu, Xiaolin (1991). "Fast Anti-Aliased Circle Generation". In James Arvo (Ed.). Graphics Gems II. San Francisco: Morgan Kaufmann. pp. 446–450. ISBN   978-0-12-064480-3.CS1 maint: Extra text: editors list (link)
  36. Hanrahan, P.; Salzman, D.; Aupperle, L. (1991). A rapid hierarchical radiosity algorithm. Computer Graphics (Proceedings of SIGGRAPH 1991). 25. pp. 197–206. CiteSeerX   10.1.1.93.5694 .
  37. "IGN Presents the History of SEGA". ign.com. 21 April 2009. Archived from the original on 16 March 2018. Retrieved 7 May 2018.
  38. "System 16 - Sega Model 2 Hardware (Sega)". www.system16.com. Archived from the original on 21 December 2010. Retrieved 7 May 2018.
  39. 1 2 3 4 "System 16 - Namco Magic Edge Hornet Simulator Hardware (Namco)". www.system16.com. Archived from the original on 12 September 2014. Retrieved 7 May 2018.
  40. M. Oren and S.K. Nayar, "Generalization of Lambert's Reflectance Model Archived 2010-02-15 at the Wayback Machine ". SIGGRAPH. pp.239-246, Jul, 1994
  41. Tumblin, J.; Rushmeier, H.E. (1993). "Tone reproduction for realistic computer generated images" (PDF). IEEE Computer Graphics & Applications. 13 (6): 42–48. doi:10.1109/38.252554. Archived (PDF) from the original on 2011-12-08.
  42. Hanrahan, P.; Krueger, W. (1993). Reflection from layered surfaces due to subsurface scattering. Computer Graphics (Proceedings of SIGGRAPH 1993). 27. pp. 165–174. CiteSeerX   10.1.1.57.9761 .
  43. Miller, Gavin (24 July 1994). "Efficient algorithms for local and global accessibility shading". Proceedings of the 21st annual conference on Computer graphics and interactive techniques - SIGGRAPH '94. ACM. pp. 319–326. doi:10.1145/192161.192244. ISBN   978-0897916677 . Retrieved 7 May 2018 via dl.acm.org.
  44. "Archived copy" (PDF). Archived (PDF) from the original on 2016-10-11. Retrieved 2016-08-08.CS1 maint: Archived copy as title (link)
  45. Jensen, H.W.; Christensen, N.J. (1995). "Photon maps in bidirectional monte carlo ray tracing of complex objects". Computers & Graphics. 19 (2): 215–224. CiteSeerX   10.1.1.97.2724 . doi:10.1016/0097-8493(94)00145-o.
  46. "System 16 - Sega Model 3 Step 1.0 Hardware (Sega)". www.system16.com. Archived from the original on 6 October 2014. Retrieved 7 May 2018.
  47. Veach, E.; Guibas, L. (1997). Metropolis light transport. Computer Graphics (Proceedings of SIGGRAPH 1997). 16. pp. 65–76. CiteSeerX   10.1.1.88.944 .
  48. Keller, A. (1997). Instant Radiosity. Computer Graphics (Proceedings of SIGGRAPH 1997). 24. pp. 49–56. CiteSeerX   10.1.1.15.240 .
  49. https://web.archive.org/web/20070811102018/http://www3.sharkyextreme.com/hardware/reviews/video/neon250/2.shtml
  50. Lewis, J. P.; Cordner, Matt; Fong, Nickson (1 July 2000). "Pose space deformation". Pose space deformation: a unified approach to shape interpolation and skeleton-driven deformation. ACM Press/Addison-Wesley Publishing Co. pp. 165–172. doi:10.1145/344779.344862. ISBN   978-1581132083 . Retrieved 7 May 2018 via dl.acm.org.
  51. Sloan, P.; Kautz, J.; Snyder, J. (2002). Precomputed Radiance Transfer for Real-Time Rendering in Dynamic, Low Frequency Lighting Environments (PDF). Computer Graphics (Proceedings of SIGGRAPH 2002). 29. pp. 527–536. Archived from the original (PDF) on 2011-07-24.

Further reading