Rendering (computer graphics)

An image rendered using POV-Ray 3.6
An architectural visualization rendered in multiple styles using Blender

Rendering is the process of generating a photorealistic or non-photorealistic image from input data such as 3D models. The word "rendering" (in one of its senses) originally meant the task performed by an artist when depicting a real or imaginary thing (the finished artwork is also called a "rendering"). Today, to "render" commonly means to generate an image or video from a precise description (often created by an artist) using a computer program. [1] [2] [3] [4]

A software application or component that performs rendering is called a rendering engine, [5] render engine, rendering system, graphics engine, or simply a renderer.

A distinction is made between real-time rendering, in which images are generated and displayed immediately (ideally fast enough to give the impression of motion or animation), and offline rendering (sometimes called pre-rendering) in which images, or film or video frames, are generated for later viewing. Offline rendering can use a slower and higher-quality renderer. Interactive applications such as games must primarily use real-time rendering, although they may incorporate pre-rendered content.

Rendering can produce images of scenes or objects defined using coordinates in 3D space, seen from a particular viewpoint. Such 3D rendering uses knowledge and ideas from optics, the study of visual perception, mathematics, and software engineering, and it has applications such as video games, simulators, visual effects for films and television, design visualization, and medical diagnosis. Realistic 3D rendering requires finding approximate solutions to the rendering equation, which describes how light propagates in an environment.

Real-time rendering uses high-performance rasterization algorithms that process a list of shapes and determine which pixels are covered by each shape. When more realism is required (e.g. for architectural visualization or visual effects) slower pixel-by-pixel algorithms such as ray tracing are used instead. (Ray tracing can also be used selectively during rasterized rendering to improve the realism of lighting and reflections.) A type of ray tracing called path tracing is currently the most common technique for photorealistic rendering. Path tracing is also popular for generating high-quality non-photorealistic images, such as frames for 3D animated films. Both rasterization and ray tracing can be sped up ("accelerated") by specially designed microprocessors called GPUs.

Rasterization algorithms are also used to render images containing only 2D shapes such as polygons and text. Applications of this type of rendering include digital illustration, graphic design, 2D animation, desktop publishing and the display of user interfaces.

Historically, rendering was called image synthesis [6] :xxi but today this term is likely to mean AI image generation. [7] The term "neural rendering" is sometimes used when a neural network is the primary means of generating an image but some degree of control over the output image is provided. [8] Neural networks can also assist rendering without replacing traditional algorithms, e.g. by removing noise from path traced images.

Features

A rendered image can be understood in terms of a number of visible features. Rendering research and development has been largely motivated by finding ways to simulate these efficiently. Some relate directly to particular algorithms and techniques, while others are produced together.

Inputs

Before a 3D scene or 2D image can be rendered, it must be described in a way that the rendering software can understand. Historically, inputs for both 2D and 3D rendering were usually text files, which are easier than binary files for humans to edit and debug. For 3D graphics, text formats have largely been supplanted by more efficient binary formats, and by APIs which allow interactive applications to communicate directly with a rendering component without generating a file on disk (although a scene description is usually still created in memory prior to rendering). [9] :1.2, 3.2.6, 3.3.1, 3.3.7

Traditional rendering algorithms use geometric descriptions of 3D scenes or 2D images. Applications and algorithms that render visualizations of data scanned from the real world, or scientific simulations, may require different types of input data.

The PostScript format (which is often credited with the rise of desktop publishing) provides a standardized, interoperable way to describe 2D graphics and page layout. The Scalable Vector Graphics (SVG) format is also text-based, and the PDF format uses the PostScript language internally. In contrast, although many 3D graphics file formats have been standardized (including text-based formats such as VRML and X3D), different rendering applications typically use formats tailored to their needs, and this has led to a proliferation of proprietary and open formats, with binary files being more common. [9] :3.2.3, 3.2.5, 3.3.7 [10] :vii [11] [12] :16.5.2. [13]

2D vector graphics

A vector graphics image description may include: [10] [11]

3D geometry

A geometric scene description may include: [9] :Ch. 4-7, 8.7 [14]

Many file formats exist for storing individual 3D objects or "models". These can be imported into a larger scene, or loaded on-demand by rendering software or games. A realistic scene may require hundreds of items like household objects, vehicles, and trees, and 3D artists often utilize large libraries of models. In game production, these models (along with other data such as textures, audio files, and animations) are referred to as "assets". [13] [15] :Ch. 4

Volumetric data

Scientific and engineering visualization often requires rendering volumetric data generated by 3D scans or simulations. Perhaps the most common source of such data is medical CT and MRI scans, which need to be rendered for diagnosis. Volumetric data can be extremely large, and requires specialized data formats to store it efficiently, particularly if the volume is sparse (with empty regions that do not contain data). [16] :14.3.1 [17] [18]

Before rendering, level sets for volumetric data can be extracted and converted into a mesh of triangles, e.g. by using the marching cubes algorithm. Algorithms have also been developed that work directly with volumetric data, for example to render realistic depictions of the way light is scattered and absorbed by clouds and smoke, and this type of volumetric rendering is used extensively in visual effects for movies. When rendering lower-resolution volumetric data without interpolation, the individual cubes or "voxels" may be visible, an effect sometimes used deliberately for game graphics. [19] :4.6 [16] :13.10, Ch. 14, 16.1

Photogrammetry and scanning

Photographs of real world objects can be incorporated into a rendered scene by using them as textures for 3D objects. Photos of a scene can also be stitched together to create panoramic images or environment maps, which allow the scene to be rendered very efficiently but only from a single viewpoint. Scanning of real objects and scenes using structured light or lidar produces point clouds consisting of the coordinates of millions of individual points in space, sometimes along with color information. These point clouds may either be rendered directly or converted into meshes before rendering. (Note: "point cloud" sometimes also refers to a minimalist rendering style that can be used for any 3D geometry, similar to wireframe rendering.) [16] :13.3, 13.9 [9] :1.3

Neural approximations and light fields

A more recent, experimental approach is the description of scenes using radiance fields, which define the color, intensity, and direction of incoming light at each point in space. (This is conceptually similar to, but not identical to, the light field recorded by a hologram.) For any useful resolution, the amount of data in a radiance field is so large that it is impractical to represent it directly as volumetric data, and an approximation function must be found. Neural networks are typically used to generate and evaluate these approximations, sometimes using video frames, or a collection of photographs of a scene taken at different angles, as "training data". [20] [21]

Algorithms related to neural networks have recently been used to find approximations of a scene as 3D Gaussians. The resulting representation is similar to a point cloud, except that it uses fuzzy, partially-transparent blobs of varying dimensions and orientations instead of points. As with neural radiance fields, these approximations are often generated from photographs or video frames. [22]

Outputs

The output of rendering may be displayed immediately on the screen (many times a second, in the case of real-time rendering such as games) or saved in a raster graphics file format such as JPEG or PNG. High-end rendering applications commonly use the OpenEXR file format, which can represent finer gradations of colors and high dynamic range lighting, allowing tone mapping or other adjustments to be applied afterwards without loss of quality. [23] [24] :Ch. 14, Ap. B

Quickly rendered animations can be saved directly as video files, but for high-quality rendering, individual frames (which may be rendered by different computers in a cluster or render farm and may take hours or even days to render) are output as separate files and combined later into a video clip. [25] [15] :1.5, 3.11, 8.11

The output of a renderer sometimes includes more than just RGB color values. For example, the spectrum can be sampled using multiple wavelengths of light, or additional information such as depth (distance from camera) or the material of each point in the image can be included (this data can be used during compositing or when generating texture maps for real-time rendering, or used to assist in removing noise from a path-traced image). Transparency information can be included, allowing rendered foreground objects to be composited with photographs or video. It is also sometimes useful to store the contributions of different lights, or of specular and diffuse lighting, as separate channels, so lighting can be adjusted after rendering. The OpenEXR format allows storing many channels of data in a single file. [23] [24] :Ch. 14, Ap. B

Techniques

Choosing how to render a 3D scene usually involves trade-offs between speed, memory usage, and realism (although realism is not always desired). The algorithms developed over the years follow a loose progression, with more advanced methods becoming practical as computing power and memory capacity increased. Multiple techniques may be used for a single final image.

An important distinction is between image order algorithms, which iterate over pixels of the image plane, and object order algorithms, which iterate over objects in the scene. For simple scenes, object order is usually more efficient, as there are fewer objects than pixels. [26] :Ch. 4

2D vector graphics
The vector displays of the 1960s-1970s used deflection of an electron beam to draw line segments directly on the screen. Nowadays, vector graphics are rendered by rasterization algorithms that also support filled shapes. In principle, any 2D vector graphics renderer can be used to render 3D objects by first projecting them onto a 2D image plane. [27] :93, 431, 505, 553
3D rasterization
Adapts 2D rasterization algorithms so they can be used more efficiently for 3D rendering, handling hidden surface removal via scanline or z-buffer techniques. Different realistic or stylized effects can be obtained by coloring the pixels covered by the objects in different ways. Surfaces are typically divided into meshes of triangles before being rasterized. Rasterization is usually synonymous with "object order" rendering (as described above). [27] :560-561, 575-590 [9] :8.5 [26] :Ch. 9
Ray casting
Uses geometric formulas to compute the first object that a ray intersects. [28] :8 It can be used to implement "image order" rendering by casting a ray for each pixel, and finding a corresponding point in the scene. Ray casting is a fundamental operation used for both graphical and non-graphical purposes, [29] :6 e.g. determining whether a point is in shadow, or checking what an enemy can see in a game.
Ray tracing
Simulates the bouncing paths of light caused by specular reflection and refraction, requiring a varying number of ray casting operations for each path. Advanced forms use Monte Carlo techniques to render effects such as area lights, depth of field, blurry reflections, and soft shadows, but computing global illumination is usually in the domain of path tracing. [28] :9-13 [30]
Radiosity
A finite element analysis approach that breaks surfaces in the scene into pieces, and estimates the amount of light that each piece receives from light sources, or indirectly from other surfaces. Once the irradiance of each surface is known, the scene can be rendered using rasterization or ray tracing. [6] :888-890, 1044-1045
Path tracing
Uses Monte Carlo integration with a simplified form of ray tracing, computing the average brightness of a sample of the possible paths that a photon could take when traveling from a light source to the camera (for some images, thousands of paths need to be sampled per pixel [29] :8). It was introduced as a statistically unbiased way to solve the rendering equation, giving ray tracing a rigorous mathematical foundation. [31] [28] :11-13

Each of the above approaches has many variations, and there is some overlap. Path tracing may be considered either a distinct technique or a particular type of ray tracing. [6] :846, 1021 Note that the usage of terminology related to ray tracing and path tracing has changed significantly over time. [28] :7

Rendering of a fractal terrain by ray marching

Ray marching is a family of algorithms, used by ray casting, for finding intersections between a ray and a complex object, such as a volumetric dataset or a surface defined by a signed distance function. It is not, by itself, a rendering method, but it can be incorporated into ray tracing and path tracing, and is used by rasterization to implement screen-space reflection and other effects. [28] :13
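
As an illustration, a minimal sphere-tracing ray marcher can be sketched as follows; the signed distance function, step limits, and tolerances below are illustrative assumptions, not taken from the cited sources.

```python
import math

# Minimal sphere-tracing sketch: march along a ray through a signed distance field.
# The scene is a single sphere; all constants are illustrative.

def sdf_sphere(p, center=(0.0, 0.0, 5.0), radius=1.0):
    """Signed distance from point p to the surface of a sphere."""
    dx, dy, dz = (p[i] - center[i] for i in range(3))
    return math.sqrt(dx * dx + dy * dy + dz * dz) - radius

def ray_march(origin, direction, max_steps=128, hit_eps=1e-4, max_dist=100.0):
    """Advance along the (normalized) ray by the SDF value until a surface is hit."""
    t = 0.0
    for _ in range(max_steps):
        p = tuple(origin[i] + t * direction[i] for i in range(3))
        d = sdf_sphere(p)
        if d < hit_eps:      # close enough to the surface: report the hit distance
            return t
        t += d               # safe step: no surface can be closer than d
        if t > max_dist:
            break
    return None              # ray escaped without hitting anything
```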

A technique called photon mapping traces paths of photons from a light source to an object, accumulating data about irradiance which is then used during conventional ray tracing or path tracing. [6] :1037-1039 Rendering a scene using only rays traced from the light source to the camera is impractical, even though it corresponds more closely to reality, because a huge number of photons would need to be simulated, only a tiny fraction of which actually hit the camera. [32] :7-9 [27] :587

Some authors call conventional ray tracing "backward" ray tracing because it traces the paths of photons backwards from the camera to the light source, and call following paths from the light source (as in photon mapping) "forward" ray tracing. [32] :7-9 However sometimes the meaning of these terms is reversed. [33] Tracing rays starting at the light source can also be called particle tracing or light tracing, which avoids this ambiguity. [34] :92 [35] :4.5.4

Real-time rendering, including video game graphics, typically uses rasterization, but increasingly combines it with ray tracing and path tracing. [29] :2 To enable realistic global illumination, real-time rendering often relies on pre-rendered ("baked") lighting for stationary objects. For moving objects, it may use a technique called light probes, in which lighting is recorded by rendering omnidirectional views of the scene at chosen points in space (often points on a grid to allow easier interpolation). These are similar to environment maps, but typically use a very low resolution or an approximation such as spherical harmonics. [36] (Note: Blender uses the term 'light probes' for a more general class of pre-recorded lighting data, including reflection maps. [37] )

Rasterization

An architectural visualization of the Extremely Large Telescope from 2009, likely rendered using a combination of techniques

The term rasterization (in a broad sense) encompasses many techniques used for 2D rendering and real-time 3D rendering. 3D animated films were rendered by rasterization before ray tracing and path tracing became practical.

A renderer combines rasterization with geometry processing (which is not specific to rasterization) and pixel processing which computes the RGB color values to be placed in the framebuffer for display. [16] :2.1 [26] :9

The main tasks of rasterization (including pixel processing) are: [16] :2, 3.8, 23.1.1

3D rasterization is typically part of a graphics pipeline in which an application provides lists of triangles to be rendered, and the rendering system transforms and projects their coordinates, determines which triangles are potentially visible in the viewport, and performs the above rasterization and pixel processing tasks before displaying the final result on the screen. [16] :2.1 [26] :9

Historically, 3D rasterization used algorithms like the Warnock algorithm and scanline rendering (also called "scan-conversion"), which can handle arbitrary polygons and can rasterize many shapes simultaneously. Although such algorithms are still important for 2D rendering, 3D rendering now usually divides shapes into triangles and rasterizes them individually using simpler methods. [38] [39] [27] :456,561–569

High-performance algorithms exist for rasterizing 2D lines, including anti-aliased lines, as well as ellipses and filled triangles. An important special case of 2D rasterization is text rendering, which requires careful anti-aliasing and rounding of coordinates to avoid distorting the letterforms and preserve spacing, density, and sharpness. [26] :9.1.1 [40]

After 3D coordinates have been projected onto the image plane, rasterization is primarily a 2D problem, but the 3rd dimension necessitates hidden surface removal. Early computer graphics used geometric algorithms or ray casting to remove the hidden portions of shapes, or used the painter's algorithm, which sorts shapes by depth (distance from camera) and renders them from back to front. Depth sorting was later avoided by incorporating depth comparison into the scanline rendering algorithm. The z-buffer algorithm performs the comparisons indirectly by including a depth or "z" value in the framebuffer. A pixel is only covered by a shape if that shape's z value is lower (indicating closer to the camera) than the z value currently in the buffer. The z-buffer requires additional memory (an expensive resource at the time it was invented) but simplifies the rasterization code and permits multiple passes. Memory is now faster and more plentiful, and a z-buffer is almost always used for real-time rendering. [41] [42] [27] :553–570 [16] :2.5.2
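
A minimal sketch of the depth test at the heart of the z-buffer might look like the following; the buffer sizes and the fragment-generation step are assumptions for illustration, not details given by the sources cited above.

```python
# Per-pixel depth test of the z-buffer: keep only the nearest fragment seen so far.
# Generating fragments (x, y, z, color) by rasterizing triangles is assumed to happen elsewhere.
WIDTH, HEIGHT = 640, 480
depth_buffer = [[float("inf")] * WIDTH for _ in range(HEIGHT)]
frame_buffer = [[(0, 0, 0)] * WIDTH for _ in range(HEIGHT)]   # background color

def write_fragment(x, y, z, color):
    """Overwrite the pixel only if this fragment is closer than what is already stored."""
    if z < depth_buffer[y][x]:
        depth_buffer[y][x] = z
        frame_buffer[y][x] = color
```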

A drawback of the basic z-buffer algorithm is that each pixel ends up either entirely covered by a single object or filled with the background color, causing jagged edges in the final image. Early anti-aliasing approaches addressed this by detecting when a pixel is partially covered by a shape, and calculating the covered area. The A-buffer and other sub-pixel and multi-sampling techniques solve the problem less precisely but with higher performance. For real-time 3D graphics, it has become common to use complicated heuristics (and even neural networks) to perform anti-aliasing. [42] [43] [26] :9.3 [16] :5.4.2

In 3D rasterization, color is usually determined by a pixel shader or fragment shader, a small program that is run for each pixel. The shader does not (or cannot) directly access 3D data for the entire scene (this would be very slow, and would result in an algorithm similar to ray tracing) and a variety of techniques have been developed to render effects like shadows and reflections using only texture mapping and multiple passes. [26] :17.8

Older and more basic 3D rasterization implementations did not support shaders, and used simple shading techniques such as flat shading (lighting is computed once for each triangle, which is then rendered entirely in one color), Gouraud shading (lighting is computed using normal vectors defined at vertices and then colors are interpolated across each triangle), or Phong shading (normal vectors are interpolated across each triangle and lighting is computed for each pixel). [26] :9.2
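
The difference between these three techniques can be sketched as follows, using simple Lambertian (diffuse) lighting; the vector helpers and barycentric weights are simplified assumptions for illustration.

```python
# Flat, Gouraud, and Phong shading of one triangle, using Lambertian lighting.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def lambert(normal, light_dir):
    return max(0.0, dot(normal, light_dir))   # both vectors assumed normalized

def flat_shade(face_normal, light_dir):
    # one lighting calculation for the whole triangle
    return lambert(face_normal, light_dir)

def gouraud_shade(vertex_normals, bary, light_dir):
    # lighting at each vertex, then interpolate the resulting intensities across the triangle
    intensities = [lambert(n, light_dir) for n in vertex_normals]
    return sum(w * i for w, i in zip(bary, intensities))

def phong_shade(vertex_normals, bary, light_dir):
    # interpolate the normal across the triangle, then do the lighting calculation per pixel
    n = [sum(w * vn[k] for w, vn in zip(bary, vertex_normals)) for k in range(3)]
    length = dot(n, n) ** 0.5
    n = [c / length for c in n]
    return lambert(n, light_dir)
```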

Until relatively recently, Pixar used rasterization for rendering its animated films. Unlike the renderers commonly used for real-time graphics, the Reyes rendering system in Pixar's RenderMan software was optimized for rendering very small (pixel-sized) polygons, and incorporated stochastic sampling techniques more typically associated with ray tracing. [9] :2, 6.3 [44]

Ray casting

One of the simplest ways to render a 3D scene is to test if a ray starting at the viewpoint (the "eye" or "camera") intersects any of the geometric shapes in the scene, repeating this test using a different ray direction for each pixel. This method, called ray casting, was important in early computer graphics, and is a fundamental building block for more advanced algorithms. Ray casting can be used to render shapes defined by constructive solid geometry (CSG) operations. [28] :8-9 [45] :246–249
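
A minimal ray caster along these lines might look like the sketch below, with one ray per pixel and an analytic ray-sphere intersection test; the pinhole camera and single-sphere scene are illustrative assumptions.

```python
import math

def intersect_sphere(origin, direction, center, radius):
    """Distance t to the first ray-sphere intersection, or None (direction assumed normalized)."""
    oc = [origin[i] - center[i] for i in range(3)]
    b = 2.0 * sum(direction[i] * oc[i] for i in range(3))
    c = sum(x * x for x in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0.0 else None

WIDTH, HEIGHT = 80, 60
image = []
for y in range(HEIGHT):
    row = []
    for x in range(WIDTH):
        # Pinhole camera: rays start at the origin and pass through an image plane at z = 1.
        dx, dy = x / WIDTH - 0.5, 0.5 - y / HEIGHT
        length = math.sqrt(dx * dx + dy * dy + 1.0)
        direction = (dx / length, dy / length, 1.0 / length)
        hit = intersect_sphere((0.0, 0.0, 0.0), direction, (0.0, 0.0, 5.0), 1.0)
        row.append(1.0 if hit is not None else 0.0)   # white where the sphere is visible
    image.append(row)
```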

Early ray casting experiments include the work of Arthur Appel in the 1960s. Appel rendered shadows by casting an additional ray from each visible surface point towards a light source. He also tried rendering the density of illumination by casting random rays from the light source towards the object and plotting the intersection points (similar to the later technique called photon mapping). [46]

Ray marching can be used to find the first intersection of a ray with an intricate shape such as this Mandelbulb fractal.

When rendering scenes containing many objects, testing the intersection of a ray with every object becomes very expensive. Special data structures are used to speed up this process by allowing large numbers of objects to be excluded quickly (such as objects behind the camera). These structures are analogous to database indexes for finding the relevant objects. The most common are the bounding volume hierarchy (BVH), which stores a pre-computed bounding box or sphere for each branch of a tree of objects, and the k-d tree which recursively divides space into two parts. Recent GPUs include hardware acceleration for BVH intersection tests. K-d trees are a special case of binary space partitioning, which was frequently used in early computer graphics (it can also generate a rasterization order for the painter's algorithm). Octrees, another historically popular technique, are still often used for volumetric data. [29] :16–17 [47] [45] [12] :36.2
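
The core operation of BVH traversal is a quick ray-versus-bounding-box test; one common form is the "slab" test sketched below (a sketch, assuming the caller precomputes the reciprocal ray direction, with IEEE infinities handling axis-parallel rays).

```python
def ray_intersects_aabb(origin, inv_direction, box_min, box_max):
    """Slab test: the ray hits the box if its entry/exit intervals overlap on all three axes."""
    t_near, t_far = 0.0, float("inf")
    for axis in range(3):
        t1 = (box_min[axis] - origin[axis]) * inv_direction[axis]
        t2 = (box_max[axis] - origin[axis]) * inv_direction[axis]
        t_near = max(t_near, min(t1, t2))
        t_far = min(t_far, max(t1, t2))
    return t_near <= t_far   # if a node's box is missed, its whole subtree can be skipped
```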

Geometric formulas are sufficient for finding the intersection of a ray with shapes like spheres, polygons, and polyhedra, but for most curved surfaces there is no analytic solution, or the intersection is difficult to compute accurately using limited precision floating point numbers. Root-finding algorithms such as Newton's method can sometimes be used. To avoid these complications, curved surfaces are often approximated as meshes of triangles. Volume rendering (e.g. rendering clouds and smoke), and some surfaces such as fractals, may require ray marching instead of basic ray casting. [48] [28] :13 [16] :14, 17.3

Ray tracing

Spiral Sphere and Julia, Detail, a computer-generated image created by visual artist Robert W. McGregor using only POV-Ray 3.6 and its built-in scene description language

Ray casting can be used to render an image by tracing light rays backwards from a simulated camera. After finding a point on a surface where a ray originated, another ray is traced towards the light source to determine if anything is casting a shadow on that point. If not, a reflectance model (such as Lambertian reflectance for matte surfaces, or the Phong reflection model for glossy surfaces) is used to compute the probability that a photon arriving from the light would be reflected towards the camera, and this is multiplied by the brightness of the light to determine the pixel brightness. If there are multiple light sources, brightness contributions of the lights are added together. For color images, calculations are repeated for multiple wavelengths of light (e.g. red, green, and blue). [16] :11.2.2 [28] :8
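
A sketch of this direct-lighting computation at a single surface point follows; occluded(), normalize(), sub(), and dot() are assumed helper routines, and the light and material fields are illustrative, not taken from the cited sources.

```python
def shade_point(point, normal, lights, albedo):
    """Lambertian direct lighting with shadow rays; color handled as three (R, G, B) channels."""
    color = [0.0, 0.0, 0.0]
    for light in lights:
        to_light = normalize(sub(light.position, point))
        if occluded(point, to_light):            # the shadow ray hit something before the light
            continue
        lambert = max(0.0, dot(normal, to_light))
        for k in range(3):                       # repeat the calculation per wavelength/channel
            color[k] += albedo[k] * lambert * light.intensity[k]
    return color
```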

Classical ray tracing (also called Whitted-style or recursive ray tracing) extends this method so it can render mirrors and transparent objects. If a ray traced backwards from the camera originates at a point on a mirror, the reflection formula from geometric optics is used to calculate the direction the reflected ray came from, and another ray is cast backwards in that direction. If a ray originates at a transparent surface, rays are cast backwards for both reflected and refracted rays (using Snell's law to compute the refracted direction), and so ray tracing needs to support a branching "tree" of rays. In simple implementations, a recursive function is called to trace each ray. [16] :11.2.2 [28] :9
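
The recursive structure can be sketched as below; intersect(), reflect(), refract(), add(), scale(), and the shade_point() routine from the previous sketch are placeholders, and the material fields and depth limit are illustrative assumptions.

```python
MAX_DEPTH = 5   # stop the recursion after a fixed number of bounces

def trace(origin, direction, depth=0):
    hit = intersect(origin, direction)           # nearest surface along the ray, or None
    if hit is None or depth >= MAX_DEPTH:
        return BACKGROUND_COLOR
    color = shade_point(hit.point, hit.normal, LIGHTS, hit.material.albedo)
    if hit.material.reflective:
        r = reflect(direction, hit.normal)       # mirror direction from geometric optics
        color = add(color, scale(trace(hit.point, r, depth + 1), hit.material.reflectivity))
    if hit.material.transparent:
        t = refract(direction, hit.normal, hit.material.ior)   # Snell's law
        if t is not None:                        # None indicates total internal reflection
            color = add(color, scale(trace(hit.point, t, depth + 1), hit.material.transmission))
    return color
```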

Ray tracing usually performs anti-aliasing by taking the average of multiple samples for each pixel. It may also use multiple samples for effects like depth of field and motion blur. If evenly-spaced ray directions or times are used for each of these features, many rays are required, and some aliasing will remain. Cook-style, stochastic, or Monte Carlo ray tracing avoids this problem by using random sampling instead of evenly-spaced samples. This type of ray tracing is commonly called distributed ray tracing, or distribution ray tracing because it samples rays from probability distributions. Distribution ray tracing can also render realistic "soft" shadows from large lights by using a random sample of points on the light when testing for shadowing, and it can simulate chromatic aberration by sampling multiple wavelengths from the spectrum of light. [28] :10 [32] :25
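
For example, anti-aliasing by random (jittered) sampling of each pixel can be sketched as follows, reusing the hypothetical camera_ray() helper and the trace() routine from the earlier sketch.

```python
import random

def render_pixel(x, y, samples=16):
    """Average several rays jittered randomly within the pixel instead of one ray through its centre."""
    total = [0.0, 0.0, 0.0]
    for _ in range(samples):
        u = x + random.random()                  # random sub-pixel position
        v = y + random.random()
        origin, direction = camera_ray(u, v)
        c = trace(origin, direction)
        total = [t + ci for t, ci in zip(total, c)]
    return [t / samples for t in total]
```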

Real surface materials reflect small amounts of light in almost every direction because they have small (or microscopic) bumps and grooves. A distribution ray tracer can simulate this by sampling possible ray directions, which allows rendering blurry reflections from glossy and metallic surfaces. However if this procedure is repeated recursively to simulate realistic indirect lighting, and if more than one sample is taken at each surface point, the tree of rays quickly becomes huge. Another kind of ray tracing, called path tracing, handles indirect light more efficiently, avoiding branching, and ensures that the distribution of all possible paths from a light source to the camera is sampled in an unbiased way. [32] :25–27 [31]

Ray tracing was often used for rendering reflections in animated films, until path tracing became standard for film rendering. Films such as Shrek 2 and Monsters University also used distribution ray tracing or path tracing to precompute indirect illumination for a scene or frame prior to rendering it using rasterization. [49] :118–121

Advances in GPU technology have made real-time ray tracing possible in games, although it is currently almost always used in combination with rasterization. [29] :2 This enables visual effects that are difficult with only rasterization, including reflection from curved surfaces and interreflective objects, [50] :305 and shadows that are accurate over a wide range of distances and surface orientations. [51] :159-160 Ray tracing support is included in recent versions of the graphics APIs used by games, such as DirectX, Metal, and Vulkan. [52]

Ray tracing has been used to render simulated black holes, and the appearance of objects moving at close to the speed of light, by taking spacetime curvature and relativistic effects into account during light ray simulation. [53] [54]

Radiosity

Classical radiosity demonstration. Surfaces are divided into 16x16 or 16x32 meshes. Top: direct light only. Bottom: radiosity solution (for albedo 0.85).
Top: the same scene with a finer radiosity mesh, smoothing the patches during final rendering using bilinear interpolation. Bottom: the scene rendered with path tracing (using the PBRT renderer).

Radiosity (named after the radiometric quantity of the same name) is a method for rendering objects illuminated by light bouncing off rough or matte surfaces. This type of illumination is called indirect light, environment lighting, or diffuse lighting, and the problem of rendering it realistically is called global illumination. Rasterization and basic forms of ray tracing (other than distribution ray tracing and path tracing) can only roughly approximate indirect light, e.g. by adding a uniform "ambient" lighting amount chosen by the artist. Radiosity techniques are also suited to rendering scenes with area lights such as rectangular fluorescent lighting panels, which are difficult for rasterization and traditional ray tracing. Radiosity is considered a physically-based method, meaning that it aims to simulate the flow of light in an environment using equations and experimental data from physics; however, it often assumes that all surfaces are opaque and perfectly Lambertian, which reduces realism and limits its applicability. [16] :10, 11.2.1 [6] :888, 893 [55]

In the original radiosity method (first proposed in 1984) now called classical radiosity, surfaces and lights in the scene are split into pieces called patches, a process called meshing (this step makes it a finite element method). The rendering code must then determine what fraction of the light being emitted or diffusely reflected (scattered) by each patch is received by each other patch. These fractions are called form factors or view factors (first used in engineering to model radiative heat transfer). The form factors are multiplied by the albedo of the receiving surface and put in a matrix. The lighting in the scene can then be expressed as a matrix equation (or equivalently a system of linear equations) that can be solved by methods from linear algebra. [55] [56] :46 [6] :888, 896
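
In common textbook notation (a reconstruction for illustration; symbols vary between sources), with B_i the radiosity of patch i, E_i its emission, ρ_i its albedo, and F_ij the form factors (ρ interpreted as a diagonal matrix of patch albedos in the matrix form), the system is:

```latex
B_i = E_i + \rho_i \sum_{j} F_{ij} B_j
\qquad\text{or, in matrix form,}\qquad
(I - \rho F)\, B = E
```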

Solving the radiosity equation gives the total amount of light emitted and reflected by each patch, which is divided by area to get a value called radiosity that can be used when rasterizing or ray tracing to determine the color of pixels corresponding to visible parts of the patch. For real-time rendering, this value (or more commonly the irradiance, which does not depend on local surface albedo) can be pre-computed and stored in a texture (called an irradiance map) or stored as vertex data for 3D models. This feature was used in architectural visualization software to allow real-time walk-throughs of a building interior after computing the lighting. [6] :890 [16] :11.5.1 [57] :332

The large size of the matrices used in classical radiosity (the square of the number of patches) causes problems for realistic scenes. Practical implementations may use Jacobi or Gauss-Seidel iterations, which is equivalent (at least in the Jacobi case) to simulating the propagation of light one bounce at a time until the amount of light remaining (not yet absorbed by surfaces) is insignificant. Each iteration (bounce) requires work proportional to the square of the number of patches, and the number of iterations required depends on the scene rather than the number of patches, so the total work is also proportional to the square of the number of patches (in contrast, solving the matrix equation using Gaussian elimination requires work proportional to the cube of the number of patches). Form factors may be recomputed when they are needed, to avoid storing a complete matrix in memory. [6] :901,907
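
A bounce-at-a-time (Jacobi-style) solver can be sketched as follows; the form-factor matrix F, albedo vector rho, and emission vector E are assumed inputs for illustration.

```python
import numpy as np

def solve_radiosity(F, rho, E, max_bounces=50, tol=1e-6):
    """Iterate B <- E + rho * (F @ B): each pass propagates light one more bounce."""
    rho = np.asarray(rho, dtype=float)
    E = np.asarray(E, dtype=float)
    B = E.copy()                                 # start with emitted light only
    for _ in range(max_bounces):
        new_B = E + rho * (F @ B)                # light gathered by each patch this bounce
        if np.max(np.abs(new_B - B)) < tol:      # remaining unpropagated light is insignificant
            return new_B
        B = new_B
    return B
```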

The quality of rendering is often determined by the size of the patches, e.g. very fine meshes are needed to depict the edges of shadows accurately. An important improvement is hierarchical radiosity, which uses a coarser mesh (larger patches) for simulating the transfer of light between surfaces that are far away from one another, and adaptively sub-divides the patches as needed. This allows radiosity to be used for much larger and more complex scenes. [6] :975,939

Alternative and extended versions of the radiosity method support non-Lambertian surfaces, such as glossy surfaces and mirrors, and sometimes use volumes or "clusters" of objects as well as surface patches. Stochastic or Monte Carlo radiosity uses random sampling in various ways, e.g. taking samples of incident light instead of integrating over all patches, which can improve performance but adds noise (this noise can be reduced by using deterministic iterations as a final step, unlike path tracing noise). Simplified and partially precomputed versions of radiosity are widely used for real-time rendering, combined with techniques such as octree radiosity that store approximations of the light field. [6] :979,982 [56] :49 [58] [16] :11.5

Path tracing

As part of the approach known as physically based rendering, path tracing has become the dominant technique for rendering realistic scenes, including effects for movies. [59] For example, the popular open source 3D software Blender uses path tracing in its Cycles renderer. [60] Images produced using path tracing for global illumination are generally noisier than when using radiosity (the main competing algorithm for realistic lighting), but radiosity can be difficult to apply to complex scenes and is prone to artifacts that arise from using a tessellated representation of irradiance. [59] [6] :975-976, 1045

Like distributed ray tracing, path tracing is a kind of stochastic or randomized ray tracing that uses Monte Carlo or Quasi-Monte Carlo integration. It was proposed and named in 1986 by Jim Kajiya in the same paper as the rendering equation. Kajiya observed that much of the complexity of distributed ray tracing could be avoided by only tracing a single path from the camera at a time (in Kajiya's implementation, this "no branching" rule was broken by tracing additional rays from each surface intersection point to randomly chosen points on each light source). Kajiya suggested reducing the noise present in the output images by using stratified sampling and importance sampling for making random decisions such as choosing which ray to follow at each step of a path. Even with these techniques, path tracing would not have been practical for film rendering, using computers available at the time, because the computational cost of generating enough samples to reduce variance to an acceptable level was too high. Monster House, the first feature film rendered entirely using path tracing, was not released until 20 years later. [31] [59] [61]
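
A basic unidirectional path tracer in this spirit can be sketched as follows; intersect(), sample_hemisphere(), camera_ray(), and the material fields are hypothetical helpers, and radiance is treated as a single scalar for brevity.

```python
import random

def radiance_along_path(origin, direction, max_bounces=5):
    """Follow a single path, choosing one random bounce direction at each surface."""
    throughput, result = 1.0, 0.0
    for _ in range(max_bounces):
        hit = intersect(origin, direction)
        if hit is None:
            break
        result += throughput * hit.material.emission          # light emitted towards the path
        direction, pdf, brdf, cos_theta = sample_hemisphere(hit.normal, hit.material)
        throughput *= brdf * cos_theta / pdf                   # Monte Carlo weight for this bounce
        origin = hit.point
    return result

def render_pixel_pt(x, y, samples=256):
    """Average many independent paths per pixel."""
    total = 0.0
    for _ in range(samples):
        origin, direction = camera_ray(x + random.random(), y + random.random())
        total += radiance_along_path(origin, direction)
    return total / samples
```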

In its basic form, path tracing is inefficient (requiring too many samples) for rendering caustics and scenes where light enters indirectly through narrow spaces. Attempts were made to address these weaknesses in the 1990s. Bidirectional path tracing has similarities to photon mapping, tracing rays from the light source and the camera separately, and then finding ways to connect these paths (but unlike photon mapping it usually samples new light paths for each pixel rather than using the same cached data for all pixels). Metropolis light transport samples paths by modifying paths that were previously traced, spending more time exploring paths that are similar to other "bright" paths, which increases the chance of discovering even brighter paths. Multiple importance sampling provides a way to reduce variance when combining samples from more than one sampling method, particularly when some samples are much noisier than the others. [59] [34]
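
For illustration, the widely used balance heuristic weights a sample x drawn from sampling technique i (where n_k samples are taken with technique k, whose probability density is p_k) as:

```latex
w_i(x) = \frac{n_i\, p_i(x)}{\sum_k n_k\, p_k(x)}
```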

This later work was summarized and expanded upon in Eric Veach's 1997 PhD thesis, which helped raise interest in path tracing in the computer graphics community. The Arnold renderer, first released in 1998, proved that path tracing was practical for rendering frames for films, and that there was a demand for unbiased and physically based rendering in the film industry; other commercial and open source path tracing renderers began appearing. Computational cost was addressed by rapid advances in CPU and cluster performance. [59]

Path tracing's relative simplicity and its nature as a Monte Carlo method (sampling hundreds or thousands of paths per pixel) have made it attractive to implement on a GPU, especially on recent GPUs that support ray tracing acceleration technology such as Nvidia's RTX and OptiX. [62] However bidirectional path tracing and Metropolis light transport are more difficult to implement efficiently on a GPU. [63] [64]

Research into improving path tracing continues. Recent path guiding approaches construct approximations of the light field probability distribution in each volume of space, so paths can be sampled more effectively. [65] Many techniques have been developed to denoise the output of path tracing, reducing the number of paths required to achieve acceptable quality, at the risk of losing some detail or introducing small-scale artifacts that are more objectionable than noise; [66] [67] neural networks are now widely used for this purpose. [68] [69] [70]

Neural rendering

Neural rendering is a rendering method using artificial neural networks. [71] [72] Neural rendering includes image-based rendering methods that are used to reconstruct 3D models from 2-dimensional images. [71] One of these methods is photogrammetry, in which a collection of images of an object, taken from multiple angles, is turned into a 3D model. There have also been recent developments in generating and rendering 3D models from text and coarse paintings, notably by Nvidia, Google, and various other companies.

Scientific and mathematical basis

The implementation of a realistic renderer always has some basic element of physical simulation or emulation: some computation which resembles or abstracts a real physical process.

The term " physically based " indicates the use of physical models and approximations that are more general and widely accepted outside rendering. A particular set of related techniques have gradually become established in the rendering community.

The basic concepts are moderately straightforward, but intractable to calculate; a single elegant algorithm or approach has remained elusive for more general purpose renderers. In order to meet demands of robustness, accuracy, and practicality, an implementation will be a complex combination of different techniques.

Rendering research is concerned with both the adaptation of scientific models and their efficient application.

Mathematics used in rendering includes: linear algebra, calculus, numerical mathematics, signal processing, and Monte Carlo methods.

The rendering equation

This is the key academic/theoretical concept in rendering. It serves as the most abstract formal expression of the non-perceptual aspect of rendering. All more complete algorithms can be seen as solutions to particular formulations of this equation.
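
In its common form (reconstructed here in standard notation; the exact symbols vary between sources), the rendering equation is:

```latex
L_o(x, \omega_o) = L_e(x, \omega_o)
  + \int_{\Omega} f_r(x, \omega_i, \omega_o)\, L_i(x, \omega_i)\, (\omega_i \cdot n)\, \mathrm{d}\omega_i
```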

Meaning: at a particular position and direction, the outgoing light (Lo) is the sum of the emitted light (Le) and the reflected light. The reflected light is the sum of the incoming light (Li) from all directions, multiplied by the surface reflection and the incoming angle. By connecting outward light to inward light via an interaction point, this equation stands for the whole 'light transport' (all the movement of light) in a scene.

The bidirectional reflectance distribution function

The bidirectional reflectance distribution function (BRDF) expresses a simple model of light interaction with a surface as follows:
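
In a standard textbook form (reconstructed here for illustration), the BRDF is the ratio of reflected radiance to incident irradiance:

```latex
f_r(x, \omega_i, \omega_o) = \frac{\mathrm{d}L_o(x, \omega_o)}{L_i(x, \omega_i)\, (\omega_i \cdot n)\, \mathrm{d}\omega_i}
```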

Light interaction is often approximated by the even simpler models: diffuse reflection and specular reflection, although both can also be BRDFs.

Geometric optics

Rendering is practically exclusively concerned with the particle aspect of light physics, known as geometrical optics. Treating light, at its basic level, as particles bouncing around is a simplification, but appropriate: the wave aspects of light are negligible in most scenes, and are significantly more difficult to simulate. Notable wave aspect phenomena include diffraction (as seen in the colours of CDs and DVDs) and polarisation (as seen in LCDs). Both types of effect, if needed, are simulated by an appearance-oriented adjustment of the reflection model.

Visual perception

Though it receives less attention, an understanding of human visual perception is valuable to rendering. This is mainly because image displays and human perception have restricted ranges. A renderer can simulate a wide range of light brightness and color, but current displays (movie screen, computer monitor, etc.) cannot handle so much, and something must be discarded or compressed. Human perception also has limits, and so does not need to be given large-range images to create realism. This can help solve the problem of fitting images into displays, and, furthermore, suggest what short-cuts could be used in the rendering simulation, since certain subtleties will not be noticeable. This related subject is tone mapping.

Sampling and filtering

One problem that any rendering system must deal with, no matter which approach it takes, is the sampling problem. Essentially, the rendering process tries to depict a continuous function from image space to colors by using a finite number of pixels. As a consequence of the Nyquist–Shannon sampling theorem (or Kotelnikov theorem), the highest spatial frequency that can be displayed is half the sampling frequency, which is proportional to image resolution; any displayable waveform must span at least two pixels. In simpler terms, this expresses the idea that an image cannot display details, peaks or troughs in color or intensity, that are smaller than one pixel.

If a naive rendering algorithm is used without any filtering, high frequencies in the image function will cause ugly aliasing to be present in the final image. Aliasing typically manifests itself as jaggies, or jagged edges on objects where the pixel grid is visible. In order to remove aliasing, all rendering algorithms (if they are to produce good-looking images) must use some kind of low-pass filter on the image function to remove high frequencies, a process called antialiasing.

Hardware

Rendering is usually limited by available computing power and memory bandwidth, and so specialized hardware has been developed to speed it up ("accelerate" it), particularly for real-time rendering. Hardware features such as a framebuffer for raster graphics are required to display the output of rendering smoothly in real time.

History

In the era of vector monitors (also called calligraphic displays), a display processing unit (DPU) was a dedicated CPU or coprocessor that maintained a list of visual elements and redrew them continuously on the screen by controlling an electron beam. Advanced DPUs such as Evans & Sutherland's Line Drawing System-1 (and later models produced into the 1980s) incorporated 3D coordinate transformation features to accelerate rendering of wire-frame images. [27] :93–94,404–421 [73] Evans & Sutherland also made the Digistar planetarium projection system, which was a vector display that could render both stars and wire-frame graphics (the vector-based Digistar and Digistar II were used in many planetariums, and a few may still be in operation). [74] [75] [76] A Digistar prototype was used for rendering 3D star fields for the film Star Trek II: The Wrath of Khan, some of the first 3D computer graphics sequences ever seen in a feature film. [77]

Shaded 3D graphics rendering in the 1970s and early 1980s was usually implemented on general-purpose computers, such as the PDP-10 used by researchers at the University of Utah. [78] [42] It was difficult to speed up using specialized hardware because it involves a pipeline of complex steps, requiring data addressing, decision-making, and computation capabilities typically only provided by CPUs (although dedicated circuits for speeding up particular operations were proposed [78] ). Supercomputers or specially designed multi-CPU computers or clusters were sometimes used for ray tracing. [45] In 1981, James H. Clark and Marc Hannah designed the Geometry Engine, a VLSI chip for performing some of the steps of the 3D rasterization pipeline, and started the company Silicon Graphics (SGI) to commercialize this technology. [79] [80]

Home computers and game consoles in the 1980s contained graphics coprocessors that were capable of scrolling and filling areas of the display, and drawing sprites and lines, though they were not useful for rendering realistic images. [81] [82] Towards the end of the 1980s PC graphics cards and arcade games with 3D rendering acceleration began to appear, and in the 1990s such technology became commonplace. Today, even low-power mobile processors typically incorporate 3D graphics acceleration features. [79] [83]

GPUs

The 3D graphics accelerators of the 1990s evolved into modern GPUs. GPUs are general-purpose processors, like CPUs, but they are designed for tasks that can be broken into many small, similar, mostly independent sub-tasks (such as rendering individual pixels) and performed in parallel. This means that a GPU can speed up any rendering algorithm that can be split into subtasks in this way, in contrast to 1990s 3D accelerators which were only designed to speed up specific rasterization algorithms and simple shading and lighting effects (although tricks could be used to perform more general computations). [16] :ch3 [84]

Due to their origins, GPUs typically still provide specialized hardware acceleration for some steps of a traditional 3D rasterization pipeline, including hidden surface removal using a z-buffer, and texture mapping with mipmaps, but these features are no longer always used. [16] :ch3 Recent GPUs have features to accelerate finding the intersections of rays with a bounding volume hierarchy, to help speed up all variants of ray tracing and path tracing, [47] as well as neural network acceleration features sometimes useful for rendering. [85]

GPUs are usually integrated with high-bandwidth memory systems to support the read and write bandwidth requirements of high-resolution, real-time rendering, particularly when multiple passes are required to render a frame; however, memory latency may be higher than on a CPU, which can be a problem if the critical path in an algorithm involves many memory accesses. GPU design accepts high latency as inevitable (in part because a large number of threads are sharing the memory bus) and attempts to "hide" it by efficiently switching between threads, so a different thread can be performing computations while the first thread is waiting for a read or write to complete. [16] :ch3 [86] [87]

Rendering algorithms will run efficiently on a GPU only if they can be implemented using small groups of threads that perform mostly the same operations. As an example of code that meets this requirement: when rendering a small square of pixels in a simple ray-traced image, all threads will likely be intersecting rays with the same object and performing the same lighting computations. For performance and architectural reasons, GPUs run groups of around 16-64 threads called warps or wavefronts in lock-step (all threads in the group are executing the same instructions at the same time). If not all threads in the group need to run particular blocks of code (due to conditions) then some threads will be idle, or the results of their computations will be discarded, causing degraded performance. [16] :ch3 [87]

Chronology of algorithms and techniques

The following is a rough timeline of frequently mentioned rendering techniques, including areas of current research. Note that even in cases where an idea was named in a specific paper, there were almost always multiple researchers or teams working in the same area (including earlier related work). When a method is first proposed it is often very inefficient, and it takes additional research and practical efforts to turn it into a useful technique. [6] :887

The list focuses on academic research and does not include hardware. (For more history, see the external links, as well as the history of computer graphics and the technology of the golden age of arcade video games.)

See also

Related Research Articles

<span class="mw-page-title-main">Global illumination</span> Group of rendering algorithms used in 3D computer graphics

Global illumination (GI), or indirect illumination, is a group of algorithms used in 3D computer graphics that are meant to add more realistic lighting to 3D scenes. Such algorithms take into account not only the light that comes directly from a light source, but also subsequent cases in which light rays from the same source are reflected by other surfaces in the scene, whether reflective or not.

<span class="mw-page-title-main">Radiosity (computer graphics)</span> Computer graphics rendering method using diffuse reflection

In 3D computer graphics, radiosity is an application of the finite element method to solving the rendering equation for scenes with surfaces that reflect light diffusely. Unlike rendering methods that use Monte Carlo algorithms, which handle all types of light paths, typical radiosity only account for paths which leave a light source and are reflected diffusely some number of times before hitting the eye. Radiosity is a global illumination algorithm in the sense that the illumination arriving on a surface comes not just directly from the light sources, but also from other surfaces reflecting light. Radiosity is viewpoint independent, which increases the calculations involved, but makes them useful for all viewpoints.

<span class="mw-page-title-main">Ray tracing (graphics)</span> Rendering method

In 3D computer graphics, ray tracing is a technique for modeling light transport for use in a wide variety of rendering algorithms for generating digital images.

<span class="mw-page-title-main">Scanline rendering</span> 3D computer graphics image rendering method

Scanline rendering is an algorithm for visible surface determination, in 3D computer graphics, that works on a row-by-row basis rather than a polygon-by-polygon or pixel-by-pixel basis. All of the polygons to be rendered are first sorted by the top y coordinate at which they first appear, then each row or scan line of the image is computed using the intersection of a scanline with the polygons on the front of the sorted list, while the sorted list is updated to discard no-longer-visible polygons as the active scan line is advanced down the picture.

<span class="mw-page-title-main">Rasterisation</span> Conversion of a vector-graphics image to a raster image

In computer graphics, rasterisation or rasterization is the task of taking an image described in a vector graphics format (shapes) and converting it into a raster image. The rasterized image may then be displayed on a computer display, video display or printer, or stored in a bitmap file format. Rasterization may refer to the technique of drawing 3D models, or to the conversion of 2D rendering primitives, such as polygons and line segments, into a rasterized format.

In computer graphics, photon mapping is a two-pass global illumination rendering algorithm developed by Henrik Wann Jensen between 1995 and 2001 that approximately solves the rendering equation for integrating light radiance at a given point in space. Rays from the light source and rays from the camera are traced independently until some termination criterion is met, then they are connected in a second step to produce a radiance value. The algorithm is used to realistically simulate the interaction of light with different types of objects. Specifically, it is capable of simulating the refraction of light through a transparent substance such as glass or water, diffuse interreflection between illuminated objects, the subsurface scattering of light in translucent materials, and some of the effects caused by particulate matter such as smoke or water vapor. Photon mapping can also be extended to more accurate simulations of light, such as spectral rendering. Progressive photon mapping (PPM) starts with ray tracing and then adds more and more photon mapping passes to provide a progressively more accurate render.

<span class="mw-page-title-main">Ray casting</span> Methodological basis for 3D CAD/CAM solid modeling and image rendering

Ray casting is the methodological basis for 3D CAD/CAM solid modeling and image rendering. It is essentially the same as ray tracing for computer graphics where virtual light rays are "cast" or "traced" on their path from the focal point of a camera through each pixel in the camera sensor to determine what is visible along the ray in the 3D scene. The term "Ray Casting" was introduced by Scott Roth while at the General Motors Research Labs from 1978–1980. His paper, "Ray Casting for Modeling Solids", describes modeled solid objects by combining primitive solids, such as blocks and cylinders, using the set operators union (+), intersection (&), and difference (-). The general idea of using these binary operators for solid modeling is largely due to Voelcker and Requicha's geometric modelling group at the University of Rochester. See solid modeling for a broad overview of solid modeling methods. This figure on the right shows a U-Joint modeled from cylinders and blocks in a binary tree using Roth's ray casting system in 1979.

<span class="mw-page-title-main">Hidden-surface determination</span> Visibility in 3D computer graphics

In 3D computer graphics, hidden-surface determination is the process of identifying what surfaces and parts of surfaces can be seen from a particular viewing angle. A hidden-surface determination algorithm is a solution to the visibility problem, which was one of the first major problems in the field of 3D computer graphics. The process of hidden-surface determination is sometimes called hiding, and such an algorithm is sometimes called a hider. When referring to line rendering it is known as hidden-line removal. Hidden-surface determination is necessary to render a scene correctly, so that one may not view features hidden behind the model itself, allowing only the naturally viewable portion of the graphic to be visible.

<span class="mw-page-title-main">Volume rendering</span> Representing a 3D-modeled object or dataset as a 2D projection

In scientific visualization and computer graphics, volume rendering is a set of techniques used to display a 2D projection of a 3D discretely sampled data set, typically a 3D scalar field.

<span class="mw-page-title-main">Ambient occlusion</span> Computer graphics shading and rendering technique

In 3D computer graphics, modeling, and animation, ambient occlusion is a shading and rendering technique used to calculate how exposed each point in a scene is to ambient lighting. For example, the interior of a tube is typically more occluded than the exposed outer surfaces, and becomes darker the deeper inside the tube one goes.

<span class="mw-page-title-main">Real-time computer graphics</span> Sub-field of computer graphics

Real-time computer graphics or real-time rendering is the sub-field of computer graphics focused on producing and analyzing images in real time. The term can refer to anything from rendering an application's graphical user interface (GUI) to real-time image analysis, but is most often used in reference to interactive 3D computer graphics, typically using a graphics processing unit (GPU). One example of this concept is a video game that rapidly renders changing 3D environments to produce an illusion of motion.

Beam tracing is an algorithm to simulate wave propagation. It was developed in the context of computer graphics to render 3D scenes, but it has also been used in related areas such as acoustics and electromagnetism simulations.

Path tracing

Path tracing is a computer graphics Monte Carlo method of rendering images of three-dimensional scenes such that the global illumination is faithful to reality. Fundamentally, the algorithm is integrating over all the illuminance arriving to a single point on the surface of an object. This illuminance is then reduced by a surface reflectance function (BRDF) to determine how much of it will go towards the viewpoint camera. This integration procedure is repeated for every pixel in the output image. When combined with physically accurate models of surfaces, accurate models of real light sources, and optically correct cameras, path tracing can produce still images that are indistinguishable from photographs.
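
In the standard notation of the rendering equation, which the paragraph above paraphrases (x a surface point, ω_o the outgoing direction, n the surface normal, L_e emitted radiance, L_i incoming radiance, f_r the BRDF, and p the sampling density), the quantity being estimated and its N-sample Monte Carlo estimate are approximately:

    L_o(x, \omega_o) = L_e(x, \omega_o) + \int_{\Omega} f_r(x, \omega_i, \omega_o)\, L_i(x, \omega_i)\, (\omega_i \cdot n)\, \mathrm{d}\omega_i
                     \approx L_e(x, \omega_o) + \frac{1}{N} \sum_{k=1}^{N} \frac{f_r(x, \omega_k, \omega_o)\, L_i(x, \omega_k)\, (\omega_k \cdot n)}{p(\omega_k)}

Path tracing evaluates L_i recursively by tracing a sampled direction onward, so each camera ray becomes a random walk (a "path") through the scene.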

In computer graphics, per-pixel lighting refers to any technique for lighting an image or scene that calculates illumination for each pixel on a rendered image. This is in contrast to other popular methods of lighting such as vertex lighting, which calculates illumination at each vertex of a 3D model and then interpolates the resulting values over the model's faces to calculate the final per-pixel color values.
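
The difference is easy to see on a single interpolated edge. In the Python sketch below (the light direction and vertex normals are arbitrary example values), per-vertex lighting interpolates the already-lit intensities, while per-pixel lighting interpolates the normals and evaluates the same Lambertian term at every sample, so it picks up a bright spot in the middle of the edge that vertex lighting misses.

    import math

    LIGHT_DIR = (0.0, 0.0, 1.0)                 # light shining along the +z axis

    def normalize(v):
        n = math.sqrt(sum(c * c for c in v))
        return tuple(c / n for c in v)

    def lambert(normal):
        """Simple diffuse term: cosine between the normal and the light direction."""
        return max(0.0, sum(a * b for a, b in zip(normalize(normal), LIGHT_DIR)))

    def lerp(a, b, t):
        return tuple(x + (y - x) * t for x, y in zip(a, b))

    n0, n1 = (1.0, 0.0, 0.2), (-1.0, 0.0, 0.2)  # vertex normals leaning away from the light

    for t in (0.0, 0.5, 1.0):
        per_vertex = lambert(n0) + (lambert(n1) - lambert(n0)) * t   # interpolate lit values
        per_pixel = lambert(lerp(n0, n1, t))                         # light the interpolated normal
        print(f"t={t:.1f}  vertex-lit={per_vertex:.2f}  pixel-lit={per_pixel:.2f}")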

Volume ray casting, sometimes called volumetric ray casting, volumetric ray tracing, or volume ray marching, is an image-based volume rendering technique. It computes 2D images from 3D volumetric data sets. Volume ray casting, which processes volume data, must not be confused with ray casting in the sense used in ray tracing, which processes surface data. In the volumetric variant, the computation does not stop at the surface but "pushes through" the object, sampling the data along the ray. Unlike ray tracing, volume ray casting does not spawn secondary rays. When the context or application is clear, some authors simply call it ray casting. Because ray marching does not necessarily require an exact solution to ray intersections and collisions, it is suitable for real-time computing in many applications for which ray tracing is unsuitable.
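
The core loop is a straightforward march along the ray with front-to-back compositing. The fragment below is a minimal sketch: the density field is a made-up fuzzy sphere, the transfer function is simply "white emission proportional to density", and absorption follows the usual exponential model.

    import math

    def density(p):
        """Toy scalar field: a fuzzy ball of radius 1 centred on the origin."""
        return max(0.0, 1.0 - math.sqrt(sum(c * c for c in p)))

    def march(origin, direction, step=0.05, max_t=10.0):
        """Front-to-back emission/absorption compositing along one viewing ray."""
        radiance, transmittance, t = 0.0, 1.0, 0.0
        while t < max_t and transmittance > 0.01:       # early exit once nearly opaque
            p = tuple(o + t * d for o, d in zip(origin, direction))
            sigma = density(p)                           # sampled density at this point
            alpha = 1.0 - math.exp(-sigma * step)        # opacity of this small slab
            radiance += transmittance * alpha * 1.0      # emitted "colour" is white (1.0)
            transmittance *= 1.0 - alpha
            t += step
        return radiance, transmittance

    print(march((0.0, 0.0, -3.0), (0.0, 0.0, 1.0)))      # ray straight through the ball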

3D rendering

3D rendering is the 3D computer graphics process of converting 3D models into 2D images on a computer. 3D renders may include photorealistic effects or non-photorealistic styles.

Computer graphics lighting is the collection of techniques used to simulate light in computer graphics scenes. While lighting techniques offer flexibility in the level of detail and functionality available, they also operate at different levels of computational demand and complexity. Graphics artists can choose from a variety of light sources, models, shading techniques, and effects to suit the needs of each application.

Computer graphics

Computer graphics deals with generating images and art with the aid of computers. Computer graphics is a core technology in digital photography, film, video games, digital art, cell phone and computer displays, and many specialized applications. A great deal of specialized hardware and software has been developed, with the displays of most devices being driven by computer graphics hardware. It is a vast and recently developed area of computer science. The phrase was coined in 1960 by computer graphics researchers Verne Hudson and William Fetter of Boeing. It is often abbreviated as CG, or typically in the context of film as computer generated imagery (CGI). The non-artistic aspects of computer graphics are the subject of computer science research.

OptiX

Nvidia OptiX is a ray tracing API that was first developed around 2009. The computations are offloaded to the GPUs through either the low-level or the high-level API introduced with CUDA. CUDA is only available for Nvidia's graphics products. Nvidia OptiX is part of Nvidia GameWorks. OptiX is a high-level, or "to-the-algorithm" API, meaning that it is designed to encapsulate the entire algorithm of which ray tracing is a part, not just the ray tracing itself. This is meant to allow the OptiX engine to execute the larger algorithm with great flexibility without application-side changes.

This is a glossary of terms relating to computer graphics.

References

  1. "Rendering, N., Sense IV.9.a". Oxford English Dictionary. March 2024. doi:10.1093/OED/1142023199.
  2. "Render, V., Sense I.3.b". Oxford English Dictionary. June 2024. doi:10.1093/OED/1095944705.
  3. "Rendering, N., Sense III.5.a". Oxford English Dictionary. March 2024. doi:10.1093/OED/1143106586.
  4. "Render, V., Sense IV.22.a". Oxford English Dictionary. June 2024. doi:10.1093/OED/1039673413.
  5. "What is a Rendering Engine? | Dictionary". Archived from the original on 2024-02-21. Retrieved 2024-02-21.
  6. Glassner, Andrew S. (2011) [1995]. Principles of digital image synthesis (PDF). 1.0.1. Morgan Kaufmann Publishers, Inc. ISBN 978-1-55860-276-2. Archived (PDF) from the original on 2024-01-27. Retrieved 2024-01-27.
  7. Rombach, Robin; Blattmann, Andreas; Lorenz, Dominik; Esser; Ommer, Björn (June 2022). High-Resolution Image Synthesis with Latent Diffusion Models. 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). pp. 10674–10685. doi:10.1109/CVPR52688.2022.01042.
  8. Tewari, A.; Fried, O.; Thies, J.; Sitzmann, V.; Lombardi, S.; Sunkavalli, K.; Martin-Brualla, R.; Simon, T.; Saragih, J.; Nießner, M.; Pandey, R.; Fanello, S.; Wetzstein, G.; Zhu, J.-Y.; Theobalt, C.; Agrawala, M.; Shechtman, E.; Goldman, D.B.; Zollhöfer, M. (May 2020). "State of the Art on Neural Rendering". ACM Transactions on Graphics. 39 (2): 701–727. arXiv: 2004.03805 . doi:10.1111/cgf.14022.
  9. Raghavachary, Saty (2005). Rendering for Beginners. Focal Press. ISBN 0-240-51935-3.
  10. Adobe Systems Incorporated (1990). PostScript Language Reference Manual (2nd ed.). Addison-Wesley Publishing Company. ISBN 0-201-18127-4.
  11. "SVG: Scalable Vector Graphics". Mozilla Corporation. 7 August 2024. Archived from the original on 24 August 2024. Retrieved 31 August 2024.
  12. Hughes, John F.; Van Dam, Andries; McGuire, Morgan; Sklar, David F.; Foley, James D.; Feiner, Steven K.; Akeley, Kurt (2014). Computer Graphics: Principles and Practice (3rd ed.). Addison-Wesley. ISBN 978-0-321-39952-6.
  13. "Blender 4.2 Manual: Importing & Exporting Files". docs.blender.org. The Blender Foundation. Archived from the original on 31 August 2024. Retrieved 31 August 2024.
  14. Pharr, Matt; Jakob, Wenzel; Humphreys, Greg (2023). "pbrt-v4 Input File Format" . Retrieved 31 August 2024.
  15. Dunlop, Renee (2014). Production Pipeline Fundamentals for Film and Games. Focal Press. ISBN 978-1-315-85827-2.
  16. Akenine-Möller, Tomas; Haines, Eric; Hoffman, Naty; Pesce, Angelo; Iwanicki, Michał; Hillaire, Sébastien (2018). Real-Time Rendering (4th ed.). Boca Raton, FL: A K Peters/CRC Press. ISBN 978-1138627000.
  17. "About OpenVDB". www.openvdb.org. Academy Software Foundation. Archived from the original on 3 September 2024. Retrieved 31 August 2024.
  18. Museth, Ken (June 2013). "VDB: High-Resolution Sparse Volumes with Dynamic Topology" (PDF). ACM Transactions on Graphics. 32 (3). doi:10.1145/2487228.2487235. Archived (PDF) from the original on 15 April 2024. Retrieved 31 August 2024.
  19. Bridson, Robert (2015). Fluid Simulation for Computer Graphics (2nd ed.). A K Peters/CRC Press. ISBN   978-1-482-23283-7.
  20. Schmid, Katrin (March 2, 2023). "A short 170 year history of Neural Radiance Fields (NeRF), Holograms, and Light Fields". radiancefields.com. Archived from the original on 31 August 2024. Retrieved 31 August 2024.
  21. Mildenhall, Ben; Srinivasan, Pratul P.; Tancik, Matthew; Barron, Jonathan T.; Ramamoorthi, Ravi; Ng, Ren (2020). "NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis". Retrieved 31 August 2024.
  22. Kerbl, Bernhard; Kopanas, Georgios; Leimkühler, Thomas; Drettakis, George (July 2023). "3D Gaussian Splatting for Real-Time Radiance Field Rendering". ACM Transactions on Graphics. 42 (4): 1–14. arXiv:2308.04079. doi:10.1145/3592433. Archived from the original on 22 August 2024. Retrieved 31 August 2024.
  23. Pharr, Matt; Jakob, Wenzel; Humphreys, Greg (2023). "pbrt-v4 User's Guide". Archived from the original on 3 September 2024. Retrieved 31 August 2024.
  24. Brinkmann, Ron (2008). The Art and Science of Digital Compositing (2nd ed.). Morgan Kaufmann. ISBN 978-0-12-370638-6.
  25. "Blender 4.2 Manual: Rendering: Render Output: Rendering Animations". docs.blender.org. The Blender Foundation. Archived from the original on 31 August 2024. Retrieved 31 August 2024.
  26. Marschner, Steve; Shirley, Peter (2022). Fundamentals of Computer Graphics (5th ed.). CRC Press. ISBN 978-1-003-05033-9.
  27. Foley, James D.; Van Dam, Andries (1982). Fundamentals of Interactive Computer Graphics. Addison-Wesley Publishing Company, Inc. ISBN 0-201-14468-9.
  28. Haines, Eric; Shirley, Peter (February 25, 2019). "1. Ray Tracing Terminology". Ray Tracing Gems: High-Quality and Real-Time Rendering with DXR and Other APIs. Berkeley, CA: Apress. doi:10.1007/978-1-4842-4427-2. ISBN 978-1-4842-4427-2. S2CID 71144394. Archived from the original on January 27, 2024. Retrieved January 27, 2024.
  29. Akenine-Möller, Tomas; Haines, Eric; Hoffman, Naty; Pesce, Angelo; Iwanicki, Michał; Hillaire, Sébastien (August 6, 2018). "Online chapter 26. Real-Time Ray Tracing" (PDF). Real-Time Rendering (4th ed.). Boca Raton, FL: A K Peters/CRC Press. ISBN 978-1138627000. Archived (PDF) from the original on January 27, 2024. Retrieved January 27, 2024.
  30. Cook, Robert L. (April 11, 2019) [1989]. "5. Stochastic Sampling and Distributed Ray Tracing". In Glassner, Andrew S. (ed.). An Introduction to Ray Tracing (PDF). 1.3. ACADEMIC PRESS. ISBN   978-0-12-286160-4. Archived (PDF) from the original on January 27, 2024. Retrieved January 27, 2024.
  31. Kajiya, James T. (August 1986). "The rendering equation". ACM SIGGRAPH Computer Graphics. 20 (4): 143–150. doi:10.1145/15886.15902. Archived from the original on 3 September 2024. Retrieved 27 January 2024.
  32. Glassner, Andrew S. (April 11, 2019) [1989]. "1. An Overview of Ray Tracing". An Introduction to Ray Tracing (PDF). 1.3. ACADEMIC PRESS. ISBN 978-0-12-286160-4. Archived (PDF) from the original on January 27, 2024. Retrieved January 27, 2024.
  33. Arvo, James (August 1986). Backward ray tracing (course notes) (PDF). SIGGRAPH 1986 Developments in Ray Tracing. Vol. 12. CiteSeerX   10.1.1.31.581 . Retrieved 5 October 2024.
  34. Veach, Eric (1997). Robust Monte Carlo methods for light transport simulation (PDF) (PhD thesis). Stanford University.
  35. Dutré, Philip; Bala, Kavita; Bekaert, Philippe (2015). Advanced Global Illumination (2nd ed.). A K Peters/CRC Press. ISBN   978-1-4987-8562-4.
  36. "Unity Manual:Light Probes: Introduction". docs.unity3d.com. Archived from the original on 3 September 2024. Retrieved 27 January 2024.
  37. "Blender Manual: Rendering: EEVEE: Light Probes: Introduction". docs.blender.org. The Blender Foundation. Archived from the original on 24 March 2024. Retrieved 27 January 2024.
  38. Warnock, John (June 1969), A hidden surface algorithm for computer generated halftone pictures, University of Utah, TR 69-249, retrieved 19 September 2024
  39. Bouknight, W. J. (1970). "A procedure for generation of three-dimensional half-tone computer graphics presentations". Communications of the ACM. 13 (9): 527–536. doi:10.1145/362736.362739. S2CID 15941472.
  40. Stamm, Beat (21 June 2018). "The Raster Tragedy at Low-Resolution Revisited: Opportunities and Challenges beyond "Delta-Hinting"". rastertragedy.com. Retrieved 19 September 2024.
  41. Watkins, Gary Scott (June 1970), A Real Time Visible Surface Algorithm, University of Utah, retrieved 19 September 2024
  42. Catmull, Edwin (December 1974). A Subdivision Algorithm for Computer Display of Curved Surfaces (PDF) (PhD thesis). University of Utah. Retrieved 19 September 2024.
  43. Carpenter, Loren (July 1984). "The A-buffer, an antialiased hidden surface method". Computer Graphics. 18 (3): 103–108. doi:10.1145/964965.808585.
  44. Cook, Robert L.; Carpenter, Loren; Catmull, Edwin (July 1987). "The Reyes image rendering architecture" (PDF). ACM SIGGRAPH Computer Graphics. 21 (4). Association for Computing Machinery: 95–102. doi:10.1145/37402.37414. ISSN 0097-8930. Archived (PDF) from the original on 2011-07-15. Retrieved 19 September 2024.
  45. Arvo, James; Kirk, David (April 11, 2019) [1989]. "6. A Survey of Ray Tracing Acceleration Techniques". In Glassner, Andrew S. (ed.). An Introduction to Ray Tracing (PDF). 1.3. ACADEMIC PRESS. ISBN 978-0-12-286160-4. Retrieved 13 September 2024.
  46. Appel, A. (1968). "Some techniques for shading machine renderings of solids" (PDF). Proceedings of the Spring Joint Computer Conference. Vol. 32. pp. 37–49. Archived (PDF) from the original on 2012-03-13. Retrieved 19 September 2024.
  47. Stich, Martin (February 25, 2019). "Foreword". In Haines, Eric; Akenine-Möller, Tomas (eds.). Ray Tracing Gems: High-Quality and Real-Time Rendering with DXR and Other APIs. Berkeley, CA: Apress. doi:10.1007/978-1-4842-4427-2. ISBN 978-1-4842-4427-2. S2CID 71144394. Retrieved 13 September 2024.
  48. Hanrahan, Pat (April 11, 2019) [1989]. "2. A Survey of Ray-Surface Intersection Algorithms". In Glassner, Andrew S. (ed.). An Introduction to Ray Tracing (PDF). 1.3. ACADEMIC PRESS. ISBN   978-0-12-286160-4. Archived (PDF) from the original on January 27, 2024. Retrieved 22 September 2024.
  49. Christensen, Per H.; Jarosz, Wojciech (27 October 2016). "The Path to Path-Traced Movies" (PDF). Foundations and Trends in Computer Graphics and Vision. 10 (2): 103–175. arXiv: 1611.02145 . doi:10.1561/0600000073 . Retrieved 26 October 2024.
  50. Liu, Edward; Llamas, Ignacio; Cañada, Juan; Kelly, Patrick (February 25, 2019). "19: Cinematic Rendering in UE4 with Real-Time Ray Tracing and Denoising". Ray Tracing Gems: High-Quality and Real-Time Rendering with DXR and Other APIs. Berkeley, CA: Apress. doi:10.1007/978-1-4842-4427-2. ISBN   978-1-4842-4427-2. S2CID   71144394. Archived from the original on January 27, 2024. Retrieved January 27, 2024.
  51. Boksansky, Jakub; Wimmer, Michael; Bittner, Jiri (February 25, 2019). "13. Ray Traced Shadows: Maintaining Real-Time Frame Rates". Ray Tracing Gems: High-Quality and Real-Time Rendering with DXR and Other APIs. Berkeley, CA: Apress. doi:10.1007/978-1-4842-4427-2. ISBN   978-1-4842-4427-2. S2CID   71144394. Archived from the original on January 27, 2024. Retrieved January 27, 2024.
  52. "Khronos Blog: Ray Tracing In Vulkan". www.khronos.org. The Khronos® Group Inc. December 15, 2020. Retrieved 27 January 2024.
  53. Riazuelo, Alain (March 2019). "Seeing relativity-I: Ray tracing in a Schwarzschild metric to explore the maximal analytic extension of the metric and making a proper rendering of the stars". International Journal of Modern Physics D. 28 (2). arXiv:1511.06025. Bibcode:2019IJMPD..2850042R. doi:10.1142/S0218271819500421.
  54. Howard, Andrew; Dance, Sandy; Kitchen, Les (24 July 1995), Relativistic ray-tracing: simulating the visual appearance of rapidly moving objects, University of Melbourne, Department of Computer Science, retrieved 26 October 2024
  55. Goral, Cindy M.; Torrance, Kenneth E.; Greenberg, Donald P.; Battaile, Bennett (July 1984). "Modeling the interaction of light between diffuse surfaces" (PDF). Proceedings of the 11th annual conference on Computer graphics and interactive techniques. Vol. 18. Association for Computing Machinery. pp. 213–222. doi:10.1145/800031.808601. ISBN 0-89791-138-5. ISSN 0097-8930. Retrieved 8 October 2024.
  56. Dutré, Philip (29 September 2003), Global Illumination Compendium: The Concise Guide to Global Illumination Algorithms, retrieved 6 October 2024
  57. Cohen, Michael F.; Wallace, John R. (1993). Radiosity and Realistic Image Synthesis. Academic Press. ISBN   0-12-178270-0.
  58. Bekaert, Philippe (1999). Hierarchical and stochastic algorithms for radiosity (Thesis). Department of Computer Science, KU Leuven.
  59. Pharr, Matt; Jakob, Wenzel; Humphreys, Greg (March 28, 2023). "1.6". Physically Based Rendering: From Theory to Implementation (4th ed.). Cambridge, Massachusetts: The MIT Press. ISBN 978-0262048026. Archived from the original on January 27, 2024. Retrieved January 27, 2024.
  60. "Blender Manual: Rendering: Cycles: Introduction". docs.blender.org. The Blender Foundation. Archived from the original on 3 September 2024. Retrieved 27 January 2024.
  61. Kulla, Christopher (30 July 2017), Arnold at Sony Pictures Imageworks: From Monster House to Smurfs: The Lost Village (course slides) (PDF), SIGGRAPH, Los Angeles
  62. Pharr, Matt; Jakob, Wenzel; Humphreys, Greg (March 28, 2023). "15. Wavefront Rendering on GPUs". Physically Based Rendering: From Theory to Implementation (4th ed.). Cambridge, Massachusetts: The MIT Press. ISBN   978-0262048026. Archived from the original on January 27, 2024. Retrieved January 27, 2024.
  63. Otte, Vilém (2015). Bi-directional Path Tracing on GPU (PDF) (Master thesis). Masaryk University, Brno.
  64. Schmidt, Martin; Lobachev, Oleg; Guthe, Michael (2016). "Coherent Metropolis Light Transport on the GPU using Speculative Mutations" (PDF). Journal of WSCG. 24 (1): 1–8. ISSN   1213-6972.
  65. Pharr, Matt; Jakob, Wenzel; Humphreys, Greg (March 28, 2023). "13. Further Reading: Path Guiding". Physically Based Rendering: From Theory to Implementation (4th ed.). Cambridge, Massachusetts: The MIT Press. ISBN   978-0262048026 . Retrieved 8 September 2024.
  66. Pharr, Matt; Jakob, Wenzel; Humphreys, Greg (March 28, 2023). "5. Further Reading: Denoising". Physically Based Rendering: From Theory to Implementation (4th ed.). Cambridge, Massachusetts: The MIT Press. ISBN   978-0262048026. Archived from the original on January 27, 2024. Retrieved January 27, 2024.
  67. "Blender Manual: Rendering: Cycles: Optimizing Renders: Reducing Noise". docs.blender.org. The Blender Foundation. Archived from the original on 27 January 2024. Retrieved 27 January 2024.
  68. "Blender Manual: Rendering: Cycles: Render Settings: Sampling". docs.blender.org. The Blender Foundation. Archived from the original on 27 January 2024. Retrieved 27 January 2024.
  69. "Intel® Open Image Denoise: High-Performance Denoising Library for Ray Tracing". www.openimagedenoise.org. Intel Corporation. Archived from the original on 6 January 2024. Retrieved 27 January 2024.
  70. "NVIDIA OptiX™ AI-Accelerated Denoiser". developer.nvidia.com. NVIDIA Corporation. Archived from the original on 18 January 2024. Retrieved 27 January 2024.
  71. Tewari, A.; Fried, O.; Thies, J.; Sitzmann, V.; Lombardi, S.; Sunkavalli, K.; Martin-Brualla, R.; Simon, T.; Saragih, J.; Nießner, M.; Pandey, R.; Fanello, S.; Wetzstein, G.; Zhu, J.-Y.; Theobalt, C.; Agrawala, M.; Shechtman, E.; Goldman, D. B.; Zollhöfer, M. (2020). "State of the Art on Neural Rendering". Computer Graphics Forum. 39 (2): 701–727. arXiv:2004.03805. doi:10.1111/cgf.14022. S2CID 215416317.
  72. Knight, Will. "A New Trick Lets Artificial Intelligence See in 3D". Wired. ISSN   1059-1028. Archived from the original on 2022-02-07. Retrieved 2022-02-08.
  73. Evans & Sutherland Multi-Picture System (brochure), Evans & Sutherland Corporation, 1979
  74. "Nagoya City Science Museum - Exhibition Guide - Digistar II". www.ncsm.city.nagoya.jp. Nagoya City Science Museum. Retrieved 13 September 2024.
  75. "Evans_and_Sutherland Digistar-II". planetariums-database.org. Worldwide Planetariums Database. Retrieved 13 September 2024.
  76. "Listing of Planetariums using a Evans_and_Sutherland Digistar-II". planetariums-database.org. Worldwide Planetariums Database. Retrieved 13 September 2024.
  77. Smith, Alvy Ray (October 1982). "Special Effects for Star Trek II: The Genesis Demo" (PDF). American Cinematographer: 1038. Retrieved 13 September 2024.
  78. Bùi, Tường-Phong (1973). Illumination for Computer-Generated Images (PDF) (PhD thesis). University of Utah.
  79. Peddie, Jon (24 September 2020). "Famous Graphics Chips: Geometry Engine". www.computer.org. Institute of Electrical and Electronics Engineers (IEEE). Retrieved 13 September 2024.
  80. Clark, James H. (1980). "Structuring a VLSI System Architecture" (PDF). Lambda (2nd Quarter): 25–30.
  81. Fox, Charles (2024). "11. RETRO ARCHITECTURES: 16-Bit Computer Design with the Commodore Amiga: Understanding the Architecture". Computer Architecture. No Starch Press. ISBN   978-1-7185-0287-1.
  82. "NES Dev Wiki: PPU". www.nesdev.org. nesdev wiki. Retrieved 13 September 2024.
  83. Harold, David (11 August 2017). "PowerVR at 25: The story of a graphics revolution". blog.imaginationtech.com. Imagination Technologies Limited. Retrieved 13 September 2024.
  84. Peercy, Mark S.; Olano, Marc; Airey, John; Ungar, P. Jeffrey (2000). "Interactive multi-pass programmable shading" (PDF). Proceedings of the 27th annual conference on Computer graphics and interactive techniques - SIGGRAPH '00. p. 425-432. doi:10.1145/344779.344976. ISBN   1-58113-208-5 . Retrieved 13 September 2024.
  85. "NVIDIA DLSS 3". nvidia.com. NVIDIA Corporation. Retrieved 13 September 2024.
  86. Lam, Chester (16 April 2021). "Measuring GPU Memory Latency". chipsandcheese.com. Chips and Cheese. Retrieved 13 September 2024.
  87. Gong, Xun; Gong, Xiang; Yu, Leiming; Kaeli, David (March 2019). "HAWS: Accelerating GPU Wavefront Execution through Selective Out-of-order Execution". ACM Trans. Archit. Code Optim. 16 (2). Association for Computing Machinery. doi:10.1145/3291050. Retrieved 15 September 2024.
  88. Torrance, K. E.; Sparrow, E. M. (September 1967). "Theory for Off-Specular Reflection from Roughened Surfaces" (PDF). Journal of the Optical Society of America. 57 (9): 1105–1114. doi:10.1364/JOSA.57.001105 . Retrieved 4 December 2024.
  89. Warnock, John (20 May 1968), A Hidden Line Algorithm For Halftone Picture Representation (PDF), University of Utah, TR 4-5, retrieved 19 September 2024
  90. Gouraud, H. (1971). "Continuous shading of curved surfaces" (PDF). IEEE Transactions on Computers. 20 (6): 623–629. doi:10.1109/t-c.1971.223313. S2CID   123827991. Archived from the original (PDF) on 2010-07-02.
  91. "History | School of Computing". Archived from the original on 2013-12-03. Retrieved 2021-11-22.
  92. Phong, B-T (1975). "Illumination for computer generated pictures" (PDF). Communications of the ACM. 18 (6): 311–316. CiteSeerX 10.1.1.330.4718. doi:10.1145/360825.360839. S2CID 1439868. Archived from the original (PDF) on 2012-03-27.
  93. Blinn, J.F.; Newell, M.E. (1976). "Texture and reflection in computer generated images". Communications of the ACM. 19 (10): 542–546. CiteSeerX   10.1.1.87.8903 . doi:10.1145/360349.360353. S2CID   408793.
  94. Blinn, James F. (20 July 1977). "Models of light reflection for computer synthesized pictures". ACM SIGGRAPH Computer Graphics. 11 (2): 192–198. doi: 10.1145/965141.563893 via dl.acm.org.
  95. Crow, F.C. (1977). "Shadow algorithms for computer graphics" (PDF). Computer Graphics (Proceedings of SIGGRAPH 1977). Vol. 11. pp. 242–248. Archived from the original (PDF) on 2012-01-13. Retrieved 2011-07-15.
  96. Williams, L. (1978). "Casting curved shadows on curved surfaces". Computer Graphics (Proceedings of SIGGRAPH 1978). Vol. 12. pp. 270–274. CiteSeerX   10.1.1.134.8225 .
  97. Blinn, J.F. (1978). Simulation of wrinkled surfaces (PDF). Computer Graphics (Proceedings of SIGGRAPH 1978). Vol. 12. pp. 286–292. Archived (PDF) from the original on 2012-01-21.
  98. Fuchs, H.; Kedem, Z.M.; Naylor, B.F. (1980). On visible surface generation by a priori tree structures. Computer Graphics (Proceedings of SIGGRAPH 1980). Vol. 14. pp. 124–133. CiteSeerX   10.1.1.112.4406 .
  99. Whitted, T. (1980). "An improved illumination model for shaded display". Communications of the ACM. 23 (6): 343–349. CiteSeerX   10.1.1.114.7629 . doi:10.1145/358876.358882. S2CID   9524504.
  100. Cook, R.L.; Torrance, K.E. (1981). A reflectance model for computer graphics. Computer Graphics (Proceedings of SIGGRAPH 1981). Vol. 15. pp. 307–316. CiteSeerX   10.1.1.88.7796 .
  101. Williams, L. (1983). Pyramidal parametrics. Computer Graphics (Proceedings of SIGGRAPH 1983). Vol. 17. pp. 1–11. CiteSeerX   10.1.1.163.6298 .
  102. Glassner, A.S. (1984). "Space subdivision for fast ray tracing". IEEE Computer Graphics & Applications. 4 (10): 15–22. doi:10.1109/mcg.1984.6429331. S2CID   16965964.
  103. Porter, T.; Duff, T. (1984). Compositing digital images (PDF). Computer Graphics (Proceedings of SIGGRAPH 1984). Vol. 18. pp. 253–259. Archived (PDF) from the original on 2015-02-16.
  104. Cook, R.L.; Porter, T.; Carpenter, L. (1984). Distributed ray tracing (PDF). Computer Graphics (Proceedings of SIGGRAPH 1984). Vol. 18. pp. 137–145.
  105. Goral, C.; Torrance, K.E.; Greenberg, D.P.; Battaile, B. (1984). Modeling the interaction of light between diffuse surfaces. Computer Graphics (Proceedings of SIGGRAPH 1984). Vol. 18. pp. 213–222. CiteSeerX   10.1.1.112.356 .
  106. Cohen, M.F.; Greenberg, D.P. (1985). The hemi-cube: a radiosity solution for complex environments (PDF). Computer Graphics (Proceedings of SIGGRAPH 1985). Vol. 19. pp. 31–40. doi:10.1145/325165.325171. Archived from the original (PDF) on 2014-04-24. Retrieved 2020-03-25.
  107. Arvo, J. (1986). Backward ray tracing. SIGGRAPH 1986 Developments in Ray Tracing course notes. CiteSeerX   10.1.1.31.581 .
  108. Wu, Xiaolin (July 1991). An efficient antialiasing technique. Computer Graphics (Proceedings of SIGGRAPH 1991). Vol. 25. pp. 143–152. doi:10.1145/127719.122734. ISBN 978-0-89791-436-9.
  109. Wu, Xiaolin (1991). "Fast Anti-Aliased Circle Generation". In James Arvo (ed.). Graphics Gems II. San Francisco: Morgan Kaufmann. pp. 446–450. ISBN   978-0-12-064480-3.
  110. Hanrahan, P.; Salzman, D.; Aupperle, L. (1991). A rapid hierarchical radiosity algorithm. Computer Graphics (Proceedings of SIGGRAPH 1991). Vol. 25. pp. 197–206. CiteSeerX   10.1.1.93.5694 .
  111. Oren, M.; Nayar, S.K. (July 1994). "Generalization of Lambert's Reflectance Model" (Archived 2010-02-15 at the Wayback Machine). SIGGRAPH. pp. 239–246.
  112. Tumblin, J.; Rushmeier, H.E. (1993). "Tone reproduction for realistic computer generated images" (PDF). IEEE Computer Graphics & Applications. 13 (6): 42–48. doi:10.1109/38.252554. S2CID   6459836. Archived (PDF) from the original on 2011-12-08.
  113. Hanrahan, P.; Krueger, W. (1993). Reflection from layered surfaces due to subsurface scattering. Computer Graphics (Proceedings of SIGGRAPH 1993). Vol. 27. pp. 165–174. CiteSeerX   10.1.1.57.9761 .
  114. Lafortune, Eric; Willems, Yves (December 1993). "Bi-directional path tracing" (PDF). Proceedings of Third International Conference on Computational Graphics and Visualization Techniques (CompuGraphics). pp. 145–153. Archived (PDF) from the original on 21 May 2022. Retrieved 2 September 2024.
  115. Miller, Gavin (24 July 1994). "Efficient algorithms for local and global accessibility shading". Proceedings of the 21st annual conference on Computer graphics and interactive techniques - SIGGRAPH '94. ACM. pp. 319–326. doi:10.1145/192161.192244. ISBN   978-0897916677. S2CID   15271113. Archived from the original on 22 November 2021. Retrieved 7 May 2018 via dl.acm.org.
  116. Jensen, H.W.; Christensen, N.J. (1995). "Photon maps in bidirectional monte carlo ray tracing of complex objects". Computers & Graphics. 19 (2): 215–224. CiteSeerX   10.1.1.97.2724 . doi:10.1016/0097-8493(94)00145-o.
  117. Veach, Eric; Guibas, Leonidas J. (15 September 1995). "Optimally combining sampling techniques for Monte Carlo rendering". SIGGRAPH95: 22nd International ACM Conference on Computer Graphics and Interactive Techniques. pp. 419–428. doi:10.1145/218380.218498. Archived from the original on 26 July 2024. Retrieved 2 September 2024.
  118. Veach, E.; Guibas, L. (1997). Metropolis light transport. Computer Graphics (Proceedings of SIGGRAPH 1997). Vol. 16. pp. 65–76. CiteSeerX   10.1.1.88.944 .
  119. Veach, E.; Guibas, L. (1997). Metropolis light transport. Computer Graphics (Proceedings of SIGGRAPH 1997). Vol. 16. pp. 65–76. CiteSeerX   10.1.1.88.944 .
  120. Keller, A. (1997). Instant Radiosity. Computer Graphics (Proceedings of SIGGRAPH 1997). Vol. 24. pp. 49–56. CiteSeerX   10.1.1.15.240 .
  121. Sloan, P.; Kautz, J.; Snyder, J. (2002). Precomputed Radiance Transfer for Real-Time Rendering in Dynamic, Low Frequency Lighting Environments (PDF). Computer Graphics (Proceedings of SIGGRAPH 2002). Vol. 29. pp. 527–536. Archived from the original (PDF) on 2011-07-24.
  122. Matusik, W.; Pfister, H.; Brand, M.; McMillan, L. (July 2003). "A Data-Driven Reflectance Model". ACM Transactions on Graphics (TOG). 22 (3): 759–769. doi:10.1145/882262.882343 . Retrieved 23 November 2024.
  123. Loper, Matthew M; Black, Michael J (6 September 2014). "OpenDR: An approximate differentiable renderer" (PDF). Computer Vision - ECCV 2014. Vol. 8695. Zurich, Switzerland: Springer International Publishing. pp. 154–169. doi:10.1007/978-3-319-10584-0_11. Archived (PDF) from the original on 24 June 2024. Retrieved 2 September 2024.
  124. Müller, Thomas; Gross, Markus; Novák, Jan (June 2017). "Practical Path Guiding for Efficient Light-Transport Simulation". Computer Graphics Forum (Proceedings of EGSR). 36 (4). The Eurographs Association & John Wiley & Sons, Ltd.: 91–100. doi:10.1111/cgf.13227 . Retrieved 4 September 2024.
  125. Bitterli, Benedikt; Wyman, Chris; Pharr, Matt; Shirley, Peter; Lefohn, Aaron; Jarosz, Wojciech (July 2020). "Spatiotemporal reservoir resampling for real-time ray tracing with dynamic direct lighting". ACM Transactions on Graphics. 39 (4). doi:10.1145/3386569.3392481. Archived from the original on 1 March 2024. Retrieved 2 September 2024.

Further reading