Irregular Z-buffer


The irregular Z-buffer is an algorithm designed to solve the visibility problem in real-time 3D computer graphics. It is related to the classical Z-buffer in that it maintains a depth value for each image sample and uses these depths to determine which geometric elements of a scene are visible. The key difference is that the irregular Z-buffer allows arbitrary placement of image samples in the image plane, whereas the classical Z-buffer requires the samples to be arranged in a regular grid.


These depth samples are explicitly stored in a two-dimensional spatial data structure. During rasterization, triangles are projected onto the image plane as usual, and the data structure is queried to determine which samples overlap each projected triangle. Finally, for each overlapping sample, the standard Z-compare and (conditional) frame buffer update are performed.
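As a minimal sketch of this per-sample state and of the final Z-compare step, in C++ (the struct and field names below are illustrative, not taken from any published implementation):

    #include <limits>

    // One irregularly placed image sample. Unlike a classical Z-buffer entry,
    // it carries its own (x, y) position, because that position cannot be
    // inferred from an array index.
    struct IrregularSample {
        float x, y;                                       // arbitrary position in the image plane
        float depth = std::numeric_limits<float>::max();  // nearest depth seen so far
        int   payload = -1;                               // e.g. id of the triangle that won the depth test
    };

    // Standard Z-compare and conditional update, applied to one sample that has
    // been found to overlap a projected triangle, with depth z interpolated at
    // the sample's position.
    inline void depthTest(IrregularSample& s, float z, int triangleId) {
        if (z < s.depth) {
            s.depth   = z;
            s.payload = triangleId;
        }
    }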

Implementation

The classical rasterization algorithm projects each polygon onto the image plane and determines which sample points from a regularly spaced set lie inside the projected polygon. Because the locations of these samples (i.e. pixels) are implicit, this determination can be made by testing the polygon's edges against the implicit grid of sample points. If, however, the sample points are irregularly spaced and their locations cannot be computed from a formula, this approach does not work. The irregular Z-buffer solves the problem by storing sample locations explicitly in a two-dimensional spatial data structure and later querying this structure to determine which samples lie within a projected triangle. This latter step is referred to as "irregular rasterization".
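A minimal sketch of this irregular rasterization step, written here as a brute-force loop over explicitly stored samples; a real implementation would query the spatial data structure rather than visit every sample, and the edge-function inside test used here is just one common formulation:

    #include <vector>

    struct Sample     { float x, y, depth; int id; };         // explicitly stored position and depth
    struct Triangle2D { float x[3], y[3], z[3]; int id; };     // projected vertices and their depths

    // Signed-area (edge) function: positive if p lies to the left of the
    // directed edge a -> b.
    static float edgeFn(float ax, float ay, float bx, float by, float px, float py) {
        return (bx - ax) * (py - ay) - (by - ay) * (px - ax);
    }

    // Irregular rasterization of one triangle: every stored sample is tested
    // against the triangle's edges, and samples that fall inside receive the
    // usual Z-compare and conditional update.
    void rasterizeIrregular(const Triangle2D& t, std::vector<Sample>& samples) {
        float area = edgeFn(t.x[0], t.y[0], t.x[1], t.y[1], t.x[2], t.y[2]);
        if (area == 0.0f) return;                              // degenerate triangle
        for (Sample& s : samples) {
            float w0 = edgeFn(t.x[1], t.y[1], t.x[2], t.y[2], s.x, s.y);
            float w1 = edgeFn(t.x[2], t.y[2], t.x[0], t.y[0], s.x, s.y);
            float w2 = edgeFn(t.x[0], t.y[0], t.x[1], t.y[1], s.x, s.y);
            bool inside = (w0 >= 0 && w1 >= 0 && w2 >= 0) ||
                          (w0 <= 0 && w1 <= 0 && w2 <= 0);     // accept either winding
            if (!inside) continue;
            float z = (w0 * t.z[0] + w1 * t.z[1] + w2 * t.z[2]) / area;  // barycentric depth
            if (z < s.depth) { s.depth = z; s.id = t.id; }     // Z-compare and conditional update
        }
    }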

Although the particular data structure may vary from implementation to implementation, the two approaches that have been studied are the kd-tree and a grid of linked lists. A balanced kd-tree implementation has the advantage that it guarantees O(log N) access; its chief disadvantages are that parallel construction of the kd-tree may be difficult and that traversal requires expensive branch instructions. The grid of lists has the advantage that it can be implemented more effectively on GPU hardware, which is designed primarily for the classical Z-buffer.
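A rough sketch of the grid-of-lists variant: each cell stores the head of a singly linked list of the samples falling inside it, so a projected triangle only needs to visit the cells touched by its screen-space bounding box. The structure-of-arrays layout and the names used here are one plausible choice, not a description of any specific implementation:

    #include <vector>
    #include <cmath>
    #include <algorithm>

    // Grid of linked lists over the image plane. Each cell holds the index of
    // the first sample that falls into it; 'next' chains the remaining samples.
    struct SampleGrid {
        int width, height;                 // grid resolution in cells
        std::vector<int> head;             // head[cellIndex] = first sample index, or -1
        std::vector<int> next;             // next[sampleIndex] = following sample, or -1
        std::vector<float> sx, sy, depth;  // sample positions and current depths

        SampleGrid(int w, int h) : width(w), height(h), head(w * h, -1) {}

        // Insert a sample at (x, y), with x in [0, width) and y in [0, height).
        void insert(float x, float y) {
            int cell  = (int)y * width + (int)x;
            int index = (int)sx.size();
            sx.push_back(x); sy.push_back(y);
            depth.push_back(1e30f);
            next.push_back(head[cell]);    // push onto the front of the cell's list
            head[cell] = index;
        }

        // Visit every sample stored in the cells overlapped by an axis-aligned
        // bounding box; the caller performs the exact triangle inside test.
        template <typename Fn>
        void forEachInBox(float x0, float y0, float x1, float y1, Fn&& fn) {
            int cx0 = std::max(0, (int)std::floor(x0));
            int cy0 = std::max(0, (int)std::floor(y0));
            int cx1 = std::min(width  - 1, (int)std::floor(x1));
            int cy1 = std::min(height - 1, (int)std::floor(y1));
            for (int cy = cy0; cy <= cy1; ++cy)
                for (int cx = cx0; cx <= cx1; ++cx)
                    for (int i = head[cy * width + cx]; i != -1; i = next[i])
                        fn(i);             // sample index; its position is (sx[i], sy[i])
        }
    };

Insertion and traversal are sequential here for clarity; GPU implementations typically build such per-cell lists in parallel using atomic operations instead.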

With the advent of CUDA, the programmability of graphics hardware has improved dramatically. The master's thesis "Fast Triangle Rasterization using irregular Z-buffer on CUDA" (see External Links) provides a complete description of an irregular Z-buffer based shadow mapping implementation on CUDA. The rendering system runs entirely on the GPU and is capable of generating alias-free shadows at a throughput of dozens of millions of triangles per second.

Applications

The irregular Z-buffer can be used for any application which requires visibility calculations at arbitrary locations in the image plane. It has been shown to be particularly well suited to shadow mapping, an image-space algorithm for rendering hard shadows. In addition to shadow rendering, potential applications include adaptive anti-aliasing, jittered sampling, and environment mapping.
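For the shadow-mapping case, the irregular samples are typically the eye-visible points projected into the light's image plane; the scene is then rasterized once from the light using the irregular Z-buffer, and a sample is in shadow when some occluder lies closer to the light than the receiver itself. The sketch below illustrates that sample-generation and comparison step under those assumptions (the matrix layout, names, and bias term are illustrative):

    #include <array>
    #include <vector>

    // Minimal 4x4 matrix and vector types for the projection into light space.
    struct Vec4 { float x, y, z, w; };
    using Mat4 = std::array<float, 16>;   // row-major

    static Vec4 mul(const Mat4& m, const Vec4& v) {
        return { m[0]*v.x  + m[1]*v.y  + m[2]*v.z  + m[3]*v.w,
                 m[4]*v.x  + m[5]*v.y  + m[6]*v.z  + m[7]*v.w,
                 m[8]*v.x  + m[9]*v.y  + m[10]*v.z + m[11]*v.w,
                 m[12]*v.x + m[13]*v.y + m[14]*v.z + m[15]*v.w };
    }

    struct LightSample {
        float x, y;          // position in the light's image plane
        float receiverDepth; // depth of the eye-visible point, as seen from the light
        float nearestDepth;  // filled in by irregular rasterization from the light
    };

    // Project the world-space positions of the eye-visible points into the
    // light's image plane. These positions are irregular by construction,
    // which is exactly the case the irregular Z-buffer handles.
    std::vector<LightSample> buildLightSamples(const std::vector<Vec4>& visiblePoints,
                                               const Mat4& lightViewProj) {
        std::vector<LightSample> out;
        out.reserve(visiblePoints.size());
        for (const Vec4& p : visiblePoints) {
            Vec4 c = mul(lightViewProj, p);
            if (c.w <= 0.0f) continue;                    // behind the light
            out.push_back({ c.x / c.w, c.y / c.w, c.z / c.w, 1e30f });
        }
        return out;
    }

    // After rasterizing the scene from the light into 'nearestDepth', a sample
    // is in shadow if some occluder is closer to the light than the receiver.
    inline bool inShadow(const LightSample& s, float bias = 1e-3f) {
        return s.nearestDepth < s.receiverDepth - bias;
    }

Whether a depth bias is needed, and how large it should be, depends on the depth precision of the particular implementation.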

See also

Related Research Articles

Rendering (computer graphics): process of generating an image from a model

Rendering or image synthesis is the automatic process of generating a photorealistic or non-photorealistic image from a 2D or 3D model by means of computer programs. The result of displaying such a model can also be called a render. A scene file contains objects in a strictly defined language or data structure, holding geometry, viewpoint, texture, lighting, and shading information as a description of the virtual scene. The data contained in the scene file is then passed to a rendering program to be processed and output to a digital image or raster graphics image file. The term "rendering" may be by analogy with an "artist's rendering" of a scene.

Scanline rendering: rendering method

Scanline rendering is an algorithm for visible surface determination in 3D computer graphics that works on a row-by-row basis rather than a polygon-by-polygon or pixel-by-pixel basis. All of the polygons to be rendered are first sorted by the top y coordinate at which they first appear; each row or scan line of the image is then computed using the intersection of the scan line with the polygons at the front of the sorted list, while the sorted list is updated to discard no-longer-visible polygons as the active scan line advances down the picture.

Rasterisation: process of describing an image in terms of a pixel or voxel grid (named after the Latin word for rake, as the picture/volumetric elements are usually arranged in lines like those produced by a rake)

Rasterisation is the task of taking an image described in a vector graphics format (shapes) and converting it into a raster image. The rasterised image may then be displayed on a computer display, video display or printer, or stored in a bitmap file format. Rasterisation may refer to either the conversion of models into raster files, or the conversion of 2D rendering primitives such as polygons or line segments into a rasterized format.

In digital signal processing, spatial anti-aliasing is a technique for minimizing the distortion artifacts known as aliasing when representing a high-resolution image at a lower resolution. Anti-aliasing is used in digital photography, computer graphics, digital audio, and many other applications.

Texture mapping

Texture mapping is a method for defining high frequency detail, surface texture, or color information on a computer-generated graphic or 3D model. The original technique was pioneered by Edwin Catmull in 1974.

Shadow volume

Shadow volumes are a technique used in 3D computer graphics to add shadows to a rendered scene. They were first proposed by Frank Crow in 1977 as the geometry describing the 3D shape of the region occluded from a light source. A shadow volume divides the virtual world in two: areas that are in shadow and areas that are not.

In 3D computer graphics, hidden-surface determination is the process used to determine which surfaces and parts of surfaces are not visible from a certain viewpoint. A hidden-surface determination algorithm is a solution to the visibility problem, which was one of the first major problems in the field of 3D computer graphics. The process of hidden-surface determination is sometimes called hiding, and such an algorithm is sometimes called a hider. The analogue for line rendering is hidden-line removal. Hidden-surface determination is necessary to render an image correctly, so that features hidden behind the model itself are not drawn and only the naturally visible portion of the graphic is shown.

Volume rendering

In scientific visualization and computer graphics, volume rendering is a set of techniques used to display a 2D projection of a 3D discretely sampled data set, typically a 3D scalar field.

In computer graphics, texture filtering or texture smoothing is the method used to determine the texture color for a texture-mapped pixel, using the colors of nearby texels. There are two main categories of texture filtering: magnification filtering and minification filtering. Depending on the situation, texture filtering is either a type of reconstruction filter where sparse data is interpolated to fill gaps (magnification), or a type of anti-aliasing (AA), where texture samples exist at a higher frequency than required for the sample frequency needed for texture fill (minification). Put simply, filtering describes how a texture is applied at many different shapes, sizes, angles and scales. Depending on the chosen filter algorithm, the result will show varying degrees of blurriness, detail, spatial aliasing, temporal aliasing and blocking. Depending on the circumstances, filtering can be performed in software or in hardware for real-time or GPU-accelerated rendering, or in a mixture of both. For most common interactive graphical applications, modern texture filtering is performed by dedicated hardware which optimizes memory access through memory caching and pre-fetch and implements a selection of algorithms available to the user and developer.

Shader: subroutine that may run on a graphics processing unit and is used to do shading, special effects, post processing, or general purpose computation

In computer graphics, a shader is a type of computer program originally used for shading in 3D scenes, but which now performs a variety of specialized functions in various fields within the category of computer graphics special effects, or else does video post-processing unrelated to shading, or even performs functions unrelated to graphics at all.

Real-time computer graphics

Real-time computer graphics or real-time rendering is the sub-field of computer graphics focused on producing and analyzing images in real time. The term can refer to anything from rendering an application's graphical user interface (GUI) to real-time image analysis, but is most often used in reference to interactive 3D computer graphics, typically using a graphics processing unit (GPU). One example of this concept is a video game that rapidly renders changing 3D environments to produce an illusion of motion.

Clipping, in the context of computer graphics, is a method to selectively enable or disable rendering operations within a defined region of interest. Mathematically, clipping can be described using the terminology of constructive geometry. A rendering algorithm only draws pixels in the intersection between the clip region and the scene model. Lines and surfaces outside the view volume are removed.

In computer graphics, per-pixel lighting refers to any technique for lighting an image or scene that calculates illumination for each pixel on a rendered image. This is in contrast to other popular methods of lighting such as vertex lighting, which calculates illumination at each vertex of a 3D model and then interpolates the resulting values over the model's faces to calculate the final per-pixel color values.

Shadow mapping

Shadow mapping or shadowing projection is a process by which shadows are added to 3D computer graphics. This concept was introduced by Lance Williams in 1978, in a paper entitled "Casting curved shadows on curved surfaces." Since then, it has been used in both pre-rendered and real-time scenes in many console and PC games.

Ray-tracing hardware is special-purpose computer hardware designed for accelerating ray tracing calculations.

Multisample anti-aliasing (MSAA) is a type of spatial anti-aliasing, a technique used in computer graphics to improve image quality.

Tiled rendering is the process of subdividing a computer graphics image by a regular grid in optical space and rendering each section of the grid, or tile, separately. The advantage to this design is that the amount of memory and bandwidth is reduced compared to immediate mode rendering systems that draw the entire frame at once. This has made tile rendering systems particularly common for low-power handheld device use. Tiled rendering is sometimes known as a "sort middle" architecture, because it performs the sorting of the geometry in the middle of the graphics pipeline instead of near the end.

OptiX: ray tracing API

Nvidia OptiX is a ray tracing API. The computations are offloaded to the GPUs through either the low-level or the high-level API introduced with CUDA. CUDA is only available for Nvidia's graphics products. Nvidia OptiX is part of Nvidia GameWorks. OptiX is a high-level, or "to-the-algorithm" API, meaning that it is designed to encapsulate the entire algorithm of which ray tracing is a part, not just the ray tracing itself. This is meant to allow the OptiX engine to execute the larger algorithm with great flexibility without application-side changes.

This is a glossary of terms relating to computer graphics.

Nvidia RTX is a graphics rendering development platform created by Nvidia, primarily aimed at enabling real time ray tracing. Historically, ray tracing had been reserved to non-real time applications, with video games having to rely on rasterization for their rendering. RTX facilitates a new development in computer graphics of generating interactive images that react to lighting, shadows, reflections. RTX runs on Nvidia Volta- and Turing-based GPUs, specifically utilizing the Tensor cores on the architectures for ray tracing acceleration.