In 3D computer graphics, Potentially Visible Sets are used to accelerate the rendering of 3D environments. They are a form of occlusion culling, whereby a candidate set of potentially visible polygons is pre-computed, then indexed at run-time in order to quickly obtain an estimate of the visible geometry. The term PVS is sometimes used to refer to any occlusion culling algorithm (since, in effect, this is what all occlusion algorithms compute), although in almost all the literature it is used to refer specifically to occlusion culling algorithms that pre-compute visible sets and associate these sets with regions in space. In order to make this association, the camera's view-space (the set of points from which the camera can render an image) is typically subdivided into (usually convex) regions and a PVS is computed for each region.
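As an illustration, the following is a minimal sketch (not taken from any particular engine) of how a pre-computed PVS might be stored and queried at run time; the ObjectId, RegionId and PotentiallyVisibleSets names are hypothetical.

#include <cstdint>
#include <vector>

// Hypothetical identifiers for renderable objects and for the convex regions
// into which the camera's view-space has been subdivided.
using ObjectId = std::uint32_t;
using RegionId = std::uint32_t;

struct PotentiallyVisibleSets {
    // One pre-computed visible set per convex region of view-space.
    std::vector<std::vector<ObjectId>> visibleObjectsPerRegion;

    // Run-time query: given the region containing the camera, return its PVS.
    const std::vector<ObjectId>& query(RegionId cameraRegion) const {
        return visibleObjectsPerRegion[cameraRegion];
    }
};

A renderer would first locate the region containing the camera (for example with a BSP tree, as described further below), fetch that region's visible set, and submit only those objects, usually after an additional view-frustum cull.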
The benefits of offloading visibility as a pre-process are:
The disadvantages are:
The primary problem in PVS computation then becomes: Compute the set of polygons that can be visible from anywhere inside each region of a set of polyhedral regions.
There are various classifications of PVS algorithms with respect to the type of visibility set they compute. [1] [2]
Conservative algorithms: These overestimate visibility consistently, such that no visible triangle is omitted. The net result is that no image error is possible; however, visibility may be greatly overestimated, leading to inefficient rendering (due to the rendering of invisible geometry). The focus of conservative algorithm research is on maximizing occluder fusion in order to reduce this overestimation. The list of publications on this type of algorithm is extensive – good surveys on this topic include Cohen-Or et al. [2] and Durand. [3]
Aggressive algorithms: These underestimate visibility consistently, such that no redundant (invisible) polygons exist in the PVS, although it is possible to miss a polygon that is actually visible, leading to image errors. The focus of aggressive algorithm research is on reducing this potential error. [4] [5]
Approximate algorithms: These can result in both redundancy and image error. [6]
Exact algorithms: These provide optimal visibility sets, where there is no image error and no redundancy. They are, however, complex to implement and typically run much slower than other PVS-based visibility algorithms. Teller computed exact visibility for a scene subdivided into cells and portals [7] (see also portal rendering).
The first general tractable 3D solutions were presented in 2002 by Nirenstein et al. [1] and Bittner. [8] Haumont et al. [9] improve on the performance of these techniques significantly. Bittner et al. [10] solve the problem for 2.5D urban scenes. Although not directly related to PVS computation, the work on the 3D Visibility Complex and 3D Visibility Skeleton by Durand [3] provides an excellent theoretical background on analytic visibility.
Visibility in 3D is inherently a four-dimensional problem, since the set of lines in 3D space has four degrees of freedom. To tackle this, solutions are often formulated using Plücker coordinates, which effectively linearize the problem in a 5D projective space. Ultimately, these problems are solved with higher-dimensional constructive solid geometry.
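For reference, the standard mapping is as follows (a sketch of the usual convention, not tied to any particular paper): a directed line through points P and Q in R^3 has Plücker coordinates

\pi_{PQ} = (\mathbf{d}, \mathbf{m}) = (Q - P,\; P \times Q),

six homogeneous components, i.e. a point in 5D projective space. Two lines a = (\mathbf{d}_a, \mathbf{m}_a) and b = (\mathbf{d}_b, \mathbf{m}_b) are incident (meet or are parallel) exactly when the bilinear "side" form vanishes:

\operatorname{side}(a, b) = \mathbf{d}_a \cdot \mathbf{m}_b + \mathbf{d}_b \cdot \mathbf{m}_a = 0,

which is the sense in which line-space visibility queries become linear (half-space) tests in this higher-dimensional space.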
Some interesting secondary problems include:
Rendering or image synthesis is the process of generating a photorealistic or non-photorealistic image from a 2D or 3D model by means of a computer program. The resulting image is referred to as the render. Multiple models can be defined in a scene file containing objects in a strictly defined language or data structure. The scene file contains geometry, viewpoint, texture, lighting, and shading information describing the virtual scene. The data contained in the scene file is then passed to a rendering program to be processed and output to a digital image or raster graphics image file. The term "rendering" is analogous to the concept of an artist's impression of a scene. The term "rendering" is also used to describe the process of calculating effects in a video editing program to produce the final video output.
Global illumination (GI), or indirect illumination, is a group of algorithms used in 3D computer graphics that are meant to add more realistic lighting to 3D scenes. Such algorithms take into account not only the light that comes directly from a light source, but also subsequent cases in which light rays from the same source are reflected by other surfaces in the scene, whether reflective or not.
The painter’s algorithm is an algorithm for visible surface determination in 3D computer graphics that works on a polygon-by-polygon basis rather than the pixel-by-pixel, row-by-row, or area-by-area basis of other hidden-surface removal algorithms. The painter’s algorithm creates images by sorting the polygons within the image by their depth and placing each polygon in order from the farthest to the closest object.
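A minimal sketch of the back-to-front ordering at the heart of the painter's algorithm, assuming each polygon carries a single representative depth value and that the caller supplies a drawPolygon routine (both assumptions, made only for illustration):

#include <algorithm>
#include <functional>
#include <vector>

struct Polygon {
    float depth;  // representative distance from the viewer (simplifying assumption)
    // vertex data omitted
};

// Sort polygons farthest-first, then draw them in that order so that nearer
// polygons are painted over farther ones.
void painterDraw(std::vector<Polygon>& polygons,
                 const std::function<void(const Polygon&)>& drawPolygon) {
    std::sort(polygons.begin(), polygons.end(),
              [](const Polygon& a, const Polygon& b) { return a.depth > b.depth; });
    for (const Polygon& p : polygons)
        drawPolygon(p);
}

A single depth per polygon is not always sufficient (overlapping or cyclically ordered polygons may need to be split), which is one reason depth buffering is more common in practice.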
In computer science, binary space partitioning (BSP) is a method for space partitioning which recursively subdivides a Euclidean space into two convex sets by using hyperplanes as partitions. This process of subdividing gives rise to a representation of objects within the space in the form of a tree data structure known as a BSP tree.
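A small sketch of how such a tree can be used: each internal node stores a splitting hyperplane, each leaf corresponds to a convex region, and a point (for example the camera position) is located by walking down the tree. The node layout and field names below are assumptions made for illustration.

#include <memory>

struct Vec3  { float x, y, z; };
struct Plane { Vec3 n; float d; };    // signed distance of point p is dot(n, p) + d

struct BspNode {
    Plane split;                       // partitioning hyperplane (internal nodes)
    std::unique_ptr<BspNode> front;    // half-space with signed distance >= 0
    std::unique_ptr<BspNode> back;     // half-space with signed distance < 0
    int leafId = -1;                   // valid only at leaves (convex regions)
};

// Descend the tree to find the convex leaf region containing a point,
// assuming every internal node has both children.
int locateLeaf(const BspNode* node, const Vec3& p) {
    while (node->front || node->back) {
        float dist = node->split.n.x * p.x + node->split.n.y * p.y
                   + node->split.n.z * p.z + node->split.d;
        node = (dist >= 0.0f) ? node->front.get() : node->back.get();
    }
    return node->leafId;
}

Building the tree (choosing splitting planes and splitting straddling polygons) is the expensive pre-processing step and is omitted here.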
A depth buffer, also known as a z-buffer, is a type of data buffer used in computer graphics to represent depth information of objects in 3D space from a particular perspective. Depth buffers are an aid to rendering a scene, ensuring that the correct polygons properly occlude other polygons. Z-buffering was first described in 1974 by Wolfgang Straßer in his PhD thesis on fast algorithms for rendering occluded objects. A related solution to determining overlapping polygons is the painter's algorithm, which can handle non-opaque scene elements, though at the cost of efficiency and with a risk of incorrect results.
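A minimal sketch of the per-pixel depth test at the core of z-buffering, assuming the colour and depth buffers are stored as flat arrays and that smaller depth values mean nearer (both conventions vary between APIs):

#include <cstddef>
#include <cstdint>
#include <vector>

struct DepthBufferedTarget {
    int width = 0, height = 0;
    std::vector<float>         depth;  // one depth value per pixel, initialised to "far"
    std::vector<std::uint32_t> color;  // packed RGBA per pixel

    // Write a fragment only if it is nearer than what is already stored,
    // so that nearer polygons correctly occlude farther ones.
    void writeFragment(int x, int y, float z, std::uint32_t rgba) {
        const std::size_t i = static_cast<std::size_t>(y) * width + x;
        if (z < depth[i]) {            // depth test
            depth[i] = z;              // depth write
            color[i] = rgba;
        }
    }
};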
Shadow volume is a technique used in 3D computer graphics to add shadows to a rendered scene. Shadow volumes were first proposed by Frank Crow in 1977 as the geometry describing the 3D shape of the region occluded from a light source. A shadow volume divides the virtual world in two: areas that are in shadow and areas that are not.
In 3D computer graphics, hidden-surface determination is the process of identifying what surfaces and parts of surfaces can be seen from a particular viewing angle. A hidden-surface determination algorithm is a solution to the visibility problem, which was one of the first major problems in the field of 3D computer graphics. The process of hidden-surface determination is sometimes called hiding, and such an algorithm is sometimes called a hider. When referring to line rendering it is known as hidden-line removal. Hidden-surface determination is necessary to render a scene correctly, so that one may not view features hidden behind the model itself, allowing only the naturally viewable portion of the graphic to be visible.
In 3D computer graphics, solid objects are usually modeled by polyhedra. A face of a polyhedron is a planar polygon bounded by straight line segments, called edges. Curved surfaces are usually approximated by a polygon mesh. Computer programs for line drawings of opaque objects must be able to decide which edges or which parts of the edges are hidden by an object itself or by other objects, so that those edges can be clipped during rendering. This problem is known as hidden-line removal.
In scientific visualization and computer graphics, volume rendering is a set of techniques used to display a 2D projection of a 3D discretely sampled data set, typically a 3D scalar field.
In computer graphics, level of detail (LOD) refers to the complexity of a 3D model representation. LOD can be decreased as the model moves away from the viewer or according to other metrics such as object importance, viewpoint-relative speed or position. LOD techniques increase the efficiency of rendering by decreasing the workload on graphics pipeline stages, usually vertex transformations. The reduced visual quality of the model is often unnoticed because of the small effect on object appearance when distant or moving fast.
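A small sketch of distance-based LOD selection, assuming a model stores progressively simpler meshes together with ascending distance thresholds (the structure and names are illustrative, not from a particular engine):

#include <cmath>
#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };

struct LodModel {
    // meshHandles[0] is the most detailed mesh; thresholds[i] is the camera
    // distance beyond which the simpler mesh at level i + 1 is used instead.
    std::vector<int>   meshHandles;   // hypothetical handles to mesh resources
    std::vector<float> thresholds;    // ascending, size == meshHandles.size() - 1
};

int selectLod(const LodModel& m, const Vec3& objectPos, const Vec3& cameraPos) {
    const float dx = objectPos.x - cameraPos.x;
    const float dy = objectPos.y - cameraPos.y;
    const float dz = objectPos.z - cameraPos.z;
    const float dist = std::sqrt(dx * dx + dy * dy + dz * dz);
    std::size_t level = 0;
    while (level < m.thresholds.size() && dist > m.thresholds[level])
        ++level;                       // switch to a coarser mesh as distance grows
    return m.meshHandles[level];
}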
In 3D computer graphics, modeling, and animation, ambient occlusion is a shading and rendering technique used to calculate how exposed each point in a scene is to ambient lighting. For example, the interior of a tube is typically more occluded than the exposed outer surfaces, and becomes darker the deeper inside the tube one goes.
Clipping, in the context of computer graphics, is a method to selectively enable or disable rendering operations within a defined region of interest. Mathematically, clipping can be described using the terminology of constructive geometry. A rendering algorithm only draws pixels in the intersection between the clip region and the scene model. Lines and surfaces outside the view volume are removed.
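As a simple illustration of drawing only in the intersection of the clip region and the scene, here is a sketch of a 2D scissor-style test (full pipelines clip lines and polygons against the view volume, for example with the Sutherland–Hodgman algorithm); the ClipRect type is assumed for illustration:

struct ClipRect { int x0, y0, x1, y1; };   // axis-aligned clip region in pixels

// A rendering routine draws a pixel only if it lies inside the clip region;
// pixels outside are simply discarded.
inline bool insideClip(const ClipRect& r, int x, int y) {
    return x >= r.x0 && x < r.x1 && y >= r.y0 && y < r.y1;
}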
In computer-generated imagery and real-time 3D computer graphics, portal rendering is an algorithm for visibility determination. For example, consider a 3D computer game environment, which may contain many polygons, only a few of which may be visible on screen at a given time. By determining which polygons are currently not visible, and not rendering those objects, significant performance improvements can be achieved.
Computer graphics lighting is the collection of techniques used to simulate light in computer graphics scenes. While lighting techniques offer flexibility in the level of detail and functionality available, they also operate at different levels of computational demand and complexity. Graphics artists can choose from a variety of light sources, models, shading techniques, and effects to suit the needs of each application.
Umbra is a graphics software technology company founded in 2007 in Helsinki, Finland. Umbra specializes in occlusion culling and visibility solution technology and provides middleware for video games running on Windows, Linux, iOS, PlayStation 4, Xbox One, PlayStation 3, Xbox 360, Wii U, handheld consoles, and other platforms. In 2021, Amazon acquired Umbra.
Order-independent transparency (OIT) is a class of techniques in rasterisational computer graphics for rendering transparency in a 3D scene, which do not require rendering geometry in sorted order for alpha compositing.
Computer graphics is a sub-field of computer science which studies methods for digitally synthesizing and manipulating visual content. Although the term often refers to the study of three-dimensional computer graphics, it also encompasses two-dimensional graphics and image processing.
This is a glossary of terms relating to computer graphics.
Bedrich Benes is a computer scientist and a researcher in computer graphics.
The GigaMesh Software Framework is a free and open-source software framework for the display, editing and visualization of 3D data, typically acquired with structured light or structure from motion.