Rasterisation

Raster graphic image: a fish rendered on a 20×23 grid of pixels

In computer graphics, rasterisation (British English) or rasterization (American English) is the task of taking an image described in a vector graphics format (shapes) and converting it into a raster image (a series of pixels, dots or lines, which, when displayed together, create the image which was represented via shapes). [1] [2] The rasterized image may then be displayed on a computer display, video display or printer, or stored in a bitmap file format. Rasterization may refer to the technique of drawing 3D models, or to the conversion of 2D rendering primitives, such as polygons and line segments, into a rasterized format.


Etymology

The term "rasterisation" comes from German Raster 'grid, pattern, schema',and Latin rāstrum  'scraper, rake'. [3] [4]

2D images

Line primitives

Bresenham's line algorithm is an example of an algorithm used to rasterize lines.
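
As a sketch of the idea, the following Python version (the function name and output format are illustrative, not a reference implementation) walks from one endpoint to the other using only integer arithmetic, accumulating an error term that decides when to step along each axis:

    def bresenham_line(x0, y0, x1, y1):
        # Integer pixel coordinates of the line from (x0, y0) to (x1, y1),
        # handling all octants with integer arithmetic only.
        pixels = []
        dx = abs(x1 - x0)
        dy = -abs(y1 - y0)
        sx = 1 if x0 < x1 else -1
        sy = 1 if y0 < y1 else -1
        err = dx + dy                # combined error term
        while True:
            pixels.append((x0, y0))
            e2 = 2 * err
            if e2 >= dy:             # the error allows a step along x
                if x0 == x1:
                    break
                err += dy
                x0 += sx
            if e2 <= dx:             # the error allows a step along y
                if y0 == y1:
                    break
                err += dx
                y0 += sy
        return pixels

For example, bresenham_line(0, 0, 7, 3) yields (0,0), (1,0), (2,1), (3,1), (4,2), (5,2), (6,3), (7,3), a staircase approximation of the ideal line.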

Circle primitives

Algorithms such as the midpoint circle algorithm are used to render circles onto a pixelated canvas.
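
A minimal Python sketch of the midpoint circle algorithm is shown below; it computes one octant using an integer decision variable and mirrors it to the other seven (the function name and output format are illustrative):

    def midpoint_circle(cx, cy, r):
        # Pixels of a circle of radius r centred at (cx, cy), found by
        # walking one octant and exploiting eight-way symmetry.
        pixels = set()
        x, y = r, 0
        d = 1 - r                       # decision variable for the midpoint test
        while x >= y:
            for ox, oy in ((x, y), (y, x), (-y, x), (-x, y),
                           (-x, -y), (-y, -x), (y, -x), (x, -y)):
                pixels.add((cx + ox, cy + oy))
            y += 1
            if d < 0:                   # midpoint inside the circle: keep x
                d += 2 * y + 1
            else:                       # midpoint outside: step x inwards
                x -= 1
                d += 2 * (y - x) + 1
        return pixels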

3D images

Rasterization is one of the typical techniques for rendering 3D models. Compared with other rendering techniques such as ray tracing, rasterization is extremely fast and is therefore used in most real-time 3D engines. However, rasterization is simply the process of computing the mapping from scene geometry to pixels and does not prescribe a particular way to compute the color of those pixels. The specific color of each pixel is assigned by a pixel shader (which in modern GPUs is completely programmable). Shading may take into account physical effects such as the positions of lights, approximations of those effects, or purely artistic intent.
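
The geometric half of this mapping amounts to projecting each vertex into screen space. The sketch below assumes a pinhole camera looking down the negative z-axis with a unit focal length and a simple viewport transform; the function name, the focal_length parameter and the coordinate conventions are illustrative choices rather than part of any particular API:

    def project_to_screen(x, y, z, width, height, focal_length=1.0):
        # Project a camera-space point onto a width x height pixel grid.
        # Real pipelines use 4x4 matrices and homogeneous clip space; this
        # sketch keeps only the perspective divide and viewport transform.
        if z >= 0:
            return None                            # behind the camera: not visible
        ndc_x = focal_length * x / -z              # perspective divide
        ndc_y = focal_length * y / -z
        px = (ndc_x + 1.0) * 0.5 * width           # map [-1, 1] to pixel coordinates
        py = (1.0 - (ndc_y + 1.0) * 0.5) * height  # flip y: screen y grows downward
        return px, py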

The process of rasterizing 3D models onto a 2D plane for display on a computer screen ("screen space") is often carried out by fixed-function (non-programmable) hardware within the graphics pipeline. This is because the rasterization step itself rarely needs to be modified at render time [5] and a special-purpose system allows for high efficiency.

Triangle rasterization

Rasterizing triangles using the top-left rule

Polygons are a common representation of digital 3D models. Before rasterization, individual polygons are typically broken down into triangles; therefore, a typical problem to solve in 3D rasterization is rasterization of a triangle. Properties that are usually required from triangle rasterization algorithms are that rasterizing two adjacent triangles (i.e. those that share an edge)

  1. leaves no holes (non-rasterized pixels) between the triangles, so that the rasterized area is completely filled, just as the surface of the adjacent triangles is; and
  2. no pixel is rasterized more than once, i.e. the rasterized triangles do not overlap. This guarantees that the result does not depend on the order in which the triangles are rasterized. Overdrawing pixels also wastes computing power on pixels that are later overwritten.

This leads to establishing rasterization rules to guarantee the above conditions. One set of such rules is called a top-left rule, which states that a pixel is rasterized if and only if

  1. its center lies completely inside the triangle; or
  2. its center lies exactly on an edge of the triangle (or on multiple edges, at a corner) and that edge is (or, at a corner, all of those edges are) a top or left edge.

A top edge is an edge that is exactly horizontal and lies above other edges, and a left edge is a non-horizontal edge that is on the left side of the triangle.
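
A minimal (and deliberately unoptimized) Python sketch of this rule is given below. It classifies each edge as top, left or neither directly from the definitions above, evaluates an edge function at each pixel center inside the triangle's bounding box, and admits a pixel only when it is strictly inside or lies on a top or left edge. Screen coordinates with y increasing downward and pixel centers at (x + 0.5, y + 0.5) are assumed; production rasterizers use incremental integer edge functions instead of this per-pixel arithmetic:

    def rasterize_triangle(v0, v1, v2):
        # Pixels covered by the triangle (v0, v1, v2) under the top-left rule.

        def edge_fn(a, b, px, py):
            # Signed area of the parallelogram spanned by (b - a) and (p - a).
            return (b[0] - a[0]) * (py - a[1]) - (b[1] - a[1]) * (px - a[0])

        def is_top_or_left(a, b, c):
            if a[1] == b[1]:                    # horizontal edge
                return c[1] > a[1]              # top edge: the triangle lies below it
            # Left edge: the opposite vertex (and hence the interior)
            # lies to the right of the line through a and b.
            x_at_cy = a[0] + (b[0] - a[0]) * (c[1] - a[1]) / (b[1] - a[1])
            return c[0] > x_at_cy

        edges = [(v0, v1, v2), (v1, v2, v0), (v2, v0, v1)]
        xs = [v[0] for v in (v0, v1, v2)]
        ys = [v[1] for v in (v0, v1, v2)]
        covered = []
        for y in range(int(min(ys)), int(max(ys)) + 1):
            for x in range(int(min(xs)), int(max(xs)) + 1):
                px, py = x + 0.5, y + 0.5       # sample at the pixel center
                inside = True
                for a, b, c in edges:
                    w = edge_fn(a, b, px, py)
                    if edge_fn(a, b, c[0], c[1]) < 0:
                        w = -w                  # orient so the interior is positive
                    if w < 0 or (w == 0 and not is_top_or_left(a, b, c)):
                        inside = False
                        break
                if inside:
                    covered.append((x, y))
        return covered

Because a pixel center lying on an edge shared by two adjacent triangles sees that edge as top or left in exactly one of them, the rule fills such pixels exactly once, satisfying the two conditions above.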

This rule is implemented, for example, by Direct3D [6] and many OpenGL implementations (even though the OpenGL specification does not define it and only requires a consistent rule [7]).

Quality

Pixel precision (left) vs sub-pixel precision (middle) vs anti-aliasing (right)

The quality of rasterization can be improved by antialiasing, which creates "smooth" edges. Sub-pixel precision is a method that takes into account positions on a finer scale than the pixel grid and can produce different results even if the endpoints of a primitive fall on the same pixel coordinates, producing smoother animation of movement. Simple or older hardware, such as the PlayStation 1, lacked sub-pixel precision in 3D rasterization. [8]
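
One simple way to realize antialiasing, sketched below, is supersampling: the fraction of each pixel covered by a primitive is estimated from a grid of sub-pixel sample points and then used to blend the primitive's color with the background. The inside predicate, the function name and the default 4×4 sample grid are illustrative assumptions, not a description of any particular hardware:

    def coverage_by_supersampling(inside, x, y, n=4):
        # Estimate how much of pixel (x, y) is covered by a primitive,
        # using an n x n grid of sample points within the pixel.
        # inside(px, py) is any point-in-primitive test, e.g. the
        # edge-function test from the triangle example above.
        hits = 0
        for j in range(n):
            for i in range(n):
                px = x + (i + 0.5) / n          # sub-pixel sample positions
                py = y + (j + 0.5) / n
                if inside(px, py):
                    hits += 1
        return hits / (n * n)                   # 0.0 = uncovered, 1.0 = fully covered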


Related Research Articles

<span class="mw-page-title-main">Rendering (computer graphics)</span> Process of generating an image from a model

Rendering or image synthesis is the process of generating a photorealistic or non-photorealistic image from a 2D or 3D model by means of a computer program. The resulting image is referred to as the render. Multiple models can be defined in a scene file containing objects in a strictly defined language or data structure. The scene file contains geometry, viewpoint, texture, lighting, and shading information describing the virtual scene. The data contained in the scene file is then passed to a rendering program to be processed and output to a digital image or raster graphics image file. The term "rendering" is analogous to the concept of an artist's impression of a scene. The term "rendering" is also used to describe the process of calculating effects in a video editing program to produce the final video output.

<span class="mw-page-title-main">Raster graphics</span> Matrix-based data structure

In computer graphics and digital photography, a raster graphic represents a two-dimensional picture as a rectangular matrix or grid of square pixels, viewable via a computer display, paper, or other display medium. A raster is technically characterized by the width and height of the image in pixels and by the number of bits per pixel. Raster images are stored in image files with varying dissemination, production, generation, and acquisition formats.

<span class="mw-page-title-main">Vector graphics</span> Computer graphics images defined by points, lines and curves

Vector graphics are a form of computer graphics in which visual images are created directly from geometric shapes defined on a Cartesian plane, such as points, lines, curves and polygons. The associated mechanisms may include vector display and printing hardware, vector data models and file formats, as well as the software based on these data models. Vector graphics is an alternative to raster or bitmap graphics, with each having advantages and disadvantages in specific situations.

<span class="mw-page-title-main">Scanline rendering</span> 3D computer graphics image rendering method

Scanline rendering is an algorithm for visible surface determination, in 3D computer graphics, that works on a row-by-row basis rather than a polygon-by-polygon or pixel-by-pixel basis. All of the polygons to be rendered are first sorted by the top y coordinate at which they first appear, then each row or scan line of the image is computed using the intersection of a scanline with the polygons on the front of the sorted list, while the sorted list is updated to discard no-longer-visible polygons as the active scan line is advanced down the picture.

Direct3D is a graphics application programming interface (API) for Microsoft Windows. Part of DirectX, Direct3D is used to render three-dimensional graphics in applications where performance is important, such as games. Direct3D uses hardware acceleration if it is available on the graphics card, allowing for hardware acceleration of the entire 3D rendering pipeline or even only partial acceleration. Direct3D exposes the advanced graphics capabilities of 3D graphics hardware, including Z-buffering, W-buffering, stencil buffering, spatial anti-aliasing, alpha blending, color blending, mipmapping, texture blending, clipping, culling, atmospheric effects, perspective-correct texture mapping, programmable HLSL shaders and effects. Integration with other DirectX technologies enables Direct3D to deliver such features as video mapping, hardware 3D rendering in 2D overlay planes, and even sprites, providing the use of 2D and 3D graphics in interactive media titles.

<span class="mw-page-title-main">Texture mapping</span> Method of defining surface detail on a computer-generated graphic or 3D model

Texture mapping is a method for mapping a texture on a computer-generated graphic. Texture here can be high frequency detail, surface texture, or color.


<span class="mw-page-title-main">Voxel</span> Element representing a value on a grid in three dimensional space

In 3D computer graphics, a voxel represents a value on a regular grid in three-dimensional space. As with pixels in a 2D bitmap, voxels themselves do not typically have their position explicitly encoded with their values. Instead, rendering systems infer the position of a voxel based upon its position relative to other voxels.

<span class="mw-page-title-main">Ray casting</span> Methodological basis for 3D CAD/CAM solid modeling and image rendering

Ray casting is the methodological basis for 3D CAD/CAM solid modeling and image rendering. It is essentially the same as ray tracing for computer graphics where virtual light rays are "cast" or "traced" on their path from the focal point of a camera through each pixel in the camera sensor to determine what is visible along the ray in the 3D scene. The term "Ray Casting" was introduced by Scott Roth while at the General Motors Research Labs from 1978–1980. His paper, "Ray Casting for Modeling Solids", describes modeled solid objects by combining primitive solids, such as blocks and cylinders, using the set operators union (+), intersection (&), and difference (-). The general idea of using these binary operators for solid modeling is largely due to Voelcker and Requicha's geometric modelling group at the University of Rochester. See solid modeling for a broad overview of solid modeling methods. This figure on the right shows a U-Joint modeled from cylinders and blocks in a binary tree using Roth's ray casting system in 1979.

<span class="mw-page-title-main">Hidden-surface determination</span> Visibility in 3D computer graphics

In 3D computer graphics, hidden-surface determination is the process of identifying what surfaces and parts of surfaces can be seen from a particular viewing angle. A hidden-surface determination algorithm is a solution to the visibility problem, which was one of the first major problems in the field of 3D computer graphics. The process of hidden-surface determination is sometimes called hiding, and such an algorithm is sometimes called a hider. When referring to line rendering it is known as hidden-line removal. Hidden-surface determination is necessary to render a scene correctly, so that one may not view features hidden behind the model itself, allowing only the naturally viewable portion of the graphic to be visible.

Geometric manipulation of modelling primitives, such as that performed by a geometry pipeline, is the first stage in computer graphics systems which perform image generation based on geometric models. While geometry pipelines were originally implemented in software, they have become highly amenable to hardware implementation, particularly since the advent of very-large-scale integration (VLSI) in the early 1980s. A device called the Geometry Engine developed by Jim Clark and Marc Hannah at Stanford University in about 1981 was the watershed for what has since become an increasingly commoditized function in contemporary image-synthetic raster display systems.

<span class="mw-page-title-main">Shader</span> Type of program in a graphical processing unit (GPU)

In computer graphics, a shader is a computer program that calculates the appropriate levels of light, darkness, and color during the rendering of a 3D scene—a process known as shading. Shaders have evolved to perform a variety of specialized functions in computer graphics special effects and video post-processing, as well as general-purpose computing on graphics processing units.

The computer graphics pipeline, also known as the rendering pipeline or graphics pipeline, is a framework within computer graphics that outlines the necessary procedures for transforming a three-dimensional (3D) scene into a two-dimensional (2D) representation on a screen. Once a 3D model is generated, whether it's for a video game or any other form of 3D computer animation, the graphics pipeline converts the model into a visually perceivable format on the computer display. Due to the dependence on specific software, hardware configurations, and desired display attributes, a universally applicable graphics pipeline does not exist. Nevertheless, graphics application programming interfaces (APIs), such as Direct3D and OpenGL, were developed to standardize common procedures and oversee the graphics pipeline of a given hardware accelerator. These APIs provide an abstraction layer over the underlying hardware, relieving programmers from the need to write code explicitly targeting various graphics hardware accelerators like AMD, Intel, Nvidia, and others.

<span class="mw-page-title-main">Real-time computer graphics</span> Sub-field of computer graphics

Real-time computer graphics or real-time rendering is the sub-field of computer graphics focused on producing and analyzing images in real time. The term can refer to anything from rendering an application's graphical user interface (GUI) to real-time image analysis, but is most often used in reference to interactive 3D computer graphics, typically using a graphics processing unit (GPU). One example of this concept is a video game that rapidly renders changing 3D environments to produce an illusion of motion.

Clipping, in the context of computer graphics, is a method to selectively enable or disable rendering operations within a defined region of interest. Mathematically, clipping can be described using the terminology of constructive geometry. A rendering algorithm only draws pixels in the intersection between the clip region and the scene model. Lines and surfaces outside the view volume are removed.

<span class="mw-page-title-main">3D computer graphics</span> Graphics that use a three-dimensional representation of geometric data

3D computer graphics, sometimes called CGI, 3-D-CGI or three-dimensional computer graphics, are graphics that use a three-dimensional representation of geometric data that is stored in the computer for the purposes of performing calculations and rendering digital images, usually 2D images but sometimes 3D images. The resulting images may be stored for viewing later or displayed in real time.

<span class="mw-page-title-main">Computer graphics</span> Graphics created using computers

Computer graphics deals with generating images and art with the aid of computers. Today, computer graphics is a core technology in digital photography, film, video games, digital art, cell phone and computer displays, and many specialized applications. A great deal of specialized hardware and software has been developed, with the displays of most devices being driven by computer graphics hardware. It is a vast and recently developed area of computer science. The phrase was coined in 1960 by computer graphics researchers Verne Hudson and William Fetter of Boeing. It is often abbreviated as CG, or typically in the context of film as computer generated imagery (CGI). The non-artistic aspects of computer graphics are the subject of computer science research.

<span class="mw-page-title-main">Tessellation (computer graphics)</span> Computer graphics terminology

In computer graphics, tessellation is the dividing of datasets of polygons presenting objects in a scene into suitable structures for rendering. Especially for real-time rendering, data is tessellated into triangles, for example in OpenGL 4.0 and Direct3D 11.

This is a glossary of terms relating to computer graphics.

References

  1. Michael F. Worboys (30 October 1995). GIS: A Computer Science Perspective. CRC Press. pp. 232–. ISBN 978-0-7484-0065-2.
  2. Kang-Tsung Chang (27 August 2007). Programming ArcObjects with VBA: A Task-Oriented Approach, Second Edition. CRC Press. pp. 91–. ISBN 978-1-4200-0918-7.
  3. Harper, Douglas. "raster". Online Etymology Dictionary.
  4. rastrum. Charlton T. Lewis and Charles Short. A Latin Dictionary on Perseus Project.
  5. "Rasterization: a Practical Implementation". www.scratchapixel.com. Retrieved 2023-10-06.
  6. "Rasterization Rules (Direct3D 9)". Microsoft Docs. Retrieved 19 April 2020.
  7. OpenGL 4.6 (PDF). p. 478.
  8. "PlayStation rasterization issues". Libretro. 4 October 2016. Retrieved 19 April 2020.