Vertex pipeline

The function of the vertex pipeline in any GPU is to take geometry data (usually supplied as vector points), transform it as needed with either fixed-function processing (earlier DirectX) or a vertex shader program (later DirectX), and project all of the 3D data points in a scene onto a 2D plane for display on a computer monitor.
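A minimal sketch of that projection step is shown below, in plain C++ rather than any GPU shading language: a vertex is multiplied by a combined model-view-projection (MVP) matrix, divided by its w component (the perspective divide), and scaled to pixel coordinates. The matrix layout, viewport mapping, and example values are illustrative assumptions, not any particular API's conventions.

    #include <array>
    #include <cstdio>

    struct Vec4 { float x, y, z, w; };
    using Mat4 = std::array<std::array<float, 4>, 4>;   // row-major 4x4 matrix

    // Multiply a vertex by a 4x4 matrix.
    Vec4 transform(const Mat4& m, const Vec4& v) {
        return {
            m[0][0]*v.x + m[0][1]*v.y + m[0][2]*v.z + m[0][3]*v.w,
            m[1][0]*v.x + m[1][1]*v.y + m[1][2]*v.z + m[1][3]*v.w,
            m[2][0]*v.x + m[2][1]*v.y + m[2][2]*v.z + m[2][3]*v.w,
            m[3][0]*v.x + m[3][1]*v.y + m[3][2]*v.z + m[3][3]*v.w
        };
    }

    // Project a model-space vertex to pixel coordinates on a width x height screen.
    void projectVertex(const Mat4& mvp, const Vec4& vertex, int width, int height) {
        Vec4 clip = transform(mvp, vertex);                      // clip space
        float ndcX = clip.x / clip.w;                            // perspective divide ->
        float ndcY = clip.y / clip.w;                            // normalized device coordinates
        float screenX = (ndcX * 0.5f + 0.5f) * width;            // map [-1, 1] to pixels
        float screenY = (1.0f - (ndcY * 0.5f + 0.5f)) * height;  // screen y grows downward
        std::printf("screen position: (%.1f, %.1f)\n", screenX, screenY);
    }

    int main() {
        Mat4 identity = {{{1, 0, 0, 0}, {0, 1, 0, 0}, {0, 0, 1, 0}, {0, 0, 0, 1}}};
        projectVertex(identity, {0.25f, -0.5f, 0.0f, 1.0f}, 640, 480);
    }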

Graphics processing unit: specialized electronic circuit; graphics accelerator

A graphics processing unit (GPU) is a specialized electronic circuit designed to rapidly manipulate and alter memory to accelerate the creation of images in a frame buffer intended for output to a display device. GPUs are used in embedded systems, mobile phones, personal computers, workstations, and game consoles. Modern GPUs are very efficient at manipulating computer graphics and image processing. Their highly parallel structure makes them more efficient than general-purpose central processing units (CPUs) for algorithms that process large blocks of data in parallel. In a personal computer, a GPU can be present on a video card or embedded on the motherboard. In certain CPUs, they are embedded on the CPU die.

DirectX: collection of multimedia-related APIs on Microsoft platforms

Microsoft DirectX is a collection of application programming interfaces (APIs) for handling tasks related to multimedia, especially game programming and video, on Microsoft platforms. Originally, the names of these APIs all began with Direct, such as Direct3D, DirectDraw, DirectMusic, DirectPlay, DirectSound, and so forth. The name DirectX was coined as a shorthand term for all of these APIs and soon became the name of the collection. When Microsoft later set out to develop a gaming console, the X was used as the basis of the name Xbox to indicate that the console was based on DirectX technology. The X initial has been carried forward in the naming of APIs designed for the Xbox such as XInput and the Cross-platform Audio Creation Tool (XACT), while the DirectX pattern has been continued for Windows APIs such as Direct2D and DirectWrite.

A computer is a machine that can be instructed to carry out sequences of arithmetic or logical operations automatically via computer programming. Modern computers have the ability to follow generalized sets of operations, called programs. These programs enable computers to perform an extremely wide range of tasks. A "complete" computer including the hardware, the operating system, and peripheral equipment required and used for "full" operation can be referred to as a computer system. This term may also be used for a group of computers that are connected and work together, in particular a computer network or computer cluster.

Unneeded data can be kept from going through the rendering pipeline to cut out extraneous work; the main techniques are view-volume clipping and backface culling. After the vertex engine is done working with the geometry, all of the calculated 2D data is sent to the pixel engine for further processing such as texturing and fragment shading.
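The sketch below shows one such rejection test, a simple backface-culling check written in plain C++: a triangle whose face normal points away from the camera cannot be seen on a closed, opaque mesh and can be skipped before any further processing. The counter-clockwise winding convention and the vector types are illustrative assumptions.

    struct Vec3 { float x, y, z; };

    Vec3 sub(Vec3 a, Vec3 b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
    Vec3 cross(Vec3 a, Vec3 b) { return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x}; }
    float dot(Vec3 a, Vec3 b)  { return a.x*b.x + a.y*b.y + a.z*b.z; }

    // Returns true when the triangle (v0, v1, v2), wound counter-clockwise,
    // faces away from a camera at position eye and can therefore be culled.
    bool isBackFacing(Vec3 v0, Vec3 v1, Vec3 v2, Vec3 eye) {
        Vec3 normal = cross(sub(v1, v0), sub(v2, v0));   // face normal from two edges
        Vec3 toEye  = sub(eye, v0);                      // vector from the face to the camera
        return dot(normal, toEye) <= 0.0f;               // facing away from (or edge-on to) the viewer
    }

View-volume clipping works in a similar spirit on clip-space coordinates: a triangle whose vertices all lie outside the same plane of the view frustum can be discarded outright, while triangles straddling a plane are clipped against it.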

As of DirectX 9.0c, the vertex processor is able to do the following by programming the vertex processing under the DirectX API:

Displacement mapping

Displacement mapping is a computer graphics technique that contrasts with bump mapping, normal mapping, and parallax mapping: it uses a (possibly procedural) texture or height map to displace the actual geometric positions of points over the textured surface, often along the local surface normal, according to the value the texture function evaluates to at each point on the surface. It gives surfaces a great sense of depth and detail, permitting in particular self-occlusion, self-shadowing, and silhouettes; on the other hand, it is the most costly of this class of techniques owing to the large amount of additional geometry.
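A minimal sketch of that displacement step follows, again in plain C++: each vertex is pushed along its (assumed unit-length) normal by whatever value a height map returns at the vertex's texture coordinate. The mesh layout and the height-map sampler are placeholder assumptions, standing in for a real texture or procedural function.

    #include <vector>

    struct Vec3   { float x, y, z; };
    struct Vertex { Vec3 position; Vec3 normal; float u, v; };

    // Placeholder height function: returns a displacement amount in [0, 1] for (u, v).
    // A real implementation would sample a texture or evaluate a procedural pattern.
    float sampleHeightMap(float u, float v) {
        return 0.5f * (u + v);
    }

    // Displace every vertex of the mesh along its normal, scaled by 'scale'.
    void displaceVertices(std::vector<Vertex>& mesh, float scale) {
        for (Vertex& vert : mesh) {
            float h = sampleHeightMap(vert.u, vert.v);
            vert.position.x += vert.normal.x * h * scale;
            vert.position.y += vert.normal.y * h * scale;
            vert.position.z += vert.normal.z * h * scale;
        }
    }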

Related Research Articles

Direct3D is a graphics application programming interface (API) for Microsoft Windows. Part of DirectX, Direct3D is used to render three-dimensional graphics in applications where performance is important, such as games. Direct3D uses hardware acceleration if it is available on the graphics card, allowing for full or partial hardware acceleration of the 3D rendering pipeline. Direct3D exposes the advanced graphics capabilities of 3D graphics hardware, including Z-buffering, W-buffering, stencil buffering, spatial anti-aliasing, alpha blending, color blending, mipmapping, texture blending, clipping, culling, atmospheric effects, perspective-correct texture mapping, and programmable HLSL shaders and effects. Integration with other DirectX technologies enables Direct3D to deliver such features as video mapping, hardware 3D rendering in 2D overlay planes, and even sprites, providing the use of 2D and 3D graphics in interactive media titles.

Cg is a high-level shading language developed by Nvidia in close collaboration with Microsoft for programming vertex and pixel shaders. Cg is based on the C programming language and, although they share the same syntax, some features of C were modified and new data types were added to make Cg more suitable for programming graphics processing units. The language is only suitable for GPU programming and is not a general-purpose programming language. The Cg compiler outputs DirectX or OpenGL shader programs. Cg was deprecated in 2012, with no additional development or support available since.

The R420 GPU, developed by ATI Technologies, was the company's basis for its 3rd-generation DirectX 9.0/OpenGL 2.0-capable graphics cards. Used first on the Radeon X800, the R420 was produced on a 0.13 micrometer low-K photolithography process and used GDDR-3 memory. The chip was designed for AGP graphics cards.

Geometric manipulation of modelling primitives, such as that performed by a geometry pipeline, is the first stage in computer graphics systems which perform image generation based on geometric models. While geometry pipelines were originally implemented in software, they have become highly amenable to hardware implementation, particularly since the advent of very-large-scale integration (VLSI) in the early 1980s. A device called the Geometry Engine developed by Jim Clark and Marc Hannah at Stanford University in about 1981 was the watershed for what has since become an increasingly commoditized function in contemporary image-synthetic raster display systems.

Shader: subroutine that may run on a graphics processing unit and is used to do shading, special effects, post-processing, or general-purpose computation

In computer graphics, a shader is a type of computer program that was originally used for shading but now performs a variety of specialized functions in computer graphics special effects, does video post-processing unrelated to shading, or even carries out computations unrelated to graphics altogether.

Emotion Engine: PlayStation 2 CPU

The Emotion Engine is a central processing unit developed and manufactured by Sony Computer Entertainment and Toshiba for use in the PlayStation 2 video game console. It was also used in early PlayStation 3 models sold in Japan and North America to provide PlayStation 2 game support. Mass production of the Emotion Engine began in 1999 and ended in late 2012 with the discontinuation of the PlayStation 2.

General-purpose computing on graphics processing units is the use of a graphics processing unit (GPU), which typically handles computation only for computer graphics, to perform computation in applications traditionally handled by the central processing unit (CPU). The use of multiple video cards in one computer, or large numbers of graphics chips, further parallelizes the already parallel nature of graphics processing. In addition, even a single GPU-CPU framework provides advantages that multiple CPUs on their own do not offer due to the specialization in each chip.

In computer graphics, a computer graphics pipeline, rendering pipeline, or simply graphics pipeline is a conceptual model that describes what steps a graphics system needs to perform to render a 3D scene to a 2D screen. Once a 3D model has been created, for instance in a video game or any other 3D computer animation, the graphics pipeline is the process of turning that 3D model into what the computer displays. Because the steps required for this operation depend on the software and hardware used and the desired display characteristics, there is no universal graphics pipeline suitable for all cases. However, graphics application programming interfaces (APIs) such as Direct3D and OpenGL were created to unify similar steps and to control the graphics pipeline of a given hardware accelerator. These APIs abstract the underlying hardware and spare the programmer from writing code that manipulates the graphics hardware accelerators directly.

A shading language is a graphics programming language adapted to programming shader effects. Such language forms usually consist of special data types, like "vector", "matrix", "color" and "normal". Due to the variety of target markets for 3D computer graphics, different shading languages have been developed.

Truevision3D

Truevision3D is a commercial computer software 3D engine first created by Sylvain Dupont in 1999.

High-Level Shading Language: shading language

The High-Level Shader Language or High-Level Shading Language (HLSL) is a proprietary shading language developed by Microsoft for the Direct3D 9 API to augment the shader assembly language, and went on to become the required shading language for the unified shader model of Direct3D 10 and higher.

RSX Reality Synthesizer

The RSX 'Reality Synthesizer' is a proprietary graphics processing unit (GPU) codeveloped by Nvidia and Sony for the PlayStation 3 game console. It is a GPU based on the Nvidia 7800GTX graphics processor and, according to Nvidia, is a G70/G71 hybrid architecture with some modifications. The RSX has separate vertex and pixel shader pipelines. The GPU makes use of 256 MB GDDR3 RAM clocked at 650 MHz with an effective transmission rate of 1.4 GHz and up to 224 MB of the 3.2 GHz XDR main memory via the CPU. Although it carries the majority of the graphics processing, the Cell Broadband Engine, the console's CPU, is also used complementarily for some graphics-related computational loads of the console.

A texture mapping unit (TMU) is a component in modern graphics processing units (GPUs). Historically it was a separate physical processor. A TMU is able to rotate, resize, and distort a bitmap image, to be placed onto an arbitrary plane of a given 3D model as a texture. This process is called texture mapping. In modern graphics cards it is implemented as a discrete stage in a graphics pipeline, whereas when first introduced it was implemented as a separate processor, e.g. as seen on the Voodoo2 graphics card.
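The sketch below shows, in plain C++ for clarity, the kind of lookup a TMU performs in hardware: a texture coordinate computed for a point on the 3D model is wrapped into the bitmap and the corresponding texel is fetched. The wrap addressing mode, nearest-neighbor filtering, and data types are illustrative assumptions; real TMUs additionally apply bilinear, trilinear, or anisotropic filtering.

    #include <cmath>
    #include <cstddef>
    #include <cstdint>
    #include <vector>

    struct Bitmap {
        int width, height;
        std::vector<std::uint32_t> texels;   // row-major RGBA texels
    };

    // Wrap a texture coordinate into [0, 1) so the bitmap repeats across the surface.
    float wrapCoord(float t) { return t - std::floor(t); }

    // Fetch the texel nearest to texture coordinate (u, v).
    std::uint32_t sampleNearest(const Bitmap& tex, float u, float v) {
        int x = static_cast<int>(wrapCoord(u) * tex.width);
        int y = static_cast<int>(wrapCoord(v) * tex.height);
        if (x >= tex.width)  x = tex.width - 1;    // guard against u or v landing exactly on 1.0
        if (y >= tex.height) y = tex.height - 1;
        return tex.texels[static_cast<std::size_t>(y) * tex.width + x];
    }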

Hollywood (graphics chip): the GPU used in Nintendo's Wii

Hollywood is the name of the graphics processing unit (GPU) used in Nintendo's Wii video game console. It was designed by ATI, and is manufactured using the same 90 nm CMOS process as the "Broadway" processor. Very few official details were released to the public by Nintendo, ATI, or IBM. The Hollywood GPU is reportedly based on the GameCube's "Flipper" GPU and is clocked 50% higher at 243 MHz, though none of the clock rates were ever confirmed by Nintendo, IBM, or ATI.

3D computer graphics: graphics that use a three-dimensional representation of geometric data

3D computer graphics, or three-dimensional computer graphics, are graphics that use a three-dimensional representation of geometric data that is stored in the computer for the purposes of performing calculations and rendering 2D images. Such images may be stored for viewing later or displayed in real time.

Unified shader model

In the field of 3D computer graphics, the Unified Shader Model refers to a form of shader hardware in a graphical processing unit (GPU) where all of the shader stages in the rendering pipeline have the same capabilities. They can all read textures and buffers, and they use instruction sets that are almost identical.

ARB assembly language is a low-level shading language, which can be characterized as an assembly language. It was created by the OpenGL Architecture Review Board (ARB) to standardize GPU instructions controlling the hardware graphics pipeline.

InfiniteReality refers to a 3D graphics hardware architecture and a family of graphics systems implementing that architecture, developed and manufactured by Silicon Graphics from 1996 to 2005. The InfiniteReality was positioned as Silicon Graphics' high-end visualization hardware for their MIPS/IRIX platform and was used exclusively in their Onyx family of visualization systems, which are sometimes referred to as "graphics supercomputers" or "visualization supercomputers". The InfiniteReality was marketed to and used by large organizations such as companies and universities that are involved in computer simulation, digital content creation, engineering, and research.

This is a glossary of terms relating to computer graphics.