Fragment processing is a term in computer graphics referring to the collection of operations applied to the fragments generated by the rasterization stage of the rendering pipeline.
During the rendering of computer graphics, the rasterization step takes a primitive, described by its vertex coordinates with associated color and texture information, and converts it into a set of fragments. These fragments then undergo a series of processing steps, such as texture mapping, the scissor test, the alpha test, the stencil test, the depth test, and blending. These steps are collectively referred to as fragment processing.[1][2][3]
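As a concrete illustration, the sketch below configures several of these fragment-processing stages through the OpenGL C API. It is a minimal sketch, assuming a valid OpenGL context is already current (and, for the legacy alpha test, a compatibility profile); the scissor dimensions and test values are arbitrary.

    /* Configure common fragment-processing stages in OpenGL.
       Assumes a current OpenGL context; header path varies by platform. */
    #include <GL/gl.h>

    void configure_fragment_processing(void)
    {
        /* Scissor test: discard fragments outside a 256x256 region. */
        glEnable(GL_SCISSOR_TEST);
        glScissor(0, 0, 256, 256);

        /* Alpha test (legacy fixed-function): discard fragments with alpha <= 0.5. */
        glEnable(GL_ALPHA_TEST);
        glAlphaFunc(GL_GREATER, 0.5f);

        /* Stencil test: pass only where the stencil buffer equals 1. */
        glEnable(GL_STENCIL_TEST);
        glStencilFunc(GL_EQUAL, 1, 0xFF);

        /* Depth test: keep a fragment only if it is nearer than the stored depth. */
        glEnable(GL_DEPTH_TEST);
        glDepthFunc(GL_LESS);

        /* Blending: combine the fragment with the framebuffer using its alpha. */
        glEnable(GL_BLEND);
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    }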
Rendering or image synthesis is the process of generating a photorealistic or non-photorealistic image from a 2D or 3D model by means of a computer program. The resulting image is referred to as the render. Multiple models can be defined in a scene file containing objects in a strictly defined language or data structure. The scene file contains geometry, viewpoint, texture, lighting, and shading information describing the virtual scene. The data contained in the scene file is then passed to a rendering program to be processed and output to a digital image or raster graphics image file. The term "rendering" is analogous to the concept of an artist's impression of a scene. The term "rendering" is also used to describe the process of calculating effects in a video editing program to produce the final video output.
OpenGL is a cross-language, cross-platform application programming interface (API) for rendering 2D and 3D vector graphics. The API is typically used to interact with a graphics processing unit (GPU), to achieve hardware-accelerated rendering.
Rasterization is the task of taking an image described in a vector graphics format (shapes) and converting it into a raster image. The rasterized image may then be displayed on a computer display, video display or printer, or stored in a bitmap file format. Rasterization may refer to the technique of drawing 3D models, or to the conversion of 2D rendering primitives, such as polygons and line segments, into a rasterized format.
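As an example of how a single 2D primitive becomes pixels, the sketch below implements Bresenham's classic line-rasterization algorithm in C. The set_pixel() function is a hypothetical framebuffer write, standing in for whatever output surface the rasterizer targets.

    #include <stdlib.h>

    extern void set_pixel(int x, int y);   /* hypothetical framebuffer write */

    /* Bresenham's algorithm: convert a line segment, given by its endpoint
       coordinates, into the discrete pixels of a raster image. */
    void draw_line(int x0, int y0, int x1, int y1)
    {
        int dx = abs(x1 - x0), sx = x0 < x1 ? 1 : -1;
        int dy = -abs(y1 - y0), sy = y0 < y1 ? 1 : -1;
        int err = dx + dy;                 /* error term tracks the ideal line */

        for (;;) {
            set_pixel(x0, y0);             /* emit one pixel of the raster image */
            if (x0 == x1 && y0 == y1) break;
            int e2 = 2 * err;
            if (e2 >= dy) { err += dy; x0 += sx; }   /* step horizontally */
            if (e2 <= dx) { err += dx; y0 += sy; }   /* step vertically */
        }
    }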
Direct3D is a graphics application programming interface (API) for Microsoft Windows. Part of DirectX, Direct3D is used to render three-dimensional graphics in applications where performance is important, such as games. Direct3D uses hardware acceleration if it is available on the graphics card, allowing for full or partial hardware acceleration of the 3D rendering pipeline. Direct3D exposes the advanced graphics capabilities of 3D graphics hardware, including Z-buffering, W-buffering, stencil buffering, spatial anti-aliasing, alpha blending, color blending, mipmapping, texture blending, clipping, culling, atmospheric effects, perspective-correct texture mapping, and programmable HLSL shaders and effects. Integration with other DirectX technologies enables Direct3D to deliver such features as video mapping, hardware 3D rendering in 2D overlay planes, and even sprites, providing the use of 2D and 3D graphics in interactive media titles.
Franklin C. (Frank) Crow is a computer scientist who has made important contributions to computer graphics, including some of the first practical spatial anti-aliasing techniques. Crow also proposed the shadow volume technique for generating geometrically accurate shadows.
In scientific visualization and computer graphics, volume rendering is a set of techniques used to display a 2D projection of a 3D discretely sampled data set, typically a 3D scalar field.
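One common family of volume-rendering techniques marches rays through the scalar field and composites samples front to back. The sketch below shows the compositing loop for a single ray, assuming the samples along the ray have already been fetched and using a deliberately trivial transfer function (the scalar value itself serves as both color and opacity).

    /* Front-to-back emission-absorption compositing along one ray. */
    float composite_ray(const float *samples, int n, float step)
    {
        float color = 0.0f, alpha = 0.0f;
        for (int i = 0; i < n && alpha < 0.99f; i++) {   /* early ray termination */
            float s = samples[i];              /* scalar value at this sample */
            float a = s * step;                /* opacity via a trivial transfer function */
            if (a > 1.0f) a = 1.0f;
            color += (1.0f - alpha) * a * s;   /* emission weighted by remaining transparency */
            alpha += (1.0f - alpha) * a;       /* accumulate opacity front to back */
        }
        return color;
    }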
Direct3D and OpenGL are competing application programming interfaces (APIs) which can be used in applications to render 2D and 3D computer graphics. As of 2005, graphics processing units (GPUs) almost always implement one version of both of these APIs. Examples include: DirectX 9 and OpenGL 2 circa 2004; DirectX 10 and OpenGL 3 circa 2008; and most recently, DirectX 11 and OpenGL 4 circa 2011. GPUs that support more recent versions of the standards are backward compatible with applications that use the older standards; for example, one can run older DirectX 9 games on a more recent DirectX 11-certified GPU.
In computer graphics, a shader is a type of computer program originally used for shading in 3D scenes. Shaders now perform a variety of specialized functions in many fields of computer graphics special effects, carry out video post-processing unrelated to shading, or even perform computation unrelated to graphics entirely.
OpenGL for Embedded Systems (OpenGL ES) is a subset of the OpenGL computer graphics rendering application programming interface (API) for rendering 2D and 3D computer graphics such as those used by video games, typically hardware-accelerated using a graphics processing unit (GPU). It is designed for embedded systems like smartphones, tablet computers, video game consoles and PDAs. OpenGL ES has been described as the "most widely deployed 3D graphics API in history".
Java OpenGL (JOGL) is a wrapper library that allows OpenGL to be used in the Java programming language. It was originally developed by Kenneth Bradley Russell and Christopher John Kline, and was further developed by the Sun Microsystems Game Technology Group. Since 2010, it has been an independent open-source project under a BSD license. It is the reference implementation for Java Bindings for OpenGL (JSR-231).
Real-time computer graphics or real-time rendering is the sub-field of computer graphics focused on producing and analyzing images in real time. The term can refer to anything from rendering an application's graphical user interface (GUI) to real-time image analysis, but is most often used in reference to interactive 3D computer graphics, typically using a graphics processing unit (GPU). One example of this concept is a video game that rapidly renders changing 3D environments to produce an illusion of motion.
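The structure common to such applications is a loop that updates and redraws the scene fast enough (commonly 30 to 60 times per second) to sustain the illusion of motion. The sketch below shows that loop in C; every function in it is a hypothetical placeholder for the application's windowing, timing, and drawing code.

    extern int    window_open(void);        /* hypothetical: window still open?  */
    extern double elapsed_seconds(void);    /* hypothetical: monotonic clock     */
    extern void   update_scene(double dt);  /* hypothetical: advance animation   */
    extern void   draw_scene(void);         /* hypothetical: render one frame    */
    extern void   swap_buffers(void);       /* hypothetical: present the frame   */

    void run(void)
    {
        double last = elapsed_seconds();
        while (window_open()) {
            double now = elapsed_seconds();
            update_scene(now - last);       /* advance the scene by the frame time */
            last = now;
            draw_scene();                   /* re-render the changed 3D environment */
            swap_buffers();                 /* present the finished image */
        }
    }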
A shading language is a graphics programming language adapted to programming shader effects. Such languages usually provide special data types, like "vector", "matrix", "color" and "normal". Due to the variety of target markets for 3D computer graphics, different shading languages have been developed.
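To make those data types concrete, the following is a tiny fragment shader in GLSL (one widely used shading language), computing a Lambertian diffuse term from a normal vector and a light direction. It is held in a C string, as it might be before being handed to OpenGL's glShaderSource(); the variable names are illustrative.

    /* Minimal GLSL fragment shader showing shading-language vector types. */
    static const char *fragment_src =
        "#version 330 core\n"
        "in vec3 v_normal;              // interpolated surface normal\n"
        "uniform vec3 u_light_dir;      // unit vector toward the light\n"
        "out vec4 frag_color;\n"
        "void main() {\n"
        "    float d = max(dot(normalize(v_normal), u_light_dir), 0.0);\n"
        "    frag_color = vec4(vec3(d), 1.0);  // grey diffuse shading\n"
        "}\n";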
Software rendering is the process of generating an image from a model by means of computer software. In the context of computer graphics rendering, software rendering refers to a rendering process that does not depend on graphics hardware ASICs, such as a graphics card; the rendering takes place entirely on the CPU. Rendering everything with the (general-purpose) CPU has the main advantage that it is not restricted to the (limited) capabilities of graphics hardware, but the disadvantage is that far more silicon is needed to obtain the same speed.
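The essence of software rendering is that the frame exists only as ordinary memory written by the CPU. The complete C program below renders a color gradient into an in-memory pixel buffer and saves it as a PPM image file, with no graphics hardware involved; the image size and output filename are arbitrary choices.

    #include <stdio.h>

    int main(void)
    {
        enum { W = 256, H = 256 };
        static unsigned char fb[H][W][3];        /* RGB framebuffer in main memory */

        for (int y = 0; y < H; y++)
            for (int x = 0; x < W; x++) {
                fb[y][x][0] = (unsigned char)x;  /* red varies horizontally */
                fb[y][x][1] = (unsigned char)y;  /* green varies vertically */
                fb[y][x][2] = 128;               /* constant blue */
            }

        FILE *f = fopen("out.ppm", "wb");
        if (!f) return 1;
        fprintf(f, "P6\n%d %d\n255\n", W, H);    /* binary PPM header */
        fwrite(fb, 1, sizeof fb, f);
        fclose(f);
        return 0;
    }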
Clipping, in the context of computer graphics, is a method to selectively enable or disable rendering operations within a defined region of interest. Mathematically, clipping can be described using the terminology of constructive geometry. A rendering algorithm only draws pixels in the intersection between the clip region and the scene model. Lines and surfaces outside the view volume are removed.
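The sketch below shows that intersection rule for the simplest case, an axis-aligned rectangular clip region: pixels are generated only where the primitive's extent and the clip region overlap. The rect type and set_pixel() are illustrative, not taken from any particular API.

    extern void set_pixel(int x, int y);           /* hypothetical framebuffer write */

    typedef struct { int x0, y0, x1, y1; } rect;   /* inclusive pixel bounds */

    void fill_clipped(rect shape, rect clip)
    {
        /* Intersect the two regions; an empty intersection draws nothing. */
        int x0 = shape.x0 > clip.x0 ? shape.x0 : clip.x0;
        int y0 = shape.y0 > clip.y0 ? shape.y0 : clip.y0;
        int x1 = shape.x1 < clip.x1 ? shape.x1 : clip.x1;
        int y1 = shape.y1 < clip.y1 ? shape.y1 : clip.y1;

        for (int y = y0; y <= y1; y++)
            for (int x = x0; x <= x1; x++)
                set_pixel(x, y);   /* only pixels inside both regions are touched */
    }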
Kurt Akeley is an American computer graphics engineer.
Multisample anti-aliasing (MSAA) is a type of spatial anti-aliasing, a technique used in computer graphics to reduce the jagged, stair-stepped edges ("jaggies") produced by rasterization.
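In practice an application requests a multisampled framebuffer when creating its rendering context. The sketch below does so with 4x MSAA using GLFW for window and context creation (one choice among many windowing libraries); note that GL_MULTISAMPLE requires an OpenGL 1.3+ header on some platforms.

    #include <GLFW/glfw3.h>

    GLFWwindow *create_msaa_window(void)
    {
        glfwInit();
        glfwWindowHint(GLFW_SAMPLES, 4);     /* request a 4-sample framebuffer */
        GLFWwindow *w = glfwCreateWindow(800, 600, "MSAA", NULL, NULL);
        glfwMakeContextCurrent(w);
        glEnable(GL_MULTISAMPLE);            /* enable multisample rasterization */
        return w;
    }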
Tiled rendering is the process of subdividing a computer graphics image by a regular grid in image space and rendering each section of the grid, or tile, separately. The advantage of this design is that it reduces the memory and bandwidth required compared to immediate-mode rendering systems that draw the entire frame at once, which has made tiled rendering systems particularly common in low-power handheld devices. Tiled rendering is sometimes known as a "sort middle" architecture, because it performs the sorting of the geometry in the middle of the graphics pipeline instead of near the end.
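The control flow of a tiled renderer can be sketched as a pair of loops over the tile grid, as below; render_tile() is a hypothetical per-tile renderer that draws only the geometry sorted into its tile, so at most one tile's worth of framebuffer memory is live at a time.

    extern void render_tile(int x, int y, int w, int h);   /* hypothetical */

    void render_frame_tiled(int width, int height, int tile)
    {
        for (int ty = 0; ty < height; ty += tile)
            for (int tx = 0; tx < width; tx += tile) {
                /* Clamp the last row/column of tiles to the image edge. */
                int w = (tx + tile <= width)  ? tile : width  - tx;
                int h = (ty + tile <= height) ? tile : height - ty;
                render_tile(tx, ty, w, h);
            }
    }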
Computer graphics deals with generating images with the aid of computers. Today, computer graphics is a core technology in digital photography, film, video games, cell phone and computer displays, and many specialized applications. A great deal of specialized hardware and software has been developed, with the displays of most devices being driven by computer graphics hardware. It is a vast and recently developed area of computer science. The phrase was coined in 1960 by computer graphics researchers Verne Hudson and William Fetter of Boeing. It is often abbreviated as CG, or typically in the context of film as computer generated imagery (CGI). The non-artistic aspects of computer graphics are the subject of computer science research.
InfiniteReality refers to a 3D graphics hardware architecture, and to a family of graphics systems implementing that architecture, developed and manufactured by Silicon Graphics from 1996 to 2005. InfiniteReality was positioned as Silicon Graphics' high-end visualization hardware for their MIPS/IRIX platform and was used exclusively in their Onyx family of visualization systems, which are sometimes referred to as "graphics supercomputers" or "visualization supercomputers". It was marketed to and used by large organizations, such as companies and universities, involved in computer simulation, digital content creation, engineering, and research.
This is a glossary of terms relating to computer graphics.