ARB assembly language is a low-level shading language that takes the form of an assembly language. It was created by the OpenGL Architecture Review Board (ARB) to standardize GPU instructions controlling the hardware graphics pipeline.
Texas Instruments created the first programmable graphics processor in 1985: the TMS34010, which allowed developers to load and execute code on the processor to control pixel output on a video display. This was followed by the TMS34020 and TMS34082 in 1989, providing programmable 3D graphics output.
Nvidia released its first video card, the NV1, in 1995; it supported quadratic texture mapping. This was followed by the Riva 128 (NV3) in 1997, providing the first hardware acceleration for Direct3D.
Various video card vendors released their own accelerated boards, each with their own instruction set for GPU operations. The OpenGL Architecture Review Board (ARB) was formed in 1992, in part to establish standards for the GPU industry.
The ARB and Nvidia established a number of OpenGL extensions to standardize GPU programming.[1] This culminated with the ARB's 2002 release of the ARB_vertex_program and ARB_fragment_program extensions.
These two extensions provided an industry standard for an assembly language that controlled the GPU pipeline for 3D vertex and interpolated pixel properties, respectively.
Subsequent high-level shading languages sometimes compile to this ARB standard. While 3D developers are now more likely to use a C-like, high-level shading language for GPU programming, ARB assembly has the advantage of being supported on a wide range of hardware.
Note, however, that some features, such as loops and conditionals, are not available in ARB assembly; using them requires adopting either the NV_gpu_program4 extension or the GLSL shading language.
All major graphics card manufacturers have supported ARB assembly language for years, beginning with the Nvidia GeForce FX series, the AMD R300-based cards (Radeon 9500 series and higher), and the Intel GMA 900.[4] However, standard ARB assembly language is only at the level of Pixel Shader 2.0 and predates GLSL, so it offers very few features. Nvidia has made proprietary extensions to the ARB assembly languages that combine the fast compile speed of ARB assembly with modern OpenGL 3.x features, introduced with the GeForce 8 series. Most non-Nvidia OpenGL implementations, however, do not provide these proprietary extensions[5] and offer no other way to access all shader features directly in assembly, which forces the use of GLSL even for machine-generated shaders, where assembly would be more appropriate.
The ARB Vertex Program extension provides APIs to load ARBvp1.0 assembly instructions, enable selected programs, and set various GPU parameters.
Vertex programs are used to modify vertex properties, such as position, normals and texture coordinates, which are then passed on to the next pipeline stage: typically a fragment program or, on later hardware, a geometry shader.
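As a rough sketch of that API, the following C fragment loads a trivial pass-through program and enables it. The function and variable names are only illustrative, and an OpenGL context plus an extension loader such as GLEW are assumed.

#include <GL/glew.h>      /* supplies the ARB_vertex_program entry points */
#include <string.h>

/* Illustrative pass-through program text; the article's full transform example appears further below. */
static const char *vpSource =
    "!!ARBvp1.0\n"
    "MOV result.position, vertex.position;\n"   /* position left untransformed */
    "MOV result.color, vertex.color;\n"
    "END\n";

static GLuint loadVertexProgram(void)
{
    GLuint vp;
    glGenProgramsARB(1, &vp);                            /* reserve a program name      */
    glBindProgramARB(GL_VERTEX_PROGRAM_ARB, vp);         /* make it the current program */
    glProgramStringARB(GL_VERTEX_PROGRAM_ARB,            /* hand the assembly text over */
                       GL_PROGRAM_FORMAT_ASCII_ARB,
                       (GLsizei)strlen(vpSource), vpSource);
    glEnable(GL_VERTEX_PROGRAM_ARB);                     /* route vertices through it   */

    /* program.local[0] inside a program is set from the API like this: */
    glProgramLocalParameter4fARB(GL_VERTEX_PROGRAM_ARB, 0, 1.0f, 0.0f, 0.0f, 1.0f);
    return vp;
}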
The ARB Fragment Program extension provides APIs to load ARBfp1.0 assembly instructions, enable selected programs, and set various GPU parameters.
OpenGL fragments are interpolated pixel definitions. The GPU's vertex processor calculates all the pixels covered by a set of vertices, interpolates their position and other properties, and passes them on to its fragment processor. Fragment programs allow developers to modify these pixel properties before they are rendered to a frame buffer for display.
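Loading a fragment program follows the same pattern as the vertex-program sketch above; only the target enumerant changes (again, the helper name is hypothetical):

static GLuint loadFragmentProgram(const char *fpSource)   /* fpSource holds ARBfp1.0 text */
{
    GLuint fp;
    glGenProgramsARB(1, &fp);
    glBindProgramARB(GL_FRAGMENT_PROGRAM_ARB, fp);
    glProgramStringARB(GL_FRAGMENT_PROGRAM_ARB, GL_PROGRAM_FORMAT_ASCII_ARB,
                       (GLsizei)strlen(fpSource), fpSource);
    glEnable(GL_FRAGMENT_PROGRAM_ARB);
    return fp;
}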
All ARB assembly variables are float4 vectors, which may be addressed by xyzw or rgba suffixes.
Registers are scalar variables where only one element may be addressed.
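As an illustration, the following ARBfp1.0 snippet (with hypothetical temporaries) shows swizzling, write masks and scalar broadcast; xyzw and rgba name the same four components:

TEMP col, lum;
MOV col, fragment.color.bgra;    # swizzle: reorder all four components
MOV col.a, fragment.color.r;     # write mask: only the alpha component is written
MUL lum, col, col.x;             # scalar source: x is broadcast to all components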
ARB assembly supports the following suffixes for vertex attributes:
ARB assembly supports the following state matrices:
The following modifiers may be used:
ARB assembly supports the following instructions:
This is only a partial list of assembly instructions; a fuller reference can be found in the Shader Assembly Language (ARB/NV) Quick Reference Guide for OpenGL.
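As a brief illustration of some common instructions, a fragment of a vertex program (with a hypothetical parameter binding and temporaries) might compute a simple diffuse term as follows:

PARAM lightDir = program.local[0];       # light direction supplied by the application
TEMP NdotL, diffuse;
DP3 NdotL, vertex.normal, lightDir;      # dot product of normal and light direction
MAX NdotL, NdotL, 0.0;                   # clamp negative values to zero
MUL diffuse, vertex.color, NdotL.x;      # scale the vertex colour by the diffuse term
MOV result.color, diffuse;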
ARB assembly provides no instructions for flow control or branching. SGE and SLT may be used to conditionally set or clear vectors or registers.
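For example, a per-component select of the form r = (a < b) ? x : y can be written without branching (a, b, x, y, mask and r are hypothetical temporaries):

SLT mask, a, b;        # mask = 1.0 where a < b, 0.0 otherwise (per component)
SUB r, x, y;           # r = x - y
MAD r, mask, r, y;     # r = mask * (x - y) + y, i.e. x where the test passed, y elsewhere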
The ARB interfaces provide no separate compilation step for assembly language; the program text is translated by the driver when it is loaded with glProgramStringARB.
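If the driver rejects the submitted text, the failure can be queried immediately after glProgramStringARB. A sketch, with a hypothetical helper name:

#include <stdio.h>

static void reportProgramError(void)
{
    GLint errPos;
    glGetIntegerv(GL_PROGRAM_ERROR_POSITION_ARB, &errPos);        /* -1 means the text was accepted */
    if (errPos != -1) {
        const GLubyte *msg = glGetString(GL_PROGRAM_ERROR_STRING_ARB);
        fprintf(stderr, "ARB program error at offset %d: %s\n", errPos, (const char *)msg);
    }
}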
GL_NV_fragment_program_option extends the ARB_fragment_program language with additional instructions. GL_NV_fragment_program2, GL_NV_vertex_program2_option and GL_NV_vertex_program3 extend it further.
The following example is a trivial ARBvp1.0 vertex program that transforms each vertex by the modelview-projection matrix and passes colour and texture coordinates through unchanged:

!!ARBvp1.0
TEMP vertexClip;
DP4 vertexClip.x, state.matrix.mvp.row[0], vertex.position;   # row-by-row multiply by the
DP4 vertexClip.y, state.matrix.mvp.row[1], vertex.position;   # modelview-projection matrix
DP4 vertexClip.z, state.matrix.mvp.row[2], vertex.position;
DP4 vertexClip.w, state.matrix.mvp.row[3], vertex.position;
MOV result.position, vertexClip;           # clip-space position
MOV result.color, vertex.color;            # pass colour through
MOV result.texcoord[0], vertex.texcoord;   # pass texture coordinates through
END
The following ARBfp1.0 fragment program derives the output colour from the interpolated texture coordinate, producing a gradient that is brightest where the y texture coordinate equals 0.5:

!!ARBfp1.0
TEMP color;
MUL color, fragment.texcoord[0].y, 2.0;   # color = 2 * t.y
ADD color, 1.0, -color;                   # color = 1 - 2*t.y
ABS color, color;                         # color = |1 - 2*t.y|
ADD result.color, 1.0, -color;            # output = 1 - |1 - 2*t.y|
MOV result.color.a, 1.0;                  # fully opaque
END
OpenGL is a cross-language, cross-platform application programming interface (API) for rendering 2D and 3D vector graphics. The API is typically used to interact with a graphics processing unit (GPU), to achieve hardware-accelerated rendering.
The High-Level Shader Language or High-Level Shading Language (HLSL) is a proprietary shading language developed by Microsoft for the Direct3D 9 API to augment the shader assembly language, and went on to become the required shading language for the unified shader model of Direct3D 10 and higher.
Core Image is a pixel-accurate, near-realtime, non-destructive image processing technology in Mac OS X. Implemented as part of the QuartzCore framework of Mac OS X 10.4 and later, Core Image provides a plugin-based architecture for applying filters and effects within the Quartz graphics rendering layer. The framework was later added to iOS in iOS 5.
In computer graphics, a shader is a computer program that calculates the appropriate levels of light, darkness, and color during the rendering of a 3D scene—a process known as shading. Shaders have evolved to perform a variety of specialized functions in computer graphics special effects and video post-processing, as well as general-purpose computing on graphics processing units.
OpenGL for Embedded Systems is a subset of the OpenGL computer graphics rendering application programming interface (API) for rendering 2D and 3D computer graphics such as those used by video games, typically hardware-accelerated using a graphics processing unit (GPU). It is designed for embedded systems like smartphones, tablet computers, video game consoles and PDAs. OpenGL ES is the "most widely deployed 3D graphics API in history".
A shading language is a graphics programming language adapted to programming shader effects. Shading languages usually consist of special data types like "vector", "matrix", "color" and "normal".
Software rendering is the process of generating an image from a model by means of computer software. In the context of computer graphics rendering, software rendering refers to a rendering process that is not dependent upon graphics hardware ASICs, such as a graphics card. The rendering takes place entirely in the CPU. Rendering everything with the (general-purpose) CPU has the main advantage that it is not restricted to the (limited) capabilities of graphics hardware, but the disadvantage is that more transistors are needed to obtain the same speed.
OpenGL Shading Language (GLSL) is a high-level shading language with a syntax based on the C programming language. It was created by the OpenGL ARB to give developers more direct control of the graphics pipeline without having to use ARB assembly language or hardware-specific languages.
VPro, also known as Odyssey, is a computer graphics architecture for Silicon Graphics workstations. First released on the Octane2, it was subsequently used on the Fuel and Tezro workstations and the Onyx visualization systems, where it was branded InfinitePerformance.
Perl OpenGL (POGL) is a portable, compiled wrapper library that allows OpenGL to be used in the Perl programming language.
In the field of 3D computer graphics, the unified shader model refers to a form of shader hardware in a graphical processing unit (GPU) where all of the shader stages in the rendering pipeline have the same capabilities. They can all read textures and buffers, and they use instruction sets that are almost identical.
A vertex buffer object (VBO) is an OpenGL feature that provides methods for uploading vertex data to the video device for non-immediate-mode rendering. VBOs offer substantial performance gains over immediate mode rendering primarily because the data reside in video device memory rather than system memory and so it can be rendered directly by the video device. These are equivalent to vertex buffers in Direct3D.
In the field of 3D computer graphics, Multiple Render Targets, or MRT, is a feature of modern graphics processing units (GPUs) that allows the programmable rendering pipeline to render images to multiple render target textures at once. These textures can then be used as inputs to other shaders or as texture maps applied to 3D models. Introduced by OpenGL 2.0 and Direct3D 9, MRT can be invaluable to real-time 3D applications such as video games. Before the advent of MRT, a programmer would have to issue a command to the GPU to draw the 3D scene once for each render target texture, resulting in redundant vertex transformations which, in a real-time program expected to run as fast as possible, can be quite time-consuming. With MRT, a programmer creates a pixel shader that returns an output value for each render target. This pixel shader then renders to all render targets with a single draw command.
PICA200 is a graphics processing unit (GPU) designed by Digital Media Professionals Inc. (DMP), a Japanese GPU design startup company, for use in embedded devices such as vehicle systems, mobile phones, cameras, and game consoles. The PICA200 is an IP Core which can be licensed to other companies to incorporate into their SOCs. It was most notably licensed for use in the Nintendo 3DS.
TeraScale is the codename for a family of graphics processing unit microarchitectures developed by ATI Technologies/AMD and their second microarchitecture implementing the unified shader model following Xenos. TeraScale replaced the old fixed-pipeline microarchitectures and competed directly with Nvidia's first unified shader microarchitecture named Tesla.
Stage3D is an Adobe Flash Player API for rendering interactive 3D graphics with GPU-acceleration, within Flash games and applications. Flash Player or AIR applications written in ActionScript 3 may use Stage3D to render 3D graphics, and such applications run natively on Windows, Mac OS X, Linux, Apple iOS and Google Android. Stage3D is similar in purpose and design to WebGL.
Standard Portable Intermediate Representation (SPIR) is an intermediate language for parallel computing and graphics by Khronos Group. It is used in multiple execution environments, including the Vulkan graphics API and the OpenCL compute API, to represent a shader or kernel. It is also used as an interchange language for cross compilation.
This is a glossary of terms relating to computer graphics.
Cg and High-Level Shader Language (HLSL) are two names given to a high-level shading language developed by Nvidia and Microsoft for programming shaders. Cg/HLSL is based on the C programming language and although they share the same core syntax, some features of C were modified and new data types were added to make Cg/HLSL more suitable for programming graphics processing units.