Geomipmapping


Geomipmapping, or geometrical mipmapping, is a real-time, block-based terrain rendering algorithm developed by W.H. de Boer in 2000. It aims to reduce CPU processing time, which is a common bottleneck in level-of-detail approaches to terrain rendering.

Prior to geomipmapping, techniques such as quadtree rendering were used to divide the terrain into square tiles by recursive binary division, producing tiles of quadratically diminishing size. The subdivision step is typically performed on the CPU, which creates a bottleneck as geometry commands are buffered to the GPU. To reduce this CPU processing time, geomipmapping divides the terrain into grid-based tiles that are themselves regularly subdivided, rather than sending 1x1 polygon units to the GPU as quadtrees do. Typically, a fixed number of vertex buffer objects (VBOs) are stored on the GPU at different grid resolutions, such as 10x10 and 20x20, and placed at major terrain regions selectively chosen by the CPU; a sketch of this selection appears below. A vertex shader then repositions the vertices of a given VBO, entirely on the GPU. Overall, this results in a major reduction in CPU processing and in CPU-to-GPU bandwidth, as the GPU performs most of the work. Geoclipmaps and GPU raycasting are two other modern alternatives to geomipmapping for interactive terrain rendering.
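As an illustration, the following minimal sketch shows the CPU-side level selection; the tile sizes, distance thresholds, and helper names (GridMesh, selectLevel) are assumptions made for this example, not taken from de Boer's paper or any particular engine.

    // Minimal geomipmapping level-selection sketch (illustrative values).
    #include <cmath>
    #include <cstdio>

    struct Vec3 { float x, y, z; };

    // A handle to one of the pre-built grid meshes kept on the GPU,
    // e.g. a 10x10 or 20x20 vertex buffer object shared by every tile.
    struct GridMesh { int resolution; unsigned vboId; };

    static float distance(const Vec3& a, const Vec3& b) {
        float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
        return std::sqrt(dx * dx + dy * dy + dz * dz);
    }

    // Pick a mipmap level for a tile: nearer tiles get the denser grid.
    // Thresholds here are illustrative; a real renderer derives them from
    // a screen-space error bound.
    int selectLevel(const Vec3& camera, const Vec3& tileCenter, int numLevels) {
        float d = distance(camera, tileCenter);
        int level = static_cast<int>(d / 256.0f);   // one level per 256 units
        return level < numLevels ? level : numLevels - 1;
    }

    int main() {
        GridMesh levels[3] = { {40, 1}, {20, 2}, {10, 3} };  // dense to coarse
        Vec3 camera{0, 50, 0};
        // The CPU only picks a level per tile; the vertex shader then offsets
        // and height-displaces the shared grid, so little geometry crosses
        // the CPU-to-GPU bus each frame.
        for (int tx = 0; tx < 4; ++tx) {
            Vec3 center{tx * 512.0f + 256.0f, 0, 256.0f};
            int lvl = selectLevel(camera, center, 3);
            std::printf("tile %d -> %dx%d grid (VBO %u)\n",
                        tx, levels[lvl].resolution, levels[lvl].resolution,
                        levels[lvl].vboId);
        }
    }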


Related Research Articles

Scanline rendering

Scanline rendering is an algorithm for visible surface determination, in 3D computer graphics, that works on a row-by-row basis rather than a polygon-by-polygon or pixel-by-pixel basis. All of the polygons to be rendered are first sorted by the top y coordinate at which they first appear, then each row or scan line of the image is computed using the intersection of a scanline with the polygons on the front of the sorted list, while the sorted list is updated to discard no-longer-visible polygons as the active scan line is advanced down the picture.
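A much-simplified sketch of the sorted active-list idea follows, using axis-aligned rectangles in place of general polygons; the rectangle data and scanline range are invented for illustration.

    // Simplified scanline sketch: rectangles stand in for polygons, and the
    // active list tracks which ones the current scan line intersects.
    #include <algorithm>
    #include <cstddef>
    #include <cstdio>
    #include <vector>

    struct Rect { int top, bottom, left, right; char id; };

    int main() {
        std::vector<Rect> polys = { {1, 4, 0, 5, 'A'}, {3, 7, 3, 9, 'B'} };
        // Sort by the topmost scan line at which each polygon first appears.
        std::sort(polys.begin(), polys.end(),
                  [](const Rect& a, const Rect& b) { return a.top < b.top; });

        std::vector<Rect> active;
        std::size_t next = 0;
        for (int y = 0; y < 8; ++y) {                 // advance scan line down
            while (next < polys.size() && polys[next].top == y)
                active.push_back(polys[next++]);      // newly visible polygons
            active.erase(std::remove_if(active.begin(), active.end(),
                             [y](const Rect& r) { return r.bottom < y; }),
                         active.end());               // discard finished ones
            std::printf("scanline %d:", y);
            for (const Rect& r : active)
                std::printf(" %c[%d..%d]", r.id, r.left, r.right);
            std::printf("\n");
        }
    }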

Direct3D is a graphics application programming interface (API) for Microsoft Windows. Part of DirectX, Direct3D is used to render three-dimensional graphics in applications where performance is important, such as games. Direct3D uses hardware acceleration if it is available on the graphics card, allowing full or partial hardware acceleration of the 3D rendering pipeline. Direct3D exposes the advanced graphics capabilities of 3D graphics hardware, including Z-buffering, W-buffering, stencil buffering, spatial anti-aliasing, alpha blending, color blending, mipmapping, texture blending, clipping, culling, atmospheric effects, perspective-correct texture mapping, and programmable HLSL shaders and effects. Integration with other DirectX technologies enables Direct3D to deliver such features as video mapping, hardware 3D rendering in 2D overlay planes, and even sprites, providing the use of 2D and 3D graphics in interactive media titles.

Texture mapping

Texture mapping is a method for defining high frequency detail, surface texture, or color information on a computer-generated graphic or 3D model. The original technique was pioneered by Edwin Catmull in 1974.

Framebuffer

A framebuffer is a portion of random-access memory (RAM) containing a bitmap that drives a video display. It is a memory buffer containing data representing all the pixels in a complete video frame. Modern video cards contain framebuffer circuitry in their cores. This circuitry converts an in-memory bitmap into a video signal that can be displayed on a computer monitor.
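A software framebuffer can be sketched as a plain array of packed pixels; the Framebuffer type and pixel format below are illustrative assumptions, not any particular hardware layout.

    // Software framebuffer sketch: one packed RGBA pixel per screen position
    // (real framebuffers live in video RAM and are scanned out to the display).
    #include <cstdint>
    #include <cstdio>
    #include <vector>

    struct Framebuffer {
        int width, height;
        std::vector<uint32_t> pixels;   // packed 0xAARRGGBB, row-major

        Framebuffer(int w, int h) : width(w), height(h), pixels(w * h, 0) {}

        void putPixel(int x, int y, uint32_t color) {
            if (x >= 0 && x < width && y >= 0 && y < height)
                pixels[y * width + x] = color;
        }
    };

    int main() {
        Framebuffer fb(320, 240);
        fb.putPixel(10, 20, 0xFFFF0000);   // opaque red
        std::printf("pixel(10,20) = 0x%08X\n", fb.pixels[20 * 320 + 10]);
    }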

Graphics processing unit

A graphics processing unit (GPU) is a specialized electronic circuit designed to rapidly manipulate and alter memory to accelerate the creation of images in a frame buffer intended for output to a display device. GPUs are used in embedded systems, mobile phones, personal computers, workstations, and game consoles. Modern GPUs are very efficient at manipulating computer graphics and image processing. Their highly parallel structure makes them more efficient than general-purpose central processing units (CPUs) for algorithms that process large blocks of data in parallel. In a personal computer, a GPU can be present on a video card or embedded on the motherboard. In certain CPUs, they are embedded on the CPU die.

Hidden-surface determination

In 3D computer graphics, hidden-surface determination is the process of identifying what surfaces and parts of surfaces can be seen from a particular viewing angle. A hidden-surface determination algorithm is a solution to the visibility problem, which was one of the first major problems in the field of 3D computer graphics. The process of hidden-surface determination is sometimes called hiding, and such an algorithm is sometimes called a hider. When referring to line rendering it is known as hidden-line removal. Hidden-surface determination is necessary to render a scene correctly, so that features hidden behind the model itself are not drawn and only the naturally visible portions of the scene appear.
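One common solution, the depth buffer (Z-buffer), can be sketched as follows; the DepthBuffer type, surface IDs, and depth values are illustrative assumptions.

    // Depth-buffer sketch: keep the nearest depth seen so far at each pixel
    // and reject fragments that lie behind it.
    #include <cstdio>
    #include <vector>

    struct DepthBuffer {
        int width, height;
        std::vector<float> depth;
        std::vector<int> surfaceId;

        DepthBuffer(int w, int h)
            : width(w), height(h), depth(w * h, 1e30f), surfaceId(w * h, -1) {}

        // Returns true if the fragment is visible and was written.
        bool shade(int x, int y, float z, int id) {
            int i = y * width + x;
            if (z >= depth[i]) return false;   // hidden behind existing surface
            depth[i] = z;
            surfaceId[i] = id;
            return true;
        }
    };

    int main() {
        DepthBuffer db(4, 4);
        db.shade(1, 1, 5.0f, /*id=*/0);        // far surface
        bool drew = db.shade(1, 1, 2.0f, 1);   // nearer surface wins
        std::printf("near surface drawn: %s, pixel owner: %d\n",
                    drew ? "yes" : "no", db.surfaceId[1 * 4 + 1]);
    }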

Shader

In computer graphics, a shader is a type of computer program originally used for shading in 3D scenes. They now perform a variety of specialized functions in various fields within the category of computer graphics special effects, or else do video post-processing unrelated to shading, or even perform functions unrelated to graphics at all.

In computer graphics, level of detail (LOD) refers to the complexity of a 3D model representation. LOD can be decreased as the model moves away from the viewer or according to other metrics such as object importance, viewpoint-relative speed or position. LOD techniques increase the efficiency of rendering by decreasing the workload on graphics pipeline stages, usually vertex transformations. The reduced visual quality of the model is often unnoticed because of the small effect on object appearance when distant or moving fast.
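A minimal distance-based LOD selector might look like the following sketch; the mesh names and distance thresholds are invented for illustration.

    // Distance-based LOD selection sketch: swap to coarser representations
    // as the model recedes from the viewer.
    #include <cstdio>

    struct LodLevel { const char* mesh; float maxDistance; };

    const char* pickLod(const LodLevel* levels, int count, float distance) {
        for (int i = 0; i < count; ++i)
            if (distance <= levels[i].maxDistance)
                return levels[i].mesh;
        return levels[count - 1].mesh;     // beyond all thresholds: coarsest
    }

    int main() {
        LodLevel levels[] = {
            {"tree_high.mesh",  50.0f},    // full detail up close
            {"tree_med.mesh",  200.0f},
            {"tree_low.mesh", 1000.0f},    // billboard-like far version
        };
        float distances[] = {10.0f, 120.0f, 800.0f};
        for (float d : distances)
            std::printf("distance %6.1f -> %s\n", d, pickLod(levels, 3, d));
    }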

Real-time optimally adapting mesh (ROAM) is a continuous level-of-detail algorithm that optimizes terrain meshes. On modern computers, it is sometimes more effective to send a small number of unneeded polygons to the GPU than to burden the CPU with LOD calculations, which can make algorithms like geomipmapping more effective than ROAM. Graphics programmers use such algorithms to trade scene quality against performance, producing high-quality displays while maintaining real-time frame rates. ROAM is aimed largely at terrain visualization, and various elements of ROAM are difficult to integrate into a game system.

General-purpose computing on graphics processing units is the use of a graphics processing unit (GPU), which typically handles computation only for computer graphics, to perform computation in applications traditionally handled by the central processing unit (CPU). The use of multiple video cards in one computer, or large numbers of graphics chips, further parallelizes the already parallel nature of graphics processing. In addition, even a single GPU-CPU framework provides advantages that multiple CPUs on their own do not offer due to the specialization in each chip.

In computer graphics, a computer graphics pipeline, rendering pipeline or simply graphics pipeline is a conceptual model that describes the steps a graphics system needs to perform to render a 3D scene to a 2D screen. Once a 3D model has been created, for instance in a video game or any other 3D computer animation, the graphics pipeline is the process of turning that 3D model into what the computer displays. Because the steps required for this operation depend on the software and hardware used and the desired display characteristics, there is no universal graphics pipeline suitable for all cases. However, graphics application programming interfaces (APIs) such as Direct3D and OpenGL were created to unify similar steps and to control the graphics pipeline of a given hardware accelerator. These APIs abstract the underlying hardware, sparing the programmer from writing code that manipulates the graphics hardware accelerators directly; a sketch of the shared geometry steps follows below.
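The geometry half of most pipelines can be sketched as follows; the identity matrix and viewport values are placeholders rather than any particular API's convention.

    // Graphics pipeline geometry sketch: transform a vertex by a combined
    // model-view-projection matrix, divide by w, and map to screen pixels.
    #include <cstdio>

    struct Vec4 { float x, y, z, w; };

    // Column-vector convention: out = M * v, with M stored row-major.
    Vec4 transform(const float m[16], const Vec4& v) {
        return {
            m[0] * v.x + m[1] * v.y + m[2]  * v.z + m[3]  * v.w,
            m[4] * v.x + m[5] * v.y + m[6]  * v.z + m[7]  * v.w,
            m[8] * v.x + m[9] * v.y + m[10] * v.z + m[11] * v.w,
            m[12] * v.x + m[13] * v.y + m[14] * v.z + m[15] * v.w,
        };
    }

    int main() {
        // Identity MVP for brevity; a real pipeline composes model, view
        // and projection matrices here.
        float mvp[16] = {1,0,0,0, 0,1,0,0, 0,0,1,0, 0,0,0,1};
        Vec4 clip = transform(mvp, {0.5f, -0.5f, 0.0f, 1.0f});
        float ndcX = clip.x / clip.w;                   // perspective divide
        float ndcY = clip.y / clip.w;
        int screenW = 640, screenH = 480;               // viewport mapping
        int px = static_cast<int>((ndcX * 0.5f + 0.5f) * screenW);
        int py = static_cast<int>((1.0f - (ndcY * 0.5f + 0.5f)) * screenH);
        std::printf("screen pixel: (%d, %d)\n", px, py);
    }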

Reyes rendering

Reyes rendering is a computer software architecture used in 3D computer graphics to render photo-realistic images. It was developed in the mid-1980s by Loren Carpenter and Robert L. Cook at Lucasfilm's Computer Graphics Research Group, which is now Pixar. It was first used in 1982 to render images for the Genesis effect sequence in the movie Star Trek II: The Wrath of Khan. Pixar's RenderMan is one implementation of the Reyes algorithm. According to the original paper describing the algorithm, the Reyes image rendering system is "An architecture for fast high-quality rendering of complex images." Reyes was proposed as a collection of algorithms and data processing systems. However, the terms "algorithm" and "architecture" have come to be used synonymously and are often used interchangeably.

A texture mapping unit (TMU) is a component in modern graphics processing units (GPUs). Historically it was a separate physical processor. A TMU is able to rotate, resize, and distort a bitmap image, to be placed onto an arbitrary plane of a given 3D model as a texture. This process is called texture mapping. In modern graphics cards it is implemented as a discrete stage in a graphics pipeline, whereas when first introduced it was implemented as a separate processor, e.g. as seen on the Voodoo2 graphics card.

Tiled rendering is the process of subdividing a computer graphics image by a regular grid in optical space and rendering each section of the grid, or tile, separately. The advantage to this design is that the amount of memory and bandwidth is reduced compared to immediate mode rendering systems that draw the entire frame at once. This has made tile rendering systems particularly common for low-power handheld device use. Tiled rendering is sometimes known as a "sort middle" architecture, because it performs the sorting of the geometry in the middle of the graphics pipeline instead of near the end.
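The tile loop at the heart of such a system can be sketched as follows; the screen and tile dimensions are illustrative, and the binning and rasterization steps are left as comments.

    // Tiled rendering sketch: split the screen into a regular grid and
    // process each tile independently, so only a small tile buffer is
    // resident at any one time.
    #include <cstdio>

    int main() {
        const int screenW = 640, screenH = 480;
        const int tileW = 64, tileH = 64;     // on-chip tile buffer size
        int tilesX = (screenW + tileW - 1) / tileW;
        int tilesY = (screenH + tileH - 1) / tileH;
        for (int ty = 0; ty < tilesY; ++ty) {
            for (int tx = 0; tx < tilesX; ++tx) {
                // A real tiler first bins geometry into the tiles it touches
                // (the "sort middle" step), then rasterizes each bin into a
                // small on-chip buffer before writing the tile out to memory.
                int x0 = tx * tileW, y0 = ty * tileH;
                (void)x0; (void)y0;           // rasterize this tile's bin here
            }
        }
        std::printf("rendered %d x %d tiles of %dx%d pixels\n",
                    tilesX, tilesY, tileW, tileH);
    }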

Perl OpenGL

Perl OpenGL (POGL) is a portable, compiled wrapper library that allows OpenGL to be used in the Perl programming language.

Terrain rendering

Terrain rendering covers a variety of methods of depicting real-world or imaginary world surfaces. The most common form of terrain rendering is the depiction of Earth's surface.

Talisman was a Microsoft project to build a new 3D graphics architecture based on quickly compositing 2D "sub-images" onto the screen, an adaptation of tiled rendering. In theory, this approach would dramatically reduce the amount of memory bandwidth required for 3D games and thereby lead to lower-cost graphics accelerators. The project took place during the introduction of the first high-performance 3D accelerators, and these quickly surpassed Talisman in both performance and price. No Talisman-based systems were ever released commercially, and the project was eventually cancelled in the late 1990s.

A vertex buffer object (VBO) is an OpenGL feature that provides methods for uploading vertex data to the video device for non-immediate-mode rendering. VBOs offer substantial performance gains over immediate mode rendering primarily because the data resides in the video device memory rather than the system memory and so it can be rendered directly by the video device. These are equivalent to vertex buffers in Direct3D.
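A minimal upload sketch using the core OpenGL buffer calls follows, assuming an existing GL context and a function loader such as GLEW; setup and error handling are omitted.

    // VBO upload sketch: create a buffer object and copy vertex data into
    // video-device memory with the core OpenGL calls.
    #include <GL/glew.h>   // assumed loader providing glGenBuffers and friends

    void uploadTriangle() {
        const GLfloat vertices[] = {
            -0.5f, -0.5f, 0.0f,   // three x,y,z positions
             0.5f, -0.5f, 0.0f,
             0.0f,  0.5f, 0.0f,
        };
        GLuint vbo = 0;
        glGenBuffers(1, &vbo);                  // create the buffer object
        glBindBuffer(GL_ARRAY_BUFFER, vbo);     // make it the current array buffer
        // Copy the data to the video device; GL_STATIC_DRAW hints that it is
        // uploaded once and drawn many times.
        glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices,
                     GL_STATIC_DRAW);
    }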

Mantle (API)

Mantle was a low-overhead rendering API targeted at 3D video games. AMD originally developed Mantle in cooperation with DICE, starting in 2013. Mantle was designed as an alternative to Direct3D and OpenGL, primarily for use on personal computers, although Mantle supports the GPUs present in the PlayStation 4 and in the Xbox One. In 2015, Mantle's public development was suspended and in 2019 completely discontinued, as DirectX 12 and the Mantle-derived Vulkan rose in popularity.

A glossary of computer graphics collects and defines terms relating to the field.
