Parallel rendering

Parallel rendering (or distributed rendering) is the application of parallel programming to the computational domain of computer graphics. Rendering graphics can require massive computational resources for complex scenes that arise in scientific visualization, medical visualization, CAD applications, and virtual reality. Recent research has also suggested that parallel rendering can be applied to mobile gaming to decrease power consumption and increase graphical fidelity.[1] Rendering is an embarrassingly parallel workload in multiple domains (e.g., pixels, objects, frames) and thus has been the subject of much research.

Workload distribution

There are two, often competing, reasons for using parallel rendering. Performance scaling allows frames to be rendered more quickly while data scaling allows larger data sets to be visualized. Different methods of distributing the workload tend to favor one type of scaling over the other. There can also be other advantages and disadvantages such as latency and load balancing issues. The three main options for primitives to distribute are entire frames, pixels, or objects (e.g. triangle meshes).

Frame distribution

Each processing unit can render an entire frame from a different point of view or moment in time. The frames rendered from different points of view can improve image quality with anti-aliasing or add effects like depth-of-field and three-dimensional display output. This approach allows for good performance scaling but no data scaling.

When sequential frames are rendered in parallel, interactive sessions incur additional latency: the lag between a user input and its effect appearing on screen is proportional to the number of sequential frames being rendered in parallel.
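
As a rough illustration of frame distribution, the sketch below (in Python, with a hypothetical render_frame stub standing in for a complete renderer) farms whole frames out to a pool of worker processes. The pool size is the number of frames in flight, which is exactly the quantity that the interactive lag described above is proportional to.

    # Minimal sketch of frame distribution; render_frame is a
    # hypothetical stub standing in for a complete renderer.
    from multiprocessing import Pool

    def render_frame(frame_index):
        # A real renderer would rasterize or ray trace the scene
        # at the camera state for this frame index.
        return f"frame {frame_index}"

    if __name__ == "__main__":
        frames_in_flight = 4  # input lag grows with this number
        with Pool(frames_in_flight) as pool:
            # imap yields frames in display order even though the
            # workers finish out of lockstep.
            for frame in pool.imap(render_frame, range(16)):
                print("display:", frame)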

Pixel distribution

Sets of pixels in the screen space can be distributed among processing units in what is often referred to as sort-first rendering.[2]

Distributing interlaced lines of pixels gives good load balancing but makes data scaling impossible, since any scanline may see geometry from anywhere in the scene, so every processing unit needs the full data set. Distributing contiguous 2D tiles of pixels allows for data scaling, because each unit can cull data against its tile's view frustum. However, there is a data overhead from objects on frustum boundaries being replicated, and data has to be loaded dynamically as the viewpoint changes. Dynamic load balancing is also needed to maintain performance scaling.
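
The sketch below illustrates the two pixel partitioning schemes just described; the function names and grid sizes are illustrative only, assuming a screen addressed in integer pixel coordinates.

    # Minimal sketch of sort-first pixel partitioning (assumed names).
    # Interlaced scanlines give good static load balance; contiguous
    # tiles let each unit cull scene data against its own sub-frustum.

    def interlaced_rows(height, num_units):
        """Assign row y to unit y mod num_units."""
        return {u: [y for y in range(height) if y % num_units == u]
                for u in range(num_units)}

    def contiguous_tiles(width, height, tiles_x, tiles_y):
        """Split the screen into a tiles_x by tiles_y grid of (x, y, w, h) rectangles."""
        tw, th = width // tiles_x, height // tiles_y
        return [(x * tw, y * th, tw, th)
                for y in range(tiles_y) for x in range(tiles_x)]

    print(interlaced_rows(8, 4))
    print(contiguous_tiles(1920, 1080, 4, 2))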

Object distribution

Distributing objects among processing units is often referred to as sort-last rendering.[3] It provides good data scaling and can provide good performance scaling, but it requires the intermediate images from processing nodes to be alpha composited to create the final image. As the image resolution grows, the alpha compositing overhead also grows.

A load balancing scheme is also needed to maintain performance regardless of the viewing conditions. This can be achieved by over-partitioning the object space and assigning multiple pieces to each processing unit in a random fashion; however, this increases the number of alpha compositing stages required to create the final image. Another option is to assign a contiguous block to each processing unit and update it dynamically, but this requires dynamic data loading.
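
As a hedged sketch of the compositing stage only, the code below combines partial RGBA images from hypothetical render nodes with the premultiplied-alpha "over" operator, ordering them back to front by a per-node depth. Production systems use optimized exchange patterns such as binary-swap compositing, but the per-pixel arithmetic is the same.

    # Minimal sketch of sort-last image compositing with numpy.
    # Each node contributes a (depth, premultiplied RGBA image) pair.
    import numpy as np

    def over(front, back):
        """Premultiplied-alpha 'over': front layered on top of back."""
        return front + back * (1.0 - front[..., 3:4])

    def composite(partials):
        """Fold partial images together, farthest first."""
        ordered = sorted(partials, key=lambda p: p[0], reverse=True)
        result = np.zeros_like(ordered[0][1])
        for _, image in ordered:
            result = over(image, result)  # each image is nearer than result
        return result

    # Two 2x2 partial images from two hypothetical render nodes.
    near = np.full((2, 2, 4), [0.5, 0.0, 0.0, 0.5])  # translucent red
    far = np.full((2, 2, 4), [0.0, 0.8, 0.0, 0.8])   # denser green
    print(composite([(1.0, near), (2.0, far)]))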

Hybrid distribution

The different types of distribution can be combined in a number of ways. Several sequential frames can be rendered in parallel while each of those individual frames is also rendered in parallel using a pixel or object distribution. Object distributions can try to minimize their overlap in screen space in order to reduce alpha compositing costs, or even use a pixel distribution to render portions of the object space.
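
A minimal scheduling sketch of one such hybrid, assuming a simple round-robin assignment: a few frames in flight are each split into screen tiles, and the resulting (frame, tile) work items are spread across the processing units.

    # Hypothetical hybrid schedule: frame distribution on the outside,
    # pixel (tile) distribution inside each frame.

    def hybrid_schedule(frames_in_flight, tiles_per_frame, num_units):
        """Round-robin (frame, tile) work items across units."""
        work = [(f, t) for f in range(frames_in_flight)
                for t in range(tiles_per_frame)]
        return {u: work[u::num_units] for u in range(num_units)}

    # 2 frames in flight, 4 tiles each, spread over 4 units.
    for unit, items in hybrid_schedule(2, 4, 4).items():
        print(f"unit {unit}: {items}")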

Open source applications

The open source software package Chromium provides a parallel rendering mechanism for existing applications. It intercepts an application's OpenGL calls and processes them, typically sending them to multiple rendering units driving a display wall.

Equalizer is an open source rendering framework and resource management system for multipipe applications. Equalizer provides an API for writing parallel, scalable visualization applications which are configured at run time by a resource server.[4]

OpenSG is an open source scene graph system that provides parallel rendering capabilities, especially on clusters. It hides the complexity of parallel multithreaded and clustered applications and supports both sort-first and sort-last rendering.[5]

Golem is an open source decentralized application for parallel computing that currently supports rendering in Blender, with plans to incorporate more uses.[6]


References

  1. Wu, C.; Yang, B.; Zhu, W.; Zhang, Y. (2017). "Toward High Mobile GPU Performance through Collaborative Workload Offloading". IEEE Transactions on Parallel and Distributed Systems. PP (99): 435–449. doi:10.1109/tpds.2017.2754482. ISSN 1045-9219.
  2. Molnar, S.; Cox, M.; Ellsworth, D.; Fuchs, H. (July 1994). "A Sorting Classification of Parallel Rendering". IEEE Computer Graphics and Applications. 14 (4): 23–32.
  3. Molnar, S.; Cox, M.; Ellsworth, D.; Fuchs, H. (July 1994). "A Sorting Classification of Parallel Rendering". IEEE Computer Graphics and Applications. 14 (4): 23–32.
  4. "Equalizer: Parallel Rendering". Archived from the original on 2008-05-11. Retrieved 2020-04-30.
  5. "OpenSG". Archived from the original on 2017-08-06. Retrieved 2020-04-30.
  6. "Golem Network". golem.network. Retrieved 2021-05-16.