Self-shadowing is a computer graphics lighting effect, used in 3D rendering applications such as computer animation and video games. Self-shadowing allows non-static objects in the environment, such as game characters and interactive objects (buckets, chairs, etc.), to cast shadows on themselves and each other. For example, without self-shadowing, if a character puts their right arm over the left, the right arm will not cast a shadow over the left arm. With self-shadowing enabled, if that same character places a hand over a ball, the hand will cast a shadow onto the ball.
One thing that needs to be specified is whether the shadow being cast is static or dynamic. A shadow cast by a wall is static: the wall is not moving, so its geometric shape is not going to move or change in the scene. A dynamic shadow is one whose casting geometry changes within the scene, such as the shadow of a walking character.
Self-shadowing methods involve trade-offs between quality and speed, depending on the desired result. To keep speed up, some techniques rely on fast, low-resolution approximations, which can produce incorrect-looking shadows that seem out of place in a scene. Others require the CPU and GPU to calculate the exact location and shape of a shadow with a high level of accuracy, which carries a lot of computational overhead that older machines could not handle.
One technique calculates the shadow on a rough surface quickly by finding the highest points along the direction toward the light source and ignoring any geometric points that lie beneath those peaks. Imagine a sunrise in the mountains: the light hits a peak behind you while you are still in the dark, and the renderer does not need to test your position against the light individually, because you lie below that peak. "Height Field Self-Shadowing" renders self-shadows on dynamic height fields under dynamic light environments in real time. [1]
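The following minimal sketch illustrates that peak-tracking idea for a one-dimensional height field and a distant light arriving from one side. It is an illustration of the general principle rather than the algorithm of the cited paper, and the function name, sampling scheme, and parameters are assumptions made here:

```python
import math

def height_field_lit_mask(heights, light_elevation_deg):
    """Return a boolean list marking which samples of a 1D height field are lit
    by a distant light coming from the left at the given elevation angle.

    Walking left to right, we carry the height of the shadow boundary cast by
    the tallest blocker seen so far; that boundary sinks by tan(elevation) per
    sample.  Samples below it are in shadow, mirroring the peak-tracking idea
    described above."""
    drop = math.tan(math.radians(light_elevation_deg))  # how fast the shadow line falls per step
    shadow_height = -math.inf
    lit = []
    for h in heights:
        shadow_height -= drop           # the shadow boundary sinks as we move away from the blocker
        lit.append(h >= shadow_height)  # a sample at or above the boundary still receives light
        shadow_height = max(shadow_height, h)  # this sample may become the new blocking peak
    return lit

# Example: the tall peaks shadow the valleys behind them.
print(height_field_lit_mask([0, 3, 1, 0, 0, 2, 5, 1], light_elevation_deg=30))
# -> [True, True, False, False, False, True, True, False]
```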
Self-shadowing can be used for interactive hair animation, which is normally very difficult for computers to render because of the large number of individual geometric strands that hair can form. Self-shadowing is a major contributor to the impression of volume in such 3D applications. [2]
Shadow volume is one way that self-shadowing can be achieved in a 3D image or scene. The method encloses the region of the scene that an object occludes from the light inside an explicit 3D volume. This allows the renderer, or shader, to test whether a point or pixel lies inside a shadowed area, which in turn determines how the object will be lit.
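A real-time renderer usually performs this inside/outside test with stencil-buffer counting; the sketch below instead shows the geometric idea for the simplified case of a convex volume described by its bounding planes. The function name and plane layout are assumptions made for illustration:

```python
def point_in_convex_volume(point, planes):
    """Test whether a point lies inside a convex volume described by planes.

    Each plane is (normal, d) with the normal pointing *out* of the volume, so a
    point is inside when normal . p + d <= 0 for every plane.  A renderer would
    use this kind of inside/outside answer to decide that the point is in shadow."""
    px, py, pz = point
    for (nx, ny, nz), d in planes:
        if nx * px + ny * py + nz * pz + d > 0.0:
            return False  # outside at least one bounding plane -> not inside this shadow volume
    return True

# Example: a unit-cube volume bounded by the planes x=0, x=1, y=0, y=1, z=0, z=1.
cube = [((-1, 0, 0), 0), ((1, 0, 0), -1),
        ((0, -1, 0), 0), ((0, 1, 0), -1),
        ((0, 0, -1), 0), ((0, 0, 1), -1)]
print(point_in_convex_volume((0.5, 0.5, 0.5), cube))  # True  -> in shadow
print(point_in_convex_volume((2.0, 0.5, 0.5), cube))  # False -> lit
```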
3D shadow mapping is another method. It records depth information about the scene from the light's position and uses it to approximate shadows; because the map has limited resolution, the resulting shadows can be diffuse and not entirely accurate.
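A small sketch of the depth comparison at the core of shadow mapping, assuming the depth map rendered from the light and the point's light-space coordinates are already available; the function name, coordinate layout, and bias value are illustrative:

```python
def is_shadowed(shadow_map, point_in_light_space, bias=1e-3):
    """Classic shadow-map test: compare a point's depth from the light against the
    depth the light actually recorded in that direction.

    shadow_map           -- 2D list of depths rendered from the light's viewpoint
    point_in_light_space -- (u, v, depth) with u, v already projected into [0, 1)
    bias                 -- small offset that hides the shadow acne caused by the
                            limited resolution/precision mentioned above."""
    u, v, depth = point_in_light_space
    h = len(shadow_map)
    w = len(shadow_map[0])
    x = min(int(u * w), w - 1)
    y = min(int(v * h), h - 1)
    return depth - bias > shadow_map[y][x]  # farther from the light than the stored occluder

# Example with a tiny 2x2 depth map: the right column has a nearby occluder at depth 0.3.
depth_map = [[1.0, 0.3],
             [1.0, 0.3]]
print(is_shadowed(depth_map, (0.75, 0.5, 0.8)))  # True: something at depth 0.3 blocks this point
print(is_shadowed(depth_map, (0.25, 0.5, 0.8)))  # False: nothing closer to the light
```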
Chris Green of Valve, a video game developer, notes that because bump map data is derived from geometric descriptions of the object's surface, significant lighting cues caused by surface details occluding the light are not calculated. [3] A common fix is to use an additional texture channel to store an ambient occlusion field, but this only provides a darkening effect that is not connected to the direction of the light acting on the surface. [3]
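This is not Green's technique itself, but a small sketch of the general baked-ambient-occlusion approach whose limitation he describes: a precomputed occlusion value simply scales part of the lighting as a fixed darkening factor, with no knowledge of where the light comes from. All names and parameters below are illustrative assumptions:

```python
def shade_with_baked_ao(albedo, n_dot_l, light_color, ambient, ao):
    """Apply a precomputed ambient-occlusion factor as a direction-free darkening term.

    albedo, light_color, ambient -- (r, g, b) tuples in [0, 1]
    n_dot_l                      -- cosine between surface normal and light direction
    ao                           -- baked occlusion factor, 0 = fully occluded, 1 = open."""
    n_dot_l = max(0.0, n_dot_l)
    return tuple(
        a * (amb * ao + lc * n_dot_l)  # only the direction-free ambient part is occluded here;
                                       # the baked value cannot react to the light's direction
        for a, amb, lc in zip(albedo, ambient, light_color)
    )

# A crevice point (ao = 0.2) is darkened the same way no matter how the light moves.
print(shade_with_baked_ao((0.8, 0.7, 0.6), 0.9, (1.0, 1.0, 1.0), (0.2, 0.2, 0.2), 0.2))
```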
Shadow volumes were proposed by Frank Crow in 1977. [4] The advantage of a shadow volume was that it could be used to shadow everything, including itself.
Rendering or image synthesis is the process of generating a photorealistic or non-photorealistic image from a 2D or 3D model by means of a computer program. The resulting image is referred to as the render. Multiple models can be defined in a scene file containing objects in a strictly defined language or data structure. The scene file contains geometry, viewpoint, texture, lighting, and shading information describing the virtual scene. The data contained in the scene file is then passed to a rendering program to be processed and output to a digital image or raster graphics image file. The term "rendering" is analogous to the concept of an artist's impression of a scene. The term "rendering" is also used to describe the process of calculating effects in a video editing program to produce the final video output.
Global illumination (GI), or indirect illumination, is a group of algorithms used in 3D computer graphics that are meant to add more realistic lighting to 3D scenes. Such algorithms take into account not only the light that comes directly from a light source, but also subsequent cases in which light rays from the same source are reflected by other surfaces in the scene, whether reflective or not.
In 3D computer graphics, radiosity is an application of the finite element method to solving the rendering equation for scenes with surfaces that reflect light diffusely. Unlike rendering methods that use Monte Carlo algorithms, which handle all types of light paths, typical radiosity only accounts for paths which leave a light source and are reflected diffusely some number of times before hitting the eye. Radiosity is a global illumination algorithm in the sense that the illumination arriving on a surface comes not just directly from the light sources, but also from other surfaces reflecting light. Radiosity is viewpoint independent, which increases the calculations involved, but makes them useful for all viewpoints.
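In its usual discrete form, the radiosity of a patch combines its own emission with the light it reflects from the other patches; the notation below is the conventional one rather than anything defined in this article:

$$B_i = E_i + \rho_i \sum_{j} F_{ij} B_j$$

Here $B_i$ is the radiosity of patch $i$, $E_i$ its emitted light, $\rho_i$ its diffuse reflectivity, and $F_{ij}$ the form factor describing how much of the light leaving patch $j$ arrives at patch $i$.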
In 3-D computer graphics, ray tracing is a technique for modeling light transport for use in a wide variety of rendering algorithms for generating digital images.
Bump mapping is a texture mapping technique in computer graphics for simulating bumps and wrinkles on the surface of an object. This is achieved by perturbing the surface normals of the object and using the perturbed normal during lighting calculations. The result is an apparently bumpy surface rather than a smooth surface although the surface of the underlying object is not changed. Bump mapping was introduced by James Blinn in 1978.
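A brief sketch of the normal-perturbation step, assuming the bump data is a scalar height texture and lighting happens in tangent space, where the unperturbed normal is (0, 0, 1); the names and the finite-difference scheme are assumptions made for illustration:

```python
def perturbed_normal(height, x, y, bump_scale=1.0):
    """Derive a perturbed surface normal from a height (bump) texture, in tangent
    space where the unperturbed normal is (0, 0, 1).

    height     -- 2D list of scalar bump heights
    x, y       -- texel coordinates (interior texels only, for central differences)
    bump_scale -- how strongly the bumps tilt the normal."""
    # Finite-difference slope of the height field in each texture direction.
    dhdx = (height[y][x + 1] - height[y][x - 1]) * 0.5
    dhdy = (height[y + 1][x] - height[y - 1][x]) * 0.5
    nx, ny, nz = -bump_scale * dhdx, -bump_scale * dhdy, 1.0
    length = (nx * nx + ny * ny + nz * nz) ** 0.5
    return (nx / length, ny / length, nz / length)

# The geometry stays flat; only the normal used for lighting is tilted by the ramp.
ramp = [[0.0, 0.0, 0.0],
        [0.1, 0.2, 0.3],
        [0.2, 0.4, 0.6]]
print(perturbed_normal(ramp, 1, 1))  # a normal leaning away from the uphill direction
```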
Shadow volume is a technique used in 3D computer graphics to add shadows to a rendered scene. The technique was first proposed by Frank Crow in 1977, with the shadow volume being the geometry describing the 3D shape of the region occluded from a light source. A shadow volume divides the virtual world in two: areas that are in shadow and areas that are not.
Ray casting is the methodological basis for 3D CAD/CAM solid modeling and image rendering. It is essentially the same as ray tracing for computer graphics, where virtual light rays are "cast" or "traced" on their path from the focal point of a camera through each pixel in the camera sensor to determine what is visible along the ray in the 3D scene. The term "ray casting" was introduced by Scott Roth while at the General Motors Research Labs from 1978–1980. His paper, "Ray Casting for Modeling Solids", describes modeling solid objects by combining primitive solids, such as blocks and cylinders, using the set operators union (+), intersection (&), and difference (-). The general idea of using these binary operators for solid modeling is largely due to Voelcker and Requicha's geometric modelling group at the University of Rochester. See solid modeling for a broad overview of solid modeling methods. A U-joint, for example, was modeled from cylinders and blocks combined in a binary tree using Roth's ray casting system in 1979.
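The sketch below illustrates how the union, intersection, and difference operators compose a solid, using simple inside/outside tests and a sampled ray march. Roth's system intersects rays with the primitives analytically and combines the resulting intervals, so this is only a stand-in to show the set-operator idea; all names and the example solid are assumptions:

```python
def inside_sphere(p, center, radius):
    """True when point p lies inside the sphere."""
    return sum((pc - cc) ** 2 for pc, cc in zip(p, center)) <= radius ** 2

def inside_box(p, lo, hi):
    """True when point p lies inside the axis-aligned box [lo, hi]."""
    return all(l <= c <= h for c, l, h in zip(p, lo, hi))

def inside_csg(p):
    """A composite solid built with set operators:
    (box UNION sphere) DIFFERENCE a smaller sphere drilled out of the middle."""
    base = inside_box(p, (0, 0, 0), (2, 1, 1)) or inside_sphere(p, (2, 0.5, 0.5), 0.6)
    hole = inside_sphere(p, (1, 0.5, 0.5), 0.3)
    return base and not hole

def cast_ray(origin, direction, t_max=4.0, steps=400):
    """March a ray through the composite and report the first parameter t at which it
    enters the solid (a sampled stand-in for analytic ray/primitive intersection)."""
    for i in range(steps):
        t = t_max * i / steps
        p = tuple(o + t * d for o, d in zip(origin, direction))
        if inside_csg(p):
            return t
    return None

print(cast_ray(origin=(-1.0, 0.5, 0.5), direction=(1.0, 0.0, 0.0)))  # hits the box face at t = 1.0
```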
In 3D computer graphics, hidden-surface determination is the process of identifying what surfaces and parts of surfaces can be seen from a particular viewing angle. A hidden-surface determination algorithm is a solution to the visibility problem, which was one of the first major problems in the field of 3D computer graphics. The process of hidden-surface determination is sometimes called hiding, and such an algorithm is sometimes called a hider. When referring to line rendering it is known as hidden-line removal. Hidden-surface determination is necessary to render a scene correctly, so that one may not view features hidden behind the model itself, allowing only the naturally viewable portion of the graphic to be visible.
A lightmap is a data structure used in lightmapping, a form of surface caching in which the brightness of surfaces in a virtual scene is pre-calculated and stored in texture maps for later use. Lightmaps are most commonly applied to static objects in applications that use real-time 3D computer graphics, such as video games, in order to provide lighting effects such as global illumination at a relatively low computational cost.
In 3D computer graphics, modeling, and animation, ambient occlusion is a shading and rendering technique used to calculate how exposed each point in a scene is to ambient lighting. For example, the interior of a tube is typically more occluded than the exposed outer surfaces, and becomes darker the deeper inside the tube one goes.
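A minimal Monte Carlo sketch of that exposure estimate: shoot random rays over the hemisphere around the surface normal and count how many escape without hitting anything. The visibility callback and sample count are illustrative assumptions:

```python
import math, random

def ambient_occlusion(point, normal, occluded_by, samples=64, max_distance=1.0):
    """Estimate how exposed a point is to ambient light.

    occluded_by(origin, direction, max_distance) is a caller-supplied visibility test
    returning True when something blocks the ray within max_distance."""
    unblocked = 0
    for _ in range(samples):
        # Pick a random direction and flip it into the hemisphere around the normal.
        d = [random.gauss(0, 1) for _ in range(3)]
        length = math.sqrt(sum(c * c for c in d)) or 1.0
        d = [c / length for c in d]
        if sum(dc * nc for dc, nc in zip(d, normal)) < 0:
            d = [-c for c in d]
        if not occluded_by(point, d, max_distance):
            unblocked += 1
    return unblocked / samples  # 1.0 = fully open, 0.0 = fully enclosed (e.g. deep in a tube)

# Toy scene: an infinite floor below the point blocks every downward ray.
floor_blocks = lambda origin, direction, max_dist: direction[2] < 0
print(ambient_occlusion((0, 0, 0.5), (0, 0, 1), floor_blocks))  # ~1.0: the upward hemisphere is open
```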
Beam tracing is an algorithm to simulate wave propagation. It was developed in the context of computer graphics to render 3D scenes, but it has been also used in other similar areas such as acoustics and electromagnetism simulations.
Path tracing is a computer graphics Monte Carlo method of rendering images of three-dimensional scenes such that the global illumination is faithful to reality. Fundamentally, the algorithm is integrating over all the illuminance arriving at a single point on the surface of an object. This illuminance is then reduced by a surface reflectance function (BRDF) to determine how much of it will go towards the viewpoint camera. This integration procedure is repeated for every pixel in the output image. When combined with physically accurate models of surfaces, accurate models of real light sources, and optically correct cameras, path tracing can produce still images that are indistinguishable from photographs.
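Written out in conventional notation (the symbols are the standard ones, not anything defined in this article), the integral that path tracing estimates with Monte Carlo samples is the rendering equation:

$$L_o(x, \omega_o) = L_e(x, \omega_o) + \int_{\Omega} f_r(x, \omega_i, \omega_o)\, L_i(x, \omega_i)\, (\omega_i \cdot n)\, d\omega_i$$

where $L_o$ is the light leaving point $x$ toward the camera, $L_e$ the light the surface emits itself, $f_r$ the BRDF mentioned above, $L_i$ the incoming light from direction $\omega_i$, and $n$ the surface normal.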
In computer graphics, per-pixel lighting refers to any technique for lighting an image or scene that calculates illumination for each pixel on a rendered image. This is in contrast to other popular methods of lighting such as vertex lighting, which calculates illumination at each vertex of a 3D model and then interpolates the resulting values over the model's faces to calculate the final per-pixel color values.
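The difference is easiest to see side by side: vertex lighting evaluates the lighting at the triangle's corners and interpolates the results, while per-pixel lighting interpolates the inputs (here, the normal) and evaluates the lighting at each pixel. The sketch below uses a single Lambert term and made-up triangle data purely for illustration:

```python
def lambert(normal, light_dir):
    """Clamped cosine term used by both approaches below."""
    return max(0.0, sum(n * l for n, l in zip(normal, light_dir)))

def vertex_lit(vertex_normals, barycentric, light_dir):
    """Vertex lighting: light each corner, then interpolate the *results*."""
    per_vertex = [lambert(n, light_dir) for n in vertex_normals]
    return sum(w * v for w, v in zip(barycentric, per_vertex))

def pixel_lit(vertex_normals, barycentric, light_dir):
    """Per-pixel lighting: interpolate the *normal* first, then light at the pixel."""
    n = [sum(w * vn[i] for w, vn in zip(barycentric, vertex_normals)) for i in range(3)]
    length = sum(c * c for c in n) ** 0.5 or 1.0
    return lambert([c / length for c in n], light_dir)

# A triangle whose corner normals fan away from the light: the two methods disagree mid-face.
normals = [(0.0, 0.0, 1.0), (0.8, 0.0, 0.6), (-0.8, 0.0, 0.6)]
weights = (1 / 3, 1 / 3, 1 / 3)
print(vertex_lit(normals, weights, (0.0, 0.0, 1.0)))  # ~0.73
print(pixel_lit(normals, weights, (0.0, 0.0, 1.0)))   #  1.0
```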
In computer graphics, relief mapping is a texture mapping technique first introduced in 2000 used to render the surface details of three-dimensional objects accurately and efficiently. It can produce accurate depictions of self-occlusion, self-shadowing, and parallax. It is a form of short-distance ray tracing done in a pixel shader. Relief mapping is highly comparable in both function and approach to another displacement texture mapping technique, Parallax occlusion mapping, considering that they both rely on ray tracing, though the two are not to be confused with each other, as parallax occlusion mapping uses reverse heightmap tracing.
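A simplified CPU sketch of the heightmap ray march at the heart of the technique; production relief mapping runs in a pixel shader and typically refines the hit with a binary search, and every name and parameter below is an assumption made for illustration:

```python
def relief_march(heightmap, entry_uv, view_dir_ts, steps=32):
    """Linear search along the view ray through a height field.

    heightmap   -- function (u, v) -> height in [0, 1], where 1 is the surface top
    entry_uv    -- (u, v) where the view ray enters the top of the relief layer
    view_dir_ts -- tangent-space view direction; z must be negative (looking down)."""
    u, v = entry_uv
    dx, dy, dz = view_dir_ts
    depth = 0.0                                 # how far below the top we have marched
    du, dv = dx / -dz / steps, dy / -dz / steps  # horizontal step per depth step
    for _ in range(steps):
        if 1.0 - depth <= heightmap(u, v):      # ray has dipped below the stored surface
            return (u, v)                       # texel whose color/normal should be shaded
        u, v, depth = u + du, v + dv, depth + 1.0 / steps
    return (u, v)                               # ray exits the layer without a hit

# A single square bump in the middle of the texture.
bump = lambda u, v: 0.8 if 0.4 < u < 0.6 and 0.4 < v < 0.6 else 0.2
print(relief_march(bump, (0.3, 0.5), (0.5, 0.0, -1.0)))
# -> roughly (0.41, 0.5): the hit is shifted from the entry point, which is the parallax effect
```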
3D rendering is the 3D computer graphics process of converting 3D models into 2D images on a computer. 3D renders may include photorealistic effects or non-photorealistic styles.
Computer graphics lighting is the collection of techniques used to simulate light in computer graphics scenes. While lighting techniques offer flexibility in the level of detail and functionality available, they also operate at different levels of computational demand and complexity. Graphics artists can choose from a variety of light sources, models, shading techniques, and effects to suit the needs of each application.
In 3D computer graphics rendering, a hemicube is one way to represent a 180° view from a surface or point in space.
3D computer graphics, sometimes called CGI, 3-D-CGI or three-dimensional computer graphics, are graphics that use a three-dimensional representation of geometric data that is stored in the computer for the purposes of performing calculations and rendering digital images, usually 2D images but sometimes 3D images. The resulting images may be stored for viewing later or displayed in real time.
Computer graphics deals with generating images and art with the aid of computers. Today, computer graphics is a core technology in digital photography, film, video games, digital art, cell phone and computer displays, and many specialized applications. A great deal of specialized hardware and software has been developed, with the displays of most devices being driven by computer graphics hardware. It is a vast and recently developed area of computer science. The phrase was coined in 1960 by computer graphics researchers Verne Hudson and William Fetter of Boeing. It is often abbreviated as CG, or typically in the context of film as computer generated imagery (CGI). The non-artistic aspects of computer graphics are the subject of computer science research.