Shader lamps

Shader lamps is a computer graphics technique used to change the appearance of physical objects. The still or moving objects are illuminated, using one or more video projectors, with static or animated textures or a video stream. The method was invented at the University of North Carolina at Chapel Hill by Ramesh Raskar, Greg Welch, Kok-lim Low and Deepak Bandyopadhyay in 1999, as a follow-on to Spatial Augmented Reality, also invented at the University of North Carolina at Chapel Hill, in 1998, by Ramesh Raskar, Greg Welch and Henry Fuchs.

3D graphics rendering software is typically used to compute the deformation caused by a non-perpendicular, non-planar or otherwise complex projection surface.
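A minimal sketch of the underlying computation, assuming a simple pinhole projector model with no rotation (a real system would also apply the projector's full pose and lens distortion). Points on oblique or curved surfaces land on different pixels than a flat frontal screen would; this is the deformation the rendering software pre-compensates:

```python
def project_point(point, projector_pos, focal, width, height):
    """Project a 3D point into projector pixel coordinates using a
    pinhole model (projector at projector_pos, looking down -z).
    Axis-aligned for simplicity; a real setup applies a rotation too."""
    x = point[0] - projector_pos[0]
    y = point[1] - projector_pos[1]
    z = point[2] - projector_pos[2]
    if z >= 0:
        return None  # point is behind the projector
    # Perspective divide: depth-dependent pixel placement is what makes
    # projection onto non-planar surfaces deform the image.
    u = focal * x / -z + width / 2
    v = focal * y / -z + height / 2
    return (u, v)
```

Rendering the scene from the projector's own viewpoint with this model produces an image that appears undistorted on the physical surface.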

Rendering (computer graphics): the process of generating an image from a model

Rendering or image synthesis is the automatic process of generating a photorealistic or non-photorealistic image from a 2D or 3D model by means of computer programs. The result of displaying such a model can also be called a render. A scene file contains objects in a strictly defined language or data structure; it contains geometry, viewpoint, texture, lighting, and shading information as a description of the virtual scene. The data contained in the scene file is then passed to a rendering program to be processed and output to a digital image or raster graphics image file. The term "rendering" may be by analogy with an "artist's rendering" of a scene.
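The pipeline above can be sketched end to end: a scene description (geometry, viewpoint, lighting) is passed to a rendering program that outputs a raster image. This is a toy illustration only; real scene formats such as glTF or USD are far richer, and the "raster" here is ASCII:

```python
import math

# A minimal "scene file" as a plain data structure.
scene = {
    "sphere": {"center": (0.0, 0.0, -3.0), "radius": 1.0},
    "viewpoint": (0.0, 0.0, 0.0),
    "light_dir": (0.0, 0.0, 1.0),  # unit vector, pointing back at the viewer
}

def render(scene, width=24, height=12):
    """Cast one ray per pixel and return an ASCII raster of the scene."""
    chars = " .:-=+*#%@"
    cx, cy, cz = scene["sphere"]["center"]
    r = scene["sphere"]["radius"]
    lx, ly, lz = scene["light_dir"]
    rows = []
    for j in range(height):
        row = ""
        for i in range(width):
            # Ray from the viewpoint through this pixel (image plane z = -1).
            dx = (i / width - 0.5) * 2.0
            dy = (0.5 - j / height) * 2.0
            dz = -1.0
            n = math.sqrt(dx * dx + dy * dy + dz * dz)
            dx, dy, dz = dx / n, dy / n, dz / n
            # Ray-sphere intersection, viewpoint at the origin.
            oc = (-cx, -cy, -cz)
            b = 2.0 * (dx * oc[0] + dy * oc[1] + dz * oc[2])
            c = oc[0] ** 2 + oc[1] ** 2 + oc[2] ** 2 - r * r
            disc = b * b - 4.0 * c
            if disc < 0:
                row += " "  # ray misses: background
                continue
            t = (-b - math.sqrt(disc)) / 2.0
            px, py, pz = dx * t, dy * t, dz * t
            # Lambertian shading from the surface normal.
            nx, ny, nz = (px - cx) / r, (py - cy) / r, (pz - cz) / r
            shade = max(0.0, nx * lx + ny * ly + nz * lz)
            row += chars[int(shade * (len(chars) - 1))]
        rows.append(row)
    return rows
```

Printing the returned rows shows a shaded disc: the scene description went in, a raster image came out.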

Complex objects (or aggregations of multiple simple objects) cast self-shadows that must be compensated for by using several projectors.

The objects are typically replaced with neutral-colored ones, the projection supplying all of their visual properties; hence the name shader lamps.

The technique can be used to create a sense of invisibility by rendering transparency. The object is illuminated not with a replacement of its own visual properties, but with the appearance of the surface located behind it, as seen from an arbitrary viewing point.
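The idea can be sketched as follows, assuming a flat background plane and a viewer at a known position (both names and the texture callback are illustrative, not part of the original system): for each point on the object's surface, extend the viewer's line of sight through that point until it hits the background, and paint the surface point with the background color found there.

```python
def camouflage_color(viewer, surface_point, bg_z, bg_texture):
    """Color a point on the object so that, seen from `viewer`, it
    matches the background plane z = bg_z directly behind it."""
    # Ray from the viewer through the point on the object's surface.
    d = tuple(s - v for s, v in zip(surface_point, viewer))
    # Extend that ray until it reaches the background plane.
    t = (bg_z - viewer[2]) / d[2]
    hx = viewer[0] + t * d[0]
    hy = viewer[1] + t * d[1]
    # Sample the background appearance at the hit point.
    return bg_texture(hx, hy)
```

Projecting these colors onto the object makes it blend into the background for that one viewing point; the illusion breaks as the viewer moves, unless the viewer is tracked.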

Related Research Articles

Augmented reality: a view of the real world with computer-generated supplementary features

Augmented reality (AR) is an interactive experience of a real-world environment where the objects that reside in the real world are enhanced by computer-generated perceptual information, sometimes across multiple sensory modalities, including visual, auditory, haptic, somatosensory and olfactory. The overlaid sensory information can be constructive or destructive. This experience is seamlessly interwoven with the physical world such that it is perceived as an immersive aspect of the real environment. In this way, augmented reality alters one's ongoing perception of a real-world environment, whereas virtual reality completely replaces the user's real-world environment with a simulated one. Augmented reality is related to two largely synonymous terms: mixed reality and computer-mediated reality.

Shading: depicting depth through varying levels of darkness

Shading refers to depicting depth perception in 3D models or illustrations by varying levels of darkness.

Overhead projector: a device that projects a transparent image

An overhead projector (OHP) is a variant of the slide projector that is used to display images to an audience.

LCD projector

An LCD projector is a type of video projector for displaying video, images or computer data on a screen or other flat surface. It is a modern equivalent of the slide projector or overhead projector. To display images, LCD projectors typically send light from a metal-halide lamp through a prism or series of dichroic filters that separates the light into three polysilicon panels – one each for the red, green and blue components of the video signal. As polarized light passes through the panels, individual pixels can be opened to allow light to pass or closed to block the light. The combination of open and closed pixels can produce a wide range of colors and shades in the projected image.

In 3D computer graphics, hidden-surface determination is the process used to determine which surfaces and parts of surfaces are not visible from a certain viewpoint. A hidden-surface determination algorithm is a solution to the visibility problem, which was one of the first major problems in the field of 3D computer graphics. The process of hidden-surface determination is sometimes called hiding, and such an algorithm is sometimes called a hider. The analogue for line rendering is hidden-line removal. Hidden-surface determination is necessary to render an image correctly: features hidden behind the model itself must not be drawn, so that only the naturally visible portion of the graphic appears.
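One classic solution is the depth buffer (z-buffer), sketched here in minimal form (the fragment tuples and function name are illustrative): every candidate surface sample carries a depth, and per pixel only the sample nearest the viewpoint survives.

```python
def zbuffer_resolve(width, height, fragments):
    """Hidden-surface determination with a depth buffer: for each pixel,
    keep only the fragment nearest the viewpoint.
    `fragments` is a list of (x, y, depth, color) tuples."""
    INF = float("inf")
    depth = [[INF] * width for _ in range(height)]
    color = [[None] * width for _ in range(height)]
    for x, y, z, c in fragments:
        if z < depth[y][x]:      # nearer than anything drawn so far
            depth[y][x] = z      # this surface hides the previous one
            color[y][x] = c
    return color
```

Note that fragment order does not matter: a far surface submitted after a near one is still discarded, which is what makes the depth buffer robust.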

Digital Light Processing: display device

Digital Light Processing (DLP) is a set of chipsets based on optical micro-electro-mechanical technology that uses a digital micromirror device. It was originally developed in 1987 by Larry Hornbeck of Texas Instruments. While the DLP imaging device was invented by Texas Instruments, the first DLP-based projector was introduced by Digital Projection Ltd in 1997. Digital Projection and Texas Instruments were both awarded Emmy Awards in 1998 for the DLP projector technology. DLP is used in a variety of display applications from traditional static displays to interactive displays and also non-traditional embedded applications including medical, security, and industrial uses.

A stereo display is a display device capable of conveying depth perception to the viewer by means of stereopsis for binocular vision.

Page layout: the part of graphic design that deals in the arrangement of visual elements on a page

Page layout is the part of graphic design that deals in the arrangement of visual elements on a page. It generally involves organizational principles of composition to achieve specific communication objectives.

Volume ray casting, sometimes called volumetric ray casting, volumetric ray tracing, or volume ray marching, is an image-based volume rendering technique. It computes 2D images from 3D volumetric data sets. Volume ray casting, which processes volume data, must not be confused with ray casting in the sense used in ray tracing, which processes surface data. In the volumetric variant, the computation does not stop at the surface but "pushes through" the object, sampling the object along the ray. Unlike ray tracing, volume ray casting does not spawn secondary rays. When the context/application is clear, some authors simply call it ray casting. Because ray marching does not necessarily require an exact solution to ray intersection and collisions, it is suitable for real-time computing in many applications for which ray tracing is unsuitable.
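A minimal sketch of the sampling loop described above, assuming the volume is given as a density callback (names and step sizes are illustrative): the ray marches through the volume in fixed steps, compositing opacity front to back, with no secondary rays.

```python
import math

def ray_march(density, origin, direction, step=0.1, n_steps=100):
    """Front-to-back volume ray casting: sample the density field along
    the ray and composite opacity, pushing *through* the object instead
    of stopping at the first surface."""
    transmittance = 1.0   # fraction of light still unblocked
    opacity = 0.0         # accumulated opacity seen so far
    x, y, z = origin
    for _ in range(n_steps):
        x += direction[0] * step
        y += direction[1] * step
        z += direction[2] * step
        sigma = density(x, y, z)                 # sample the volume data
        alpha = 1.0 - math.exp(-sigma * step)    # absorption on this step
        opacity += transmittance * alpha         # front-to-back compositing
        transmittance *= 1.0 - alpha
        if transmittance < 1e-3:                 # early ray termination
            break
    return opacity
```

The early-termination test is one reason the approach suits real-time use: once a ray is effectively opaque, no further samples are taken.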

3D rendering

3D rendering is the 3D computer graphics process of automatically converting 3D wire frame models into 2D images on a computer. 3D renders may include photorealistic effects or non-photorealistic rendering.

form·Z is a computer-aided design (CAD) tool developed by AutoDesSys for all design fields that deal with the articulation of 3D spaces and forms; it is used for 3D modeling, drafting, animation and rendering.

3D computer graphics: graphics that use a three-dimensional representation of geometric data

3D computer graphics, or three-dimensional computer graphics, are graphics that use a three-dimensional representation of geometric data that is stored in the computer for the purposes of performing calculations and rendering 2D images. Such images may be stored for viewing later or displayed in real time.

A projection augmented model is an element sometimes employed in virtual reality systems. It consists of a physical three-dimensional model onto which a computer image is projected to create a realistic looking object. Importantly, the physical model is the same geometric shape as the object that the PA model depicts.

Projection mapping: projection technology used to turn objects, often irregularly shaped, into a display surface for video projection

Projection mapping, similar to video mapping and spatial augmented reality, is a projection technique used to turn objects, often irregularly shaped, into a display surface for video projection. These objects may be complex industrial landscapes, such as buildings, small indoor objects or theatrical stages. Using specialized software, a two- or three-dimensional object is spatially mapped in a virtual program that mimics the real environment it is to be projected on. The software can interact with a projector to fit any desired image onto the surface of that object. This technique is used by artists and advertisers alike, who can add extra dimensions, optical illusions, and notions of movement onto previously static objects. The video is commonly combined with, or triggered by, audio to create an audio-visual narrative.
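For the simplest case, a flat surface seen obliquely by the projector, the fitting step reduces to warping the content with a 3x3 planar homography. A minimal sketch (the function name is illustrative; mapping software typically estimates the matrix from four corner correspondences rather than taking it as given):

```python
def apply_homography(H, x, y):
    """Warp a content-space point into projector space with a 3x3 planar
    homography, the core transform for planar projection mapping."""
    xp = H[0][0] * x + H[0][1] * y + H[0][2]
    yp = H[1][0] * x + H[1][1] * y + H[1][2]
    w  = H[2][0] * x + H[2][1] * y + H[2][2]
    return (xp / w, yp / w)   # perspective divide
```

Non-planar targets such as building facades with relief generalize this to a full 3D model of the surface, as in the shader-lamps approach described above.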

Multi-image

Multi-image is the now largely obsolete practice and business of using 35mm slides (diapositives) projected by single or multiple slide projectors onto one or more screens in synchronization with an audio voice-over or music track. Multi-image productions are also known as multi-image slide presentations, slide shows and diaporamas and are a specific form of multimedia or audio-visual production.

IllumiRoom

IllumiRoom is a Microsoft Research project that augments a television screen with images projected onto the wall and surrounding objects. The current proof-of-concept uses a Kinect sensor and video projector. The Kinect sensor captures the geometry and colors of the area of the room that surrounds the television, and the projector displays video around the television that corresponds to a video source on the television, such as a video game or movie.

Oliver Bimber: German computer scientist

Oliver Bimber is a German computer scientist. He is professor for computer graphics at the Johannes Kepler University Linz, Austria where he heads the Institute of Computer Graphics.

Visual computing is a generic term for all computer science disciplines dealing with images and 3D models, i.e. computer graphics, image processing, visualization, computer vision, virtual and augmented reality, and video processing; it also includes aspects of pattern recognition, human-computer interaction, machine learning and digital libraries. The core challenges are the acquisition, processing, analysis and rendering of visual information. Application areas include industrial quality control, medical image processing and visualization, surveying, robotics, multimedia systems, virtual heritage, special effects in movies and television, and computer games.

Physically based rendering

Physically based rendering (PBR) is an approach in computer graphics that seeks to render graphics in a way that more accurately models the flow of light in the real world. Many PBR pipelines have the accurate simulation of photorealism as their goal. Feasible and quick approximations of the bidirectional reflectance distribution function and rendering equation are of mathematical importance in this field. Photogrammetry may be used to help discover and encode accurate optical properties of materials. Shaders may be used to implement PBR principles.
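A minimal sketch of the two mathematical objects named above, using the simplest physically based BRDF (energy-conserving Lambertian, albedo/π) and a Monte Carlo approximation of the rendering equation under constant hemispherical illumination. The function names are illustrative, and a production PBR pipeline would use far richer BRDFs and importance sampling:

```python
import math
import random

def lambert_brdf(albedo):
    """Energy-conserving Lambertian BRDF: albedo / pi, independent of
    the incoming and outgoing directions."""
    return albedo / math.pi

def outgoing_radiance(albedo, incoming, n_samples=20000, seed=0):
    """Monte Carlo estimate of the rendering equation
    Lo = integral over the hemisphere of f * Li * cos(theta) d(omega),
    using uniform hemisphere sampling (pdf = 1 / (2*pi)).
    For constant Li this equals albedo * Li analytically."""
    rng = random.Random(seed)
    f = lambert_brdf(albedo)
    pdf = 1.0 / (2.0 * math.pi)
    total = 0.0
    for _ in range(n_samples):
        # Uniform solid-angle sampling: cos(theta) uniform in [0, 1).
        cos_theta = rng.random()
        total += f * incoming * cos_theta / pdf
    return total / n_samples
```

The estimate converging to albedo × Li illustrates why the 1/π normalization matters: without it, a fully lit white surface would reflect more light than it receives.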