Gonioreflectometer

Figure: Traditional non-image-based lab gonioreflectometer with goniometric arms for positioning the light source and detector; the sample and calibration coupon stages sit on a movable platform.
Figure: The University of Virginia spherical gantry, an example of a modern image-based gonioreflectometer.

A gonioreflectometer is a device for measuring a bidirectional reflectance distribution function (BRDF).

The device consists of a light source illuminating the material to be measured and a sensor that captures light reflected from that material. The light source must be able to illuminate, and the sensor to capture data from, any direction on a hemisphere above the target. The two angles of the light source direction and the two angles of the sensor direction together make up the four dimensions of the BRDF. The 'gonio' part of the word refers to the device's ability to measure at different angles.
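As a rough illustration of such a measurement sweep, the sketch below steps a hypothetical instrument driver (the move_source, move_detector, and read_radiance methods are assumptions for illustration, not a real API) through every combination of source and detector angles and records one BRDF sample per combination.

    import math

    def measure_brdf(instrument, angle_grid, beam_irradiance):
        """Sketch of a gonioreflectometer sweep over a hypothetical instrument.
        beam_irradiance is the irradiance of the source beam measured
        perpendicular to the beam; the irradiance actually received by the
        sample falls off with cos(theta_i) as the source tilts away from the
        surface normal (grazing angles near 90 degrees are assumed excluded)."""
        brdf = {}
        for theta_i, phi_i in angle_grid:              # source arm position
            instrument.move_source(theta_i, phi_i)
            surface_irradiance = beam_irradiance * math.cos(math.radians(theta_i))
            for theta_o, phi_o in angle_grid:          # detector arm position
                instrument.move_detector(theta_o, phi_o)
                radiance = instrument.read_radiance()  # reflected radiance along (theta_o, phi_o)
                # One BRDF sample = reflected radiance / incident irradiance, units 1/sr.
                brdf[(theta_i, phi_i, theta_o, phi_o)] = radiance / surface_irradiance
        return brdf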

Several similar devices have been built and used to capture data for related functions. Most of these devices use a camera in place of a single light-intensity sensor, capturing a two-dimensional sample of the target at each position. One example is the University of Virginia spherical gantry pictured above.

Related Research Articles

Rendering (computer graphics): process of generating an image from a model

Rendering or image synthesis is the process of generating a photorealistic or non-photorealistic image from a 2D or 3D model by means of a computer program. The resulting image is referred to as the render. Multiple models can be defined in a scene file containing objects in a strictly defined language or data structure. The scene file contains geometry, viewpoint, texture, lighting, and shading information describing the virtual scene. The data contained in the scene file is then passed to a rendering program to be processed and output to a digital image or raster graphics image file. The term "rendering" is analogous to the concept of an artist's impression of a scene. The term "rendering" is also used to describe the process of calculating effects in a video editing program to produce the final video output.

In computer graphics, photon mapping is a two-pass global illumination rendering algorithm developed by Henrik Wann Jensen between 1995 and 2001 that approximately solves the rendering equation for integrating light radiance at a given point in space. Rays from the light source and rays from the camera are traced independently until some termination criterion is met, then they are connected in a second step to produce a radiance value. The algorithm is used to realistically simulate the interaction of light with different types of objects. Specifically, it is capable of simulating the refraction of light through a transparent substance such as glass or water, diffuse interreflection between illuminated objects, the subsurface scattering of light in translucent materials, and some of the effects caused by particulate matter such as smoke or water vapor. Photon mapping can also be extended to more accurate simulations of light, such as spectral rendering. Progressive photon mapping (PPM) starts with ray tracing and then adds more and more photon mapping passes to provide a progressively more accurate render.
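A highly simplified sketch of the two passes follows; the light emitter, photon tracer, and BRDF callables are assumed helpers, and the gathering step is collapsed to a fixed-radius density estimate rather than a full nearest-neighbour query.

    import math
    import random

    def first_pass(lights, scene, n_photons):
        """Pass 1: shoot photons from the lights and store every surface hit."""
        photon_map = []                                  # (position, incoming_dir, power)
        for _ in range(n_photons):
            light = random.choice(lights)
            photon = light.emit_photon()                 # hypothetical emitter
            for hit in scene.trace_photon(photon):       # hypothetical photon tracer
                photon_map.append((hit.position, hit.direction, hit.power))
        return photon_map

    def second_pass_radiance(photon_map, point, outgoing, radius, brdf):
        """Pass 2: estimate reflected radiance toward `outgoing` at a camera-visible
        point by gathering photons stored within `radius` of it and dividing
        their BRDF-weighted power by the area of the gathering disc."""
        area = math.pi * radius ** 2
        total = 0.0
        for position, incoming, power in photon_map:
            if sum((a - b) ** 2 for a, b in zip(position, point)) <= radius ** 2:
                total += power * brdf(incoming, outgoing)
        return total / area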

Volume rendering: 3D rendering techniques

In scientific visualization and computer graphics, volume rendering is a set of techniques used to display a 2D projection of a 3D discretely sampled data set, typically a 3D scalar field.
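A minimal emission-absorption ray-marching sketch over a 3D scalar field stored as a NumPy array is shown below; the transfer function mapping scalar values to colour and opacity, and the nearest-voxel sampling, are simplifying assumptions rather than a complete volume renderer.

    import numpy as np

    def ray_march(volume, origin, direction, step, n_steps, transfer):
        """Composite samples of a 3D scalar field front to back along one ray.
        `transfer` maps a scalar sample to (rgb, alpha); positions outside
        the volume contribute nothing."""
        color = np.zeros(3)
        transmittance = 1.0
        pos = np.asarray(origin, dtype=float)
        direction = np.asarray(direction, dtype=float)
        for _ in range(n_steps):
            idx = tuple(np.round(pos).astype(int))       # nearest-voxel sample
            if all(0 <= i < s for i, s in zip(idx, volume.shape)):
                rgb, alpha = transfer(volume[idx])
                color += transmittance * alpha * np.asarray(rgb)
                transmittance *= (1.0 - alpha)
            pos += step * direction
        return color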

The light field is a vector function that describes the amount of light flowing in every direction through every point in space. The space of all possible light rays is given by the five-dimensional plenoptic function, and the magnitude of each ray is given by the radiance. Michael Faraday was the first to propose that light should be interpreted as a field, much like the magnetic fields on which he had been working for several years. The phrase light field was coined by Andrey Gershun in a classic paper on the radiometric properties of light in three-dimensional space (1936).

Bidirectional texture function (BTF) is a 6-dimensional function depending on planar texture coordinates (x, y) as well as on view and illumination spherical angles. In practice this function is obtained as a set of several thousand color images of a material sample taken at different camera and light positions.
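In that captured form the BTF behaves like a 6-dimensional lookup table. A hypothetical in-memory sketch, assuming the images have already been binned by their capture angles, could look like this:

    def nearest_capture_angle(angles, grid):
        """Snap a (theta, phi) pair to the closest angle pair on the capture grid."""
        return min(grid, key=lambda g: (g[0] - angles[0]) ** 2 + (g[1] - angles[1]) ** 2)

    def btf_sample(images, grid, x, y, view, light):
        """Sketch of a BTF lookup. `images` maps a (view, light) capture-angle
        pair to a colour image array; the planar texture coordinate (x, y)
        then indexes a pixel within that image."""
        key = (nearest_capture_angle(view, grid), nearest_capture_angle(light, grid))
        return images[key][y, x]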

Subsurface scattering: mechanism of light transport

Subsurface scattering (SSS), also known as subsurface light transport (SSLT), is a mechanism of light transport in which light that penetrates the surface of a translucent object is scattered by interacting with the material and exits the surface at a different point. The light will generally penetrate the surface and be reflected a number of times at irregular angles inside the material before passing back out of the material at a different angle than it would have had if it had been reflected directly off the surface. Subsurface scattering is important for realistic 3D computer graphics, being necessary for the rendering of materials such as marble, skin, leaves, wax and milk. If subsurface scattering is not implemented, the material may look unnatural, like plastic or metal.

Bidirectional reflectance distribution function

The bidirectional reflectance distribution function is a function of four real variables that defines how light is reflected at an opaque surface. It is employed in the optics of real-world light, in computer graphics algorithms, and in computer vision algorithms. The function takes an incoming light direction, ωi, and an outgoing direction, ωr, and returns the ratio of reflected radiance exiting along ωr to the irradiance incident on the surface from direction ωi. Each direction ω is itself parameterized by an azimuth angle φ and a zenith angle θ, therefore the BRDF as a whole is a function of 4 variables. The BRDF has units sr−1, with steradians (sr) being a unit of solid angle.
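Written out in its standard form (stated here for reference), with L_r the reflected radiance, E_i the irradiance produced by the incident light, and θ_i the zenith angle of ωi:

    f_r(\omega_i, \omega_r)
        = \frac{\mathrm{d}L_r(\omega_r)}{\mathrm{d}E_i(\omega_i)}
        = \frac{\mathrm{d}L_r(\omega_r)}{L_i(\omega_i)\cos\theta_i\,\mathrm{d}\omega_i}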

Path tracing

Path tracing is a computer graphics Monte Carlo method of rendering images of three-dimensional scenes such that the global illumination is faithful to reality. Fundamentally, the algorithm integrates over all the illuminance arriving at a single point on the surface of an object. This illuminance is then reduced by a surface reflectance function (BRDF) to determine how much of it will go towards the viewpoint camera. This integration procedure is repeated for every pixel in the output image. When combined with physically accurate models of surfaces, accurate models of real light sources, and optically correct cameras, path tracing can produce still images that are indistinguishable from photographs.
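A minimal single-sample estimator is sketched below; the scene intersection query and the BRDF sampling interface (sample_brdf, spawn_ray, emitted_radiance) are assumed names, and averaging many such samples per pixel approximates the integral described above.

    def trace_path(scene, ray, depth, max_depth=5):
        """Estimate radiance along one ray with a single random path."""
        if depth > max_depth:
            return 0.0
        hit = scene.intersect(ray)                 # hypothetical intersection query
        if hit is None:
            return 0.0
        radiance = hit.emitted_radiance(ray)       # light sources contribute directly
        # Sample one incoming direction according to the surface BRDF.
        new_dir, brdf_value, pdf, cos_theta = hit.sample_brdf(ray)
        if pdf > 0.0:
            incoming = trace_path(scene, hit.spawn_ray(new_dir), depth + 1, max_depth)
            radiance += brdf_value * incoming * cos_theta / pdf
        return radiance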

3D scanning: analyzing a real-world object or environment to collect data on its shape and possibly its appearance (e.g. colour)

3D scanning is the process of analyzing a real-world object or environment to collect data on its shape and possibly its appearance. The collected data can then be used to construct digital 3D models.

Light-field camera: type of camera that can also capture the direction of travel of light rays

A light field camera, also known as plenoptic camera, captures information about the light field emanating from a scene; that is, the intensity of light in a scene, and also the direction that the light rays are traveling in space. This contrasts with a conventional camera, which records only light intensity.
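One consequence of recording direction as well as intensity is that images can be refocused after capture. A rough shift-and-add sketch over a 4D light field stored as an array indexed by (u, v, s, t) follows; the array layout and the refocusing factor are assumptions for illustration, not a particular camera's processing pipeline.

    import numpy as np

    def refocus(light_field, alpha):
        """Shift-and-add refocusing sketch. light_field[u, v, s, t] holds the
        intensity of the ray through aperture position (u, v) and sensor
        position (s, t); alpha selects the virtual focal plane. Each
        sub-aperture image is shifted in proportion to its (u, v) offset
        and the shifted images are averaged."""
        n_u, n_v, n_s, n_t = light_field.shape
        out = np.zeros((n_s, n_t))
        for u in range(n_u):
            for v in range(n_v):
                shift_s = int(round((u - n_u / 2) * (1 - 1 / alpha)))
                shift_t = int(round((v - n_v / 2) * (1 - 1 / alpha)))
                out += np.roll(light_field[u, v], (shift_s, shift_t), axis=(0, 1))
        return out / (n_u * n_v)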

3D rendering: process of converting 3D models into 2D images on a computer

3D rendering is the 3D computer graphics process of converting 3D models into 2D images on a computer. 3D renders may include photorealistic effects or non-photorealistic styles.

Bidirectional scattering distribution function

The definition of the BSDF is not well standardized. The term was probably introduced in 1980 by Bartell, Dereniak, and Wolfe. Most often it is used to name the general mathematical function which describes the way in which light is scattered by a surface. However, in practice, this phenomenon is usually split into the reflected and transmitted components, which are then treated separately as the BRDF and the BTDF.
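That split can be made concrete as a dispatch on which side of the surface the outgoing direction leaves through; the brdf and btdf callables below are assumed components, and directions are taken relative to the surface normal.

    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    def bsdf(incoming, outgoing, normal, brdf, btdf):
        """Evaluate a BSDF by delegating to its reflected (BRDF) or transmitted
        (BTDF) component, chosen by whether the incoming and outgoing
        directions lie on the same side of the surface."""
        same_side = dot(outgoing, normal) * dot(incoming, normal) > 0
        return brdf(incoming, outgoing) if same_side else btdf(incoming, outgoing)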

Marc Levoy: American computer graphics researcher

Marc Levoy is a computer graphics researcher and Professor Emeritus of Computer Science and Electrical Engineering at Stanford University, a Vice President and Fellow at Adobe Inc., and a Distinguished Engineer at Google. He is noted for pioneering work in volume rendering, light fields, and computational photography.

Pat Hanrahan: American computer graphics researcher

Patrick M. Hanrahan is an American computer graphics researcher, the Canon USA Professor of Computer Science and Electrical Engineering in the Computer Graphics Laboratory at Stanford University. His research focuses on rendering algorithms, graphics processing units, as well as scientific illustration and visualization. He has received numerous awards, including the 2019 Turing Award.

Range imaging is the name for a collection of techniques that are used to produce a 2D image showing the distance to points in a scene from a specific point, normally associated with some type of sensor device.

3D reconstruction

In computer vision and computer graphics, 3D reconstruction is the process of capturing the shape and appearance of real objects. This process can be accomplished either by active or passive methods. If the model is allowed to change its shape in time, this is referred to as non-rigid or spatio-temporal reconstruction.

Photometric stereo

Photometric stereo is a technique in computer vision for estimating the surface normals of objects by observing that object under different lighting conditions. It is based on the fact that the amount of light reflected by a surface is dependent on the orientation of the surface in relation to the light source and the observer. By measuring the amount of light reflected into a camera, the space of possible surface orientations is limited. Given enough light sources from different angles, the surface orientation may be constrained to a single orientation or even overconstrained.
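Under a Lambertian assumption, with at least three known light directions, this becomes a small linear system per pixel. A sketch with NumPy, assuming the light directions and the pixel's measured intensities are given:

    import numpy as np

    def photometric_stereo(light_dirs, intensities):
        """Estimate a surface normal and albedo for one pixel.
        light_dirs:  (n, 3) array of unit vectors toward each light source.
        intensities: (n,)  array of the pixel's brightness under each light.
        For a Lambertian surface I = albedo * dot(N, L), so solving the
        least-squares system L @ (albedo * N) = I recovers both quantities."""
        g, *_ = np.linalg.lstsq(np.asarray(light_dirs), np.asarray(intensities), rcond=None)
        albedo = np.linalg.norm(g)
        normal = g / albedo if albedo > 0 else g
        return normal, albedo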

A light stage or light cage is equipment used for shape, texture, reflectance and motion capture, often with structured light and a multi-camera setup.

This is a glossary of terms relating to computer graphics.

Physically based rendering: computer graphics technique

Physically based rendering (PBR) is an approach in computer graphics that seeks to render graphics in a way that more accurately models the flow of light in the real world. Many PBR pipelines have the accurate simulation of photorealism as their goal. Feasible and quick approximations of the bidirectional reflectance distribution function and rendering equation are of mathematical importance in this field. Photogrammetry may be used to help discover and encode accurate optical properties of materials. Shaders may be used to implement PBR principles.
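As one of the simplest such approximations, the ideal diffuse (Lambertian) BRDF is the constant albedo/π, which keeps a surface from reflecting more energy than it receives. A small shading sketch under that assumption, with a single directional light:

    import math

    def lambertian_brdf(albedo):
        """Ideal diffuse BRDF: constant albedo / pi, independent of directions.
        The 1/pi factor keeps total reflected energy <= incident energy."""
        return [c / math.pi for c in albedo]

    def shade(albedo, normal, light_dir, light_radiance):
        """Reflected radiance toward any viewer from one directional light."""
        cos_theta = max(0.0, sum(n * l for n, l in zip(normal, light_dir)))
        return [f * light_radiance * cos_theta for f in lambertian_brdf(albedo)]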
