Structured light is the process of projecting a known pattern (often grids or horizontal bars) onto a scene. The way that the pattern deforms when striking surfaces allows vision systems to calculate the depth and surface information of the objects in the scene, as used in structured light 3D scanners.
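One common way to make the projected pattern identifiable is temporal binary coding: the projector displays a sequence of black-and-white stripe patterns forming a Gray code, so each projector column receives a unique on/off signature that a camera pixel can decode. The sketch below is a minimal, hypothetical illustration of that decoding step (function names are illustrative, not from any particular library):

```python
def gray_to_binary(g):
    """Convert a Gray-code value to its plain binary equivalent."""
    b = g
    while g:
        g >>= 1
        b ^= g
    return b

def decode_gray_sequence(bits):
    """Decode one camera pixel's on/off observations (most significant
    pattern first) under a Gray-code structured-light sequence into the
    index of the projector column that illuminated it."""
    g = 0
    for bit in bits:
        g = (g << 1) | bit
    return gray_to_binary(g)

# A pixel lit by patterns [1, 1, 0] (Gray code 110) lies in projector column 4.
print(decode_gray_sequence([1, 1, 0]))
```

Once each pixel's projector column is known, depth follows by triangulating the camera ray against the corresponding projector plane.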
Invisible (or imperceptible) structured light uses structured light without interfering with other computer vision tasks that the projected pattern would otherwise confuse. Example methods include the use of infrared light or of extremely high frame rates alternating between two exactly opposite patterns.
Structured light is used by a number of police forces for the purpose of photographing fingerprints in a 3D scene. Where previously they would use tape to extract the fingerprint and flatten it out, they can now use cameras and flatten the fingerprint digitally, which allows the process of identification to begin before the officer has even left the scene.
Computer vision is an interdisciplinary scientific field that deals with how computers can gain high-level understanding from digital images or videos. From the perspective of engineering, it seeks to understand and automate tasks that the human visual system can do.
Lidar is a method for measuring distances (ranging) by illuminating the target with laser light and measuring the reflection with a sensor. Differences in laser return times and wavelengths can then be used to make digital 3-D representations of the target. It has terrestrial, airborne, and mobile applications.
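The ranging step described above reduces to a single relation: a laser pulse travels to the target and back, so the one-way distance is half the round-trip time multiplied by the speed of light. A minimal sketch (illustrative function name):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def lidar_range_m(round_trip_s):
    """Range to a target from the round-trip time of a laser pulse.

    The pulse covers the distance twice (out and back), so the
    one-way range is c * t / 2.
    """
    return C * round_trip_s / 2.0

# A 1 microsecond round trip corresponds to roughly 150 m.
print(lidar_range_m(1e-6))
```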
Night vision is the ability to see in low-light conditions. Whether by biological or technological means, night vision is made possible by a combination of two approaches: sufficient spectral range, and sufficient intensity range. Humans have poor night vision compared to many animals, in part because the human eye lacks a tapetum lucidum.
Stereoscopy is a technique for creating or enhancing the illusion of depth in an image by means of stereopsis for binocular vision. The word stereoscopy derives from Greek στερεός (stereos) 'firm, solid', and σκοπέω (skopeō) 'to look, to see'. Any stereoscopic image is called a stereogram. Originally, stereogram referred to a pair of stereo images which could be viewed using a stereoscope.
A barcode reader is an optical scanner that can read printed barcodes, decode the data contained in the barcode and send the data to a computer. Like a flatbed scanner, it consists of a light source, a lens and a light sensor that translates optical impulses into electrical signals. Additionally, nearly all barcode readers contain decoder circuitry that analyzes the barcode's image data provided by the sensor and sends the barcode's content to the scanner's output port.
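Part of the decoding work is validating the symbol. As an illustration of the kind of check the decoder circuitry performs, the sketch below computes the EAN-13 check digit: the first twelve digits are weighted alternately by 1 and 3, and the check digit brings the total to a multiple of 10 (function name is illustrative):

```python
def ean13_check_digit(digits12):
    """Compute the check digit for the first 12 digits of an EAN-13 code.

    Digits in odd positions (1-indexed) are weighted by 1, digits in
    even positions by 3; the check digit makes the weighted sum a
    multiple of 10.
    """
    s = sum(d * (1 if i % 2 == 0 else 3) for i, d in enumerate(digits12))
    return (10 - s % 10) % 10

# The EAN-13 code 4006381333931 has body 400638133393 and check digit 1.
body = [4, 0, 0, 6, 3, 8, 1, 3, 3, 3, 9, 3]
print(ean13_check_digit(body))
```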
Computational photography refers to digital image capture and processing techniques that use digital computation instead of optical processes. Computational photography can improve the capabilities of a camera, or introduce features that were not possible at all with film based photography, or reduce the cost or size of camera elements. Examples of computational photography include in-camera computation of digital panoramas, high-dynamic-range images, and light field cameras. Light field cameras use novel optical elements to capture three dimensional scene information which can then be used to produce 3D images, enhanced depth-of-field, and selective de-focusing. Enhanced depth-of-field reduces the need for mechanical focusing systems. All of these features use computational imaging techniques.
3D scanning is the process of analyzing a real-world object or environment to collect data on its shape and possibly its appearance. The collected data can then be used to construct digital 3D models.
The following are common definitions related to the machine vision field.
Structure from motion (SfM) is a photogrammetric range imaging technique for estimating three-dimensional structures from two-dimensional image sequences that may be coupled with local motion signals. It is studied in the fields of computer vision and visual perception. In biological vision, SfM refers to the phenomenon by which humans can recover 3D structure from the projected 2D (retinal) motion field of a moving object or scene.
Range imaging is the name for a collection of techniques that are used to produce a 2D image showing the distance to points in a scene from a specific point, normally associated with some type of sensor device.
In computer vision and computer graphics, 3D reconstruction is the process of capturing the shape and appearance of real objects. This process can be accomplished either by active or passive methods. If the model is allowed to change its shape in time, this is referred to as non-rigid or spatio-temporal reconstruction.
A structured-light 3D scanner is a 3D scanning device for measuring the three-dimensional shape of an object using projected light patterns and a camera system.
A time-of-flight camera is a range imaging camera system that employs time-of-flight techniques to resolve the distance between the camera and the subject for each point of the image, by measuring the round-trip time of an artificial light signal provided by a laser or an LED. Laser-based time-of-flight cameras are part of a broader class of scannerless LIDAR, in which the entire scene is captured with each laser pulse, as opposed to point-by-point with a laser beam as in scanning LIDAR systems. Time-of-flight camera products for civil applications began to emerge around 2000, as semiconductor processes allowed the production of components fast enough for such devices. The systems cover ranges from a few centimeters up to several kilometers. The distance resolution is about 1 cm. The spatial resolution of time-of-flight cameras is generally low compared to standard 2D video cameras, with most commercially available devices at 320 × 240 pixels or less as of 2011. Compared to other 3D laser scanning methods for capturing 3D images, TOF cameras operate more quickly, capturing up to 160 images per second.
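Many time-of-flight cameras measure range not by timing individual pulses but by modulating the light continuously and measuring the phase shift of the returned wave. A minimal sketch of that phase-based relation (assumed modulation frequency and function name are illustrative):

```python
import math

C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_phase_distance_m(phase_rad, mod_freq_hz):
    """Distance from the phase shift of a continuous-wave ToF signal.

    Light modulated at frequency f and returning with phase shift phi
    (relative to the emitted wave) has travelled d = c * phi / (4 * pi * f).
    The measurement is unambiguous only up to c / (2 * f).
    """
    return C * phase_rad / (4.0 * math.pi * mod_freq_hz)

# At 20 MHz modulation, a full 2*pi phase wrap corresponds to ~7.49 m,
# which is the camera's unambiguous range at that frequency.
print(tof_phase_distance_m(2.0 * math.pi, 20e6))
```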
Computer stereo vision is the extraction of 3D information from digital images, such as those obtained by a CCD camera. By comparing information about a scene from two vantage points, 3D information can be extracted by examining the relative positions of objects in the two images. This is similar to the biological process of stereopsis.
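For a rectified stereo pair, the "relative position" comparison reduces to disparity: a point's horizontal offset between the left and right images. Depth is then inversely proportional to disparity. A minimal sketch under assumed camera parameters (baseline and focal length below are illustrative values, not from any specific device):

```python
def stereo_depth_m(disparity_px, baseline_m, focal_px):
    """Depth of a point from its stereo disparity.

    For a rectified stereo pair with camera baseline B (meters) and
    focal length f (pixels), a matched point with disparity d (pixels)
    lies at depth Z = f * B / d.
    """
    if disparity_px <= 0:
        return float("inf")  # zero disparity: point at infinity
    return focal_px * baseline_m / disparity_px

# With a 0.1 m baseline and a 700 px focal length,
# a 10 px disparity places the point 7 m away.
print(stereo_depth_m(10, 0.1, 700))
```

Note the inverse relationship: depth resolution degrades quadratically with distance, which is why stereo rigs with short baselines are only accurate at close range.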
Neptec Design Group is an Ottawa-based Canadian vision systems company, providing machine vision solutions for space, industrial, and military applications. Privately owned and founded in 1990, Neptec is a NASA prime contractor, supplying operational systems to both the Space Shuttle and International Space Station programs. Starting in 2000, Neptec began expanding its technology to include active 3D imaging systems and 3D processing software. This work led directly to the development of Neptec's Laser Camera System, an operational system used by NASA to inspect the shuttle's external surfaces during flight. Building on Laser Camera System technology, Neptec has also developed a 3D imaging and tracking system designed for automated on-orbit rendezvous, inspection and docking. The TriDAR combines a high-precision, short-range triangulation sensor with a long-range LIDAR sensor in the same optical path.
In 3D computer graphics and computer vision, a depth map is an image or image channel that contains information relating to the distance of the surfaces of scene objects from a viewpoint. The term is related to and may be analogous to depth buffer, Z-buffer, Z-buffering and Z-depth. The "Z" in these latter terms relates to a convention that the central axis of view of a camera is in the direction of the camera's Z axis, and not to the absolute Z axis of a scene.
This is a glossary of terms relating to computer graphics.
The Sony Xperia XZ1 is an Android smartphone manufactured and marketed by Sony. Part of the Xperia X series, the device was announced to the public along with the Xperia XZ1 Compact at the annual IFA 2017 on August 31, 2017. It is the direct successor to the Sony Xperia XZ according to Sony, and is the latest flagship after the Xperia XZ Premium.
Volumetric video is a technique that captures a three-dimensional space, such as a location or performance. This type of volumography acquires data that can be viewed on flat screens as well as using 3D displays and VR goggles. Consumer-facing formats are numerous and the required motion capture techniques lean on computer graphics, photogrammetry, and other computation-based methods. The viewer generally experiences the result in a real-time engine and has direct input in exploring the generated volume.
Zivid is a Norwegian machine vision technology company headquartered in Oslo, Norway. It designs and sells 3D color cameras with vision software that are used in autonomous industrial robot cells, collaborative robot (cobot) cells and other industrial automation systems.