HoloVID


HoloVID is a measuring instrument, originally developed by Mark Slater in 1981 for the holographic dimensional measurement of the internal isogrid structural webbing of the Delta family of launch vehicles.


History

Delta launch vehicles were produced by McDonnell Douglas Astronautics until the line was purchased by Boeing. The isogrid panels were milled out of T6 aluminum on 40-by-20-foot (12 by 6 m) horizontal mills, and inspection of the huge sheets took longer than the original machining. It was estimated that a real-time in situ inspection device could cut costs, so an Independent Research and Development (IRAD) budget was generated to solve the problem. Two solutions were worked simultaneously by Mark Slater: a photo-optical technique utilizing a holographic lens, and an ultrasonic technique utilizing configurable micro-transducer multiplexed arrays.

A pair of HoloVIDs providing simultaneous frontside and backside weld feedback was later used at Martin Marietta to inspect the long weld seams which hold the External Tanks of the Space Shuttle together. By controlling the weld bead profile in real time as it was TIG-generated, an optimum weight-to-performance ratio could be obtained, saving the rocket engines from having to waste thrust energy while guaranteeing the highest possible weld strengths.

Usage

Many corporations (Kodak, Immunex, Boeing, Johnson & Johnson, The Aerospace Corporation, Silverline Helicopters, and others) use customized versions of the Six Dimensional Non-Contact Reader w/ Integrated Holographic Optical Processing for applications ranging from supercomputer surface-mount pad assessment to genetic biochemical assay analysis.

Specifications

HoloVID belongs to a class of sensors known as structured-light 3D scanners. The use of structured light to extract three-dimensional shape information is a well-known technique.[1][2] The use of single planes of light to measure the distance and orientation of objects has been reported several times.[3][4][5]

The use of multiple planes[6][7][8] and multiple points[9][10] of light to measure shapes and construct volumetric estimates of objects has also been widely reported.[11]

The use of segmented phase holograms to selectively deflect portions of an image wavefront is unusual. The holographic optical components used in this device split tessellated segments of a returning wavefront into programmable bulk areas and shaped patches. This increases both the size of object that can be read and the measurable z-axis depth per point, while also increasing the number of simultaneous operations possible, a significant advance over the previous state of the art.

Operational modes

A laser beam is directed onto a target surface; the angle of incidence need not be orthogonal to the surface. The beam is reflected by the surface in a wide conical spread function which is geometrically related to the incidence angle, light wavelength and relative surface roughness. A portion of this reflected light enters the optical system coaxially, where a 'stop' shadows the edges. In a single-point reader, this edge is viewed along a radius by a photodiode array.
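The geometry described above is that of classical laser triangulation: the range to the surface can be recovered from where the reflected spot lands on the sensor. A minimal sketch follows; the focal length, baseline, and pixel pitch are illustrative assumptions, not HoloVID parameters.

```python
# Minimal laser-triangulation sketch; focal length, baseline and pixel
# pitch below are illustrative assumptions, not HoloVID parameters.

def triangulated_range(spot_px, f_mm=16.0, baseline_mm=50.0, pixel_mm=0.01):
    """Estimate target range from the imaged laser-spot offset.

    With the laser axis and camera axis separated by baseline_mm and a
    lens of focal length f_mm (simple pinhole model), a spot imaged
    spot_px pixels off the optical axis corresponds to z = f * b / x.
    """
    x_mm = spot_px * pixel_mm          # spot offset on the sensor, in mm
    if x_mm <= 0:
        raise ValueError("spot must be offset from the optical axis")
    return f_mm * baseline_mm / x_mm   # range in mm

print(triangulated_range(80))   # a spot 80 px off-axis reads roughly 1000 mm
```

Note the inverse relationship: as the target recedes, the spot moves toward the optical axis and range resolution degrades, which is why the z-axis depth per point is a key figure of merit for such sensors.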

The output of this device is a boxcar signal: the photodiodes are lit sequentially, diode by diode, as the object distance changes relative to the sensor, until either no diodes or all diodes are lit. The residual charge in each photodiode cell is a function of the bias current, the dark current and the incident radiation (in this case, the returning laser light).
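The boxcar readout can be sketched as follows. The dark-current floor, threshold, and millimetres-per-diode scale here are illustrative assumptions, not HoloVID values.

```python
# Hedged sketch of a "boxcar" photodiode readout: count the diodes lit
# above the dark-current floor and map the fill level to displacement.
# All thresholds and scales are illustrative assumptions.

def lit_count(readings, dark_current=0.05, threshold=0.5):
    """Count diodes whose signal exceeds the threshold after subtracting dark current."""
    return sum(1 for r in readings if (r - dark_current) > threshold)

def displacement(readings, mm_per_diode=0.1):
    """Map the boxcar fill level to a z-axis displacement.

    Returns None when no diode or every diode is lit, i.e. the target
    has left the sensor's working range, as the text notes.
    """
    n = lit_count(readings)
    if n == 0 or n == len(readings):
        return None
    return n * mm_per_diode

row = [0.9, 0.9, 0.8, 0.1, 0.1, 0.1, 0.1, 0.1]  # first three diodes lit
print(displacement(row))   # three diodes at 0.1 mm each, roughly 0.3 mm
```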

In the multipoint system, the HoloVID, the cursor point is acousto-optically scanned along the x-axis across a monaxial transformer. A monaxial holographic lens collects the wavefront and reconstructs the pattern onto a one-dimensional photodiode array and a two-dimensional matrix sensor. Image processing of the sensor data derives the correlation between the compressed wavefront and the actual physical object.
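One way such a scan could assemble a surface profile is sketched below, with the per-point ranging stubbed out by a placeholder callable. All names and numbers here are hypothetical illustrations, not the HoloVID's actual processing chain.

```python
# Hedged sketch: assembling a surface profile from a multipoint x-axis
# scan. The per-point ranging is a stub; names are illustrative only.

def scan_profile(read_point_range, n_points=16, x_pitch_mm=1.0):
    """Sweep the cursor point across the x-axis and collect (x, z) samples.

    read_point_range(i) stands in for the acousto-optic deflection plus
    photodiode-array readout at scan position i; it may be any callable
    returning a z-range in mm, or None when the point is out of range.
    """
    profile = []
    for i in range(n_points):
        z = read_point_range(i)
        if z is not None:                     # skip out-of-range points
            profile.append((i * x_pitch_mm, z))
    return profile

# Simulated flat web with a shallow 0.5 mm groove in the middle:
simulated = lambda i: 10.0 - (0.5 if 6 <= i <= 9 else 0.0)
print(scan_profile(simulated)[:4])
```

In a weld-feedback application like the External Tank seams, such a per-scan profile is what would be compared against the target bead shape in real time.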

Related Research Articles

Computer vision tasks include methods for acquiring, processing, analyzing and understanding digital images, and extraction of high-dimensional data from the real world in order to produce numerical or symbolic information, e.g. in the forms of decisions. Understanding in this context means the transformation of visual images into descriptions of the world that make sense to thought processes and can elicit appropriate action. This image understanding can be seen as the disentangling of symbolic information from image data using models constructed with the aid of geometry, physics, statistics, and learning theory.

Charge-coupled device

A charge-coupled device (CCD) is an integrated circuit containing an array of linked, or coupled, capacitors. Under the control of an external circuit, each capacitor can transfer its electric charge to a neighboring capacitor. CCD sensors are a major technology used in digital imaging.

Photodiode

A photodiode is a semiconductor diode sensitive to photon radiation, such as visible light, infrared or ultraviolet radiation, X-rays and gamma rays. It is a p–n semiconductor junction that produces a current or voltage (the photovoltaic effect) when it absorbs photons. The physics of electron excitation in photodiodes is similar to that of photoconductivity, as implemented in photoresistors or in photothyristor switches. Photodiodes can be used for detection and measurement applications, or optimized for the generation of electrical power in solar cells. They are used in a wide range of applications throughout the electromagnetic spectrum, from IR, visible-light and UV photocells to gamma-ray spectrometers.

Holography

Holography is a technique that enables a wavefront to be recorded and later reconstructed. It is best known as a method of generating three-dimensional images, and has a wide range of other uses, including data storage, microscopy, and interferometry. In principle, it is possible to make a hologram for any type of wave.

Photonics

Photonics is a branch of optics involving the generation, detection, and manipulation of light in the form of photons through emission, transmission, modulation, signal processing, switching, amplification, and sensing. Photonics is closely related to quantum electronics: quantum electronics deals with the theory, while photonics deals with its engineering applications. Though it covers all of light's technical applications over the whole spectrum, most photonic applications lie in the range of visible and near-infrared light. The term photonics developed as an outgrowth of the first practical semiconductor light emitters, invented in the early 1960s, and of optical fibers, developed in the 1970s.

Vertical-cavity surface-emitting laser

The vertical-cavity surface-emitting laser (VCSEL) is a type of semiconductor laser diode that emits its beam perpendicular to the top surface, in contrast to conventional edge-emitting semiconductor lasers, which emit from surfaces formed by cleaving the individual chip out of a wafer. VCSELs are used in various laser products, including computer mice, fiber-optic communications, laser printers, Face ID, and smartglasses.

Opto-isolator

An opto-isolator is an electronic component that transfers electrical signals between two isolated circuits by using light. Opto-isolators prevent high voltages from affecting the system receiving the signal. Commercially available opto-isolators withstand input-to-output voltages up to 10 kV and voltage transients with speeds up to 25 kV/μs.

Single-photon avalanche diode

A single-photon avalanche diode (SPAD), also called a Geiger-mode avalanche photodiode, is a solid-state photodetector in the same family as photodiodes and avalanche photodiodes (APDs), while also being fundamentally linked with basic diode behaviours. As with photodiodes and APDs, a SPAD is based around a semiconductor p–n junction that can be illuminated with ionizing radiation such as gamma, X-rays, beta and alpha particles, along with a wide portion of the electromagnetic spectrum from ultraviolet (UV) through the visible wavelengths and into the infrared (IR).

Photodetector

Photodetectors, also called photosensors, are sensors of light or other electromagnetic radiation. There are a wide variety of photodetectors, which may be classified by mechanism of detection, such as photoelectric or photochemical effects, or by various performance metrics, such as spectral response. Semiconductor-based photodetectors typically use a p–n junction that converts photons into charge; the absorbed photons create electron–hole pairs in the depletion region. Photodiodes and phototransistors are a few examples of photodetectors. Solar cells convert some of the light energy absorbed into electrical energy.

Optical neural network

An optical neural network is a physical implementation of an artificial neural network with optical components. Early optical neural networks used a photorefractive volume hologram to interconnect arrays of input neurons to arrays of output neurons, with synaptic weights in proportion to the multiplexed hologram's strength. Volume holograms were further multiplexed using spectral hole burning, adding a wavelength dimension to the spatial ones to achieve four-dimensional interconnects of two-dimensional arrays of neural inputs and outputs. This work led to extensive research on alternative methods using the strength of the optical interconnect for implementing neuronal communications.

Electronic component

An electronic component is any basic discrete electronic device, or physical entity that is part of an electronic system, used to affect electrons or their associated fields. Electronic components are mostly industrial products, available in singular form, and are not to be confused with electrical elements, which are conceptual abstractions representing idealized electronic components. A datasheet for an electronic component is a technical document that provides detailed information about the component's specifications, characteristics, and performance.

Coordinate-measuring machine

A coordinate-measuring machine (CMM) is a device that measures the geometry of physical objects by sensing discrete points on the surface of the object with a probe. Various types of probes are used in CMMs, the most common being mechanical and laser sensors, though optical and white light sensors do exist. Depending on the machine, the probe position may be manually controlled by an operator, or it may be computer controlled. CMMs typically specify a probe's position in terms of its displacement from a reference position in a three-dimensional Cartesian coordinate system. In addition to moving the probe along the X, Y, and Z axes, many machines also allow the probe angle to be controlled to allow measurement of surfaces that would otherwise be unreachable.

Image sensor

An image sensor or imager is a sensor that detects and conveys information used to form an image. It does so by converting the variable attenuation of light waves into signals, small bursts of current that convey the information. The waves can be light or other electromagnetic radiation. Image sensors are used in electronic imaging devices of both analog and digital types, which include digital cameras, camera modules, camera phones, optical mouse devices, medical imaging equipment, night vision equipment such as thermal imaging devices, radar, sonar, and others. As technology changes, electronic and digital imaging tends to replace chemical and analog imaging.


Active-pixel sensor

An active-pixel sensor (APS) is an image sensor, which was invented by Peter J.W. Noble in 1968, where each pixel sensor unit cell has a photodetector and one or more active transistors. In a metal–oxide–semiconductor (MOS) active-pixel sensor, MOS field-effect transistors (MOSFETs) are used as amplifiers. There are different types of APS, including the early NMOS APS and the now much more common complementary MOS (CMOS) APS, also known as the CMOS sensor. CMOS sensors are used in digital camera technologies such as cell phone cameras, web cameras, most modern digital pocket cameras, most digital single-lens reflex cameras (DSLRs), mirrorless interchangeable-lens cameras (MILCs), and lensless imaging for cells.

Time-of-flight camera

A time-of-flight camera, also known as time-of-flight sensor, is a range imaging camera system for measuring distances between the camera and the subject for each point of the image based on time-of-flight, the round trip time of an artificial light signal, as provided by a laser or an LED. Laser-based time-of-flight cameras are part of a broader class of scannerless LIDAR, in which the entire scene is captured with each laser pulse, as opposed to point-by-point with a laser beam such as in scanning LIDAR systems. Time-of-flight camera products for civil applications began to emerge around 2000, as the semiconductor processes allowed the production of components fast enough for such devices. The systems cover ranges of a few centimeters up to several kilometers.

Spreeta

Spreeta is an electro-optical device utilizing surface plasmon resonance to detect small changes in the refractive index of liquids. The Spreeta device was developed by Texas Instruments, Inc. in the 1990s. The design incorporates a light-emitting diode (LED) illuminating a thin metal film in the Kretschmann geometry. The reflected light is detected by a photodiode linear array, and the position of the resonance indicates the refractive index at the outer surface of the metal film. Applications include real-time measurement of the binding of antigens to antibodies attached to the sensor surface, monitoring changes in oil quality, and measuring sugar content in drinks.

Sensors for arc welding are devices which, as part of fully mechanised welding equipment, are capable of acquiring information about the position and, if possible, the geometry of the intended weld on the workpiece, and of providing the corresponding data in a suitable form for controlling the weld-torch position and, if possible, the arc-welding process parameters.

James R. Biard

James Robert Biard (1931–2022) was an American electrical engineer and inventor who held 73 U.S. patents. Some of his more significant patents include the first infrared light-emitting diode (LED), the optical isolator, Schottky-clamped logic circuits, silicon metal-oxide-semiconductor read-only memory, a low-bulk-leakage-current avalanche photodetector, and fiber-optic data links. In 1980, Biard joined the staff of Texas A&M University as an Adjunct Professor of Electrical Engineering. In 1991, he was elected a member of the National Academy of Engineering for contributions to semiconductor light-emitting diodes and lasers, Schottky-clamped logic, and read-only memories.

References

  1. Agin, Gerald J. (February 1979). "Real Time Control of a Robot with a Mobile Camera" (Document). SRI International, Artificial Intelligence Center. Technical Note 179.
  2. Bolles, Robert C.; Fischler, Martin A. (24 August 1981). "A RANSAC-based approach to model fitting and its application to finding cylinders in range data". Proceedings of the 7th International Joint Conference on Artificial Intelligence. Vol. 2. pp. 637–643.
  3. Posdamer, J. L.; Altschuler, M. D. (January 1982). "Surface Measurement by Space-encoded Projected Beam Systems". Computer Graphics and Image Processing. 18 (1): 1–17. doi:10.1016/0146-664X(82)90096-X.
  4. Popplestone, R. J.; Brown, C. M.; Ambler, A. P.; Crawford, G. F. (3 September 1975). "Forming Models of Plane-and-Cylinder Faceted Bodies from Light Stripes" (PDF). Proceedings of the 4th International Joint Conference on Artificial Intelligence. Vol. 1. pp. 664–668.
  5. Oshima, Masaki; Shirai, Yoshiaki (April 1983). "Object Recognition Using Three-Dimensional Information" (PDF). IEEE Transactions on Pattern Analysis and Machine Intelligence. 5 (4): 353–361. doi:10.1109/TPAMI.1983.4767405. PMID 21869120. S2CID 17612273. Archived from the original (PDF) on 2016-10-19.
  6. Albus, J.; Kent, E.; Nashman, M.; Mansbach, P.; Palombo, L.; Shneier, M. (22 November 1982). "Six-Dimensional Vision System". In Rosenfeld, Azriel (ed.). Proceedings of the SPIE: Robot Vision. Vol. 0336. pp. 142–153. Bibcode:1982SPIE..336..142A. doi:10.1117/12.933622. S2CID 64868995.
  7. Okada, S. (1973). "Welding machine using shape detector". Mitsubishi-Denki-Giho (in Japanese). 47 (2): 157.
  8. Taenzer, Dave (1975). "Progress Report on Visual Inspection of Solder Joints" (Document). Massachusetts Institute of Technology, Artificial Intelligence Lab. Working Paper 96.
  9. Nakagawa, Yasuo (22 November 1982). "Automatic Visual Inspection of Solder Joints on Printed Circuit Boards". In Rosenfeld, Azriel (ed.). Proceedings of the SPIE: Robot Vision. Vol. 0336. pp. 121–127. Bibcode:1982SPIE..336..121N. doi:10.1117/12.933619. S2CID 109280087.
  10. Duda, R. O.; Nitzan, D. (March 1976). "Low-level processing of registered range and intensity data" (Document). SRI International, Artificial Intelligence Center. Technical Note 129.
  11. Nitzan, David; Brain, Alfred E.; Duda, Richard O. (February 1977). "The Measurement and Use of Registered Reflectance and Range Data in Scene Analysis". Proceedings of the IEEE. Vol. 65. pp. 206–220. doi:10.1109/PROC.1977.10458. S2CID 8234002.