Computer-generated holography (CGH) is a technique that uses computer algorithms to generate holographic interference patterns. A computer-generated hologram can be displayed on a dynamic holographic display, or it can be printed onto a mask or film using lithography. [1] When a hologram is printed onto a mask or film, it is illuminated by a coherent light source to display the holographic images.
The term "computer-generated holography" has become used to denote the whole process chain of synthetically preparing holographic light wavefronts suitable for observation. [2] [3] If holographic data of existing objects is generated optically and recorded and processed digitally, and subsequently displayed, this is termed CGH as well.
Compared to classical holograms, computer-generated holograms have the advantage that the objects to be displayed do not have to possess any physical reality; they can be generated entirely synthetically.
Ultimately, computer-generated holography might expand upon all the roles of current computer-generated imagery. Holographic computer displays might be used for a wide range of applications, for example computer-aided design (CAD), gaming, and holographic video.
Holography is a technique originally invented by Hungarian physicist Dennis Gabor (1900–1979) to improve the resolving power of electron microscopes. An object is illuminated with a coherent (usually monochromatic) light beam; the scattered light is brought to interference with a reference beam from the same source, and the interference pattern is recorded. CGH as defined in the introduction has broadly three tasks: computing the virtual scattered wavefront of the object, encoding that wavefront onto a spatial light modulator, mask or film, and reconstructing the wavefront by modulating it onto a coherent light beam. It is not always justified to make a strict distinction between these steps, but structuring the discussion in this way is helpful.
Computer-generated holograms offer important advantages over optical holograms, since there is no need for a real object. Because of this breakthrough, a three-dimensional display was expected as soon as the first algorithms were reported in 1966. [4]
Unfortunately, researchers soon realized that there are significant limits on computational speed and on image quality and fidelity. Wavefront calculations are computationally very intensive; even with modern mathematical techniques and high-end computing equipment, real-time computation remains difficult. There are many different methods for calculating the interference pattern for a CGH. Over the following 25 years, many methods for computer-generated holograms were proposed, addressing holographic information and computational reduction as well as computational and quantization techniques. [5] [6] [7] [8] [9] [10] [11] The algorithms can be categorized into two main concepts: Fourier transform holograms and point source holograms.
One of the more prevalent methods that can be used to generate phase-only holograms is the Gerchberg-Saxton (GS) algorithm. [12] [13]
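As an illustration, the following is a minimal sketch of the GS iteration for a phase-only hologram whose far field should reproduce a given target amplitude. The function name, the use of NumPy FFTs to model the lens transform, and the fixed iteration count are illustrative assumptions, not a reference implementation.

```python
import numpy as np

def gerchberg_saxton(target_amplitude, n_iters=100, seed=0):
    """Sketch of the Gerchberg-Saxton loop for a phase-only hologram:
    iterate between the hologram (SLM) plane and the far-field (image) plane,
    enforcing unit amplitude in the SLM plane and the target amplitude in the
    image plane, keeping only the phases."""
    rng = np.random.default_rng(seed)
    slm_phase = rng.uniform(0, 2 * np.pi, target_amplitude.shape)
    for _ in range(n_iters):
        # propagate to the far field (the Fourier transform models the lens)
        far_field = np.fft.fft2(np.exp(1j * slm_phase))
        # keep the far-field phase, impose the desired image amplitude
        constrained = target_amplitude * np.exp(1j * np.angle(far_field))
        # propagate back and keep only the phase (phase-only SLM constraint)
        slm_phase = np.angle(np.fft.ifft2(constrained))
    return slm_phase
```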
In the first approach, the Fourier transform is used to simulate the propagation of each depth plane of the object to the hologram plane. The Fourier-transform concept was first introduced by Byron R. Brown and Adolf W. Lohmann [4] with the detour-phase method, leading to cell-oriented holograms. A coding technique suggested by Burch [14] replaced the cell-oriented holograms by point holograms and made this kind of computer-generated hologram more attractive. In a Fourier-transform hologram the reconstruction of the image occurs in the far field. This is usually achieved by exploiting the Fourier-transforming property of a positive lens for reconstruction. There are thus two steps in this process: computing the light field in the far observer plane, and then Fourier transforming this field back to the lens plane. Such holograms are called Fourier-based holograms. The first CGHs based on the Fourier transform could reconstruct only 2D images. Brown and Lohmann [15] introduced a technique to calculate computer-generated holograms of 3D objects. The light propagation from three-dimensional objects is calculated according to the usual parabolic approximation to the Fresnel–Kirchhoff diffraction integral. The wavefront to be reconstructed by the hologram is therefore the superposition of the Fourier transforms of each object plane in depth, modified by a quadratic phase factor.
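As a rough sketch of this superposition, the following hypothetical NumPy function propagates each object depth plane to the hologram plane with a single-FFT Fresnel step (object-plane quadratic phase factor followed by a Fourier transform) and sums the contributions. It ignores the z-dependent output sampling of the single-FFT method, and all function and parameter names are illustrative.

```python
import numpy as np

def fresnel_hologram(planes, depths, wavelength, pixel_pitch):
    """Superpose single-FFT Fresnel propagations of each object depth plane
    to a common hologram plane (parabolic approximation)."""
    k = 2 * np.pi / wavelength
    n = planes[0].shape[0]                      # assume square n x n planes
    x = (np.arange(n) - n / 2) * pixel_pitch
    X, Y = np.meshgrid(x, x)
    hologram_field = np.zeros((n, n), dtype=complex)
    for obj, z in zip(planes, depths):
        # quadratic phase factor applied in the object plane (Fresnel kernel)
        chirp = np.exp(1j * k * (X**2 + Y**2) / (2 * z))
        # Fourier transform of the chirped plane approximates propagation over z
        field = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(obj * chirp)))
        hologram_field += field * np.exp(1j * k * z) / (1j * wavelength * z)
    return hologram_field
```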
The second computational strategy is based on the point-source concept, where the object is broken down into self-luminous points. An elementary hologram is calculated for every point source and the final hologram is synthesized by superimposing all the elementary holograms. This concept was first reported by Waters, [16] whose major assumption originated with Rogers, [17] who recognized that a Fresnel zone plate could be considered a special case of the hologram proposed by Gabor. But because most of the object points are non-zero, the computational complexity of the point-source concept is much higher than that of the Fourier-transform concept. Some researchers tried to overcome this drawback by predefining and storing all the possible elementary holograms using special data storage techniques, [18] given the huge capacity that is needed in this case; others used special hardware. [19]
In the point-source concept the major problem is the trade-off between data storage capacity and computational speed. In particular, algorithms that increase computational speed usually have much greater data storage requirements [18] while algorithms that reduce data storage requirements have high computational complexity [20] [21] [22] (though some optimizations are possible [23] ).
Another concept which leads to point-source CGHs is the ray-tracing method. Ray tracing is perhaps the simplest method of computer-generated holography to visualize. Essentially, the difference in path length between a virtual "reference beam" and a virtual "object beam" is calculated for each hologram point; this gives the relative phase of the scattered object beam.
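A minimal sketch of this idea, assuming a unit-amplitude on-axis plane reference wave and NumPy as the numerical backend (both illustrative choices), accumulates a spherical wavelet from every object point at each hologram pixel and records the resulting interference intensity:

```python
import numpy as np

def ray_traced_hologram(points, amplitudes, wavelength, pixel_pitch, n_pixels):
    """Point-source / ray-tracing sketch: for every hologram pixel, sum the
    spherical waves from each object point and interfere the result with an
    on-axis plane reference wave."""
    k = 2 * np.pi / wavelength
    x = (np.arange(n_pixels) - n_pixels / 2) * pixel_pitch
    X, Y = np.meshgrid(x, x)
    object_field = np.zeros((n_pixels, n_pixels), dtype=complex)
    for (px, py, pz), a in zip(points, amplitudes):
        # path length from the object point to each hologram pixel
        r = np.sqrt((X - px)**2 + (Y - py)**2 + pz**2)
        object_field += a * np.exp(1j * k * r) / r    # spherical wavelet
    reference = 1.0                                    # unit-amplitude plane wave
    return np.abs(object_field + reference)**2         # interference (intensity) pattern
```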
Over the last three decades, both concepts have made remarkable progress in improving computational speed and image quality. However, technical restrictions such as computational and storage capacity still burden digital holography, making real-time applications almost impossible with current standard computer hardware.
Once it is known what the scattered wavefront of the object looks like, or how it may be computed, it must be fixed on a spatial light modulator (SLM), using the term loosely to include not only LCD displays and similar devices, but also films and masks. Several types of SLM are available: pure phase modulators (retarding the illuminating wave), pure amplitude modulators (blocking the illuminating light), polarization modulators (influencing the polarization state of light), [24] and SLMs capable of combined phase/amplitude modulation. [25]
In the case of pure phase or pure amplitude modulation, quality losses are clearly unavoidable. Early pure-amplitude holograms were simply printed in black and white, meaning that the amplitude had to be encoded with only one bit of depth. [4] Similarly, the kinoform is a pure-phase encoding invented at IBM in the early days of CGH. [26]
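For illustration, here is a hypothetical sketch of these two encodings, assuming the complex object field at the hologram plane has already been computed; the tilted-plane-wave carrier and the median threshold are arbitrary example choices.

```python
import numpy as np

def binary_amplitude_hologram(object_field, carrier_freq, pixel_pitch):
    """1-bit amplitude encoding: interfere the computed object field with a
    tilted plane reference wave and threshold the pattern to black/white."""
    n = object_field.shape[1]
    x = np.arange(n) * pixel_pitch
    reference = np.exp(2j * np.pi * carrier_freq * x)[None, :]   # tilted plane wave
    pattern = np.abs(object_field + reference)**2
    return (pattern > np.median(pattern)).astype(np.uint8)       # binary mask

def kinoform(object_field):
    """Pure-phase (kinoform) encoding: keep only the phase of the computed
    field, implicitly assuming its amplitude is nearly uniform."""
    return np.exp(1j * np.angle(object_field))
```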
Even though fully complex phase/amplitude modulation would be ideal, a pure-phase or pure-amplitude solution is normally preferred because it is much easier to implement technologically. Nevertheless, for the creation of complicated light distributions, simultaneous modulation of amplitude and phase is desirable. Two different approaches to amplitude-phase modulation have been implemented so far. One is based on phase-only or amplitude-only modulation followed by spatial filtering, [27] the other on polarization holograms with variable orientation and magnitude of local birefringence. [28] Holograms with a constraint, such as phase-only or amplitude-only, may be computed via algorithms such as the Gerchberg–Saxton algorithm or more general optimisation algorithms such as direct search, simulated annealing [29] or stochastic gradient descent using, for example, TensorFlow. [30]
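As an example of a constrained direct-search optimisation (not the specific algorithms of the cited works), the following sketch flips individual pixels of a binary amplitude hologram and keeps a flip only if it reduces the error between the replay field and a target intensity normalised to unit sum; recomputing the full FFT per trial flip is deliberately naive.

```python
import numpy as np

def direct_binary_search(target_intensity, n_iters=20000, seed=0):
    """Direct-search sketch: start from a random binary (amplitude-only)
    hologram and keep single-pixel flips that reduce the far-field error."""
    rng = np.random.default_rng(seed)
    holo = rng.integers(0, 2, target_intensity.shape).astype(float)

    def error(h):
        replay = np.abs(np.fft.fftshift(np.fft.fft2(h)))**2
        replay /= replay.sum()                         # normalise replay energy
        return np.sum((replay - target_intensity)**2)

    best = error(holo)
    for _ in range(n_iters):
        i = rng.integers(0, holo.shape[0])
        j = rng.integers(0, holo.shape[1])
        holo[i, j] = 1 - holo[i, j]                    # trial flip
        trial = error(holo)
        if trial < best:
            best = trial                               # keep the improvement
        else:
            holo[i, j] = 1 - holo[i, j]                # revert the flip
    return holo
```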
The third (technical) issue is beam modulation and actual wavefront reconstruction. Masks may be printed, often resulting in a grainy pattern structure, since most printers can make only dots (albeit very small ones). Films may be developed by laser exposure. Holographic displays are still a challenge (as of 2008), although successful prototypes have been built. An ideal display for computer-generated holograms would consist of pixels smaller than a wavelength of light with adjustable phase and brightness. Such displays have been called phased array optics. [31] Further progress in nanotechnology is required to build them.
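A back-of-the-envelope illustration of why sub-wavelength pixels matter uses the grating-equation limit that a pixel pitch p restricts the diffraction half-angle to roughly arcsin(λ/(2p)); the wavelength and pitches below are arbitrary example values.

```python
import numpy as np

wavelength = 532e-9                      # green laser, metres (example value)
for pitch in (8e-6, 1e-6, 0.4e-6):       # coarse SLM, fine SLM, sub-wavelength
    # cap the argument at 1.0: a sub-wavelength pitch diffracts into the full hemisphere
    angle = np.degrees(np.arcsin(min(1.0, wavelength / (2 * pitch))))
    print(f"pixel pitch {pitch*1e6:4.1f} um -> max diffraction angle {angle:5.1f} deg")
```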
Several companies and university departments are currently conducting research in the field of CGH devices.
Recently, computer-generated holography has been extended beyond light optics and applied to generating structured electron wavefunctions with a desired amplitude and phase profile. The computer-generated holograms are designed by the interference of a target wave with a reference wave, which could be, for example, a plane-like wave slightly tilted in one direction. The holographic diffractive optical elements used are usually constructed from thin membranes of materials such as silicon nitride.
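Purely as an illustration of this interference-based design, with a vortex phase chosen as an arbitrary example target profile, a hypothetical sketch producing the resulting "forked" grating pattern might look as follows.

```python
import numpy as np

def forked_grating(n, period_px, charge):
    """Encode an example target phase profile (charge * theta, a vortex phase)
    by interference with a plane reference wave tilted in one direction,
    yielding a forked grating transmission pattern."""
    y, x = np.indices((n, n)) - n / 2
    theta = np.arctan2(y, x)                         # azimuthal angle
    target_phase = charge * theta                    # desired phase profile
    reference_phase = 2 * np.pi * x / period_px      # tilt in one direction
    # transmission pattern of the interference of target and reference waves
    return 0.5 * (1 + np.cos(reference_phase - target_phase))
```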
Holography is a technique that enables a wavefront to be recorded and later reconstructed. It is best known as a method of generating three-dimensional images, and has a wide range of other uses, including data storage, microscopy, and interferometry. In principle, it is possible to make a hologram for any type of wave.
Interferometry is a technique which uses the interference of superimposed waves to extract information. Interferometry typically uses electromagnetic waves and is an important investigative technique in the fields of astronomy, fiber optics, engineering metrology, optical metrology, oceanography, seismology, spectroscopy, quantum mechanics, nuclear and particle physics, plasma physics, biomolecular interactions, surface profiling, microfluidics, mechanical stress/strain measurement, velocimetry, optometry, and making holograms.
Adaptive optics (AO) is a technique of precisely deforming a mirror in order to compensate for light distortion. It is used in astronomical telescopes and laser communication systems to remove the effects of atmospheric distortion, in microscopy, optical fabrication and in retinal imaging systems to reduce optical aberrations. Adaptive optics works by measuring the distortions in a wavefront and compensating for them with a device that corrects those errors such as a deformable mirror or a liquid crystal array.
A spatial light modulator (SLM) is a device that can control the intensity, phase, or polarization of light in a spatially varying manner. A simple example is an overhead projector transparency. Usually when the term SLM is used, it means that the transparency can be controlled by a computer.
An optical neural network (ONN) is a physical implementation of an artificial neural network with optical components.
Holonomic brain theory is a branch of neuroscience investigating the idea that human consciousness is formed by quantum effects in or between brain cells. Holonomic refers to representations in a Hilbert phase space defined by both spectral and space-time coordinates. Holonomic brain theory is opposed by traditional neuroscience, which investigates the brain's behavior by looking at patterns of neurons and the surrounding chemistry.
Optical computing or photonic computing uses light waves produced by lasers or incoherent sources for data processing, data storage or data communication for computing. For decades, photons have shown promise to enable a higher bandwidth than the electrons used in conventional computers.
The Gerchberg–Saxton (GS) algorithm is an iterative phase retrieval algorithm for retrieving the phase of a complex-valued wavefront from two intensity measurements acquired in two different planes. Typically, the two planes are the image plane and the far field (diffraction) plane, and the wavefront propagation between these two planes is given by the Fourier transform. The original paper by Gerchberg and Saxton considered image and diffraction pattern of a sample acquired in an electron microscope.
Holographic interferometry (HI) is a technique which enables the measurements of static and dynamic displacements of objects with optically rough surfaces at optical interferometric precision. These measurements can be applied to stress, strain and vibration analysis, as well as to non-destructive testing and radiation dosimetry. It can also be used to detect optical path length variations in transparent media, which enables, for example, fluid flow to be visualised and analyzed. It can also be used to generate contours representing the form of the surface.
Digital holography is the acquisition and processing of holograms with a digital sensor array, typically a CCD camera or a similar device. Image rendering, or reconstruction of object data, is performed numerically from digitized interferograms. Digital holography offers a means of measuring optical phase data and typically delivers three-dimensional surface or optical thickness images. Several recording and processing schemes have been developed to assess optical wave characteristics such as amplitude, phase, and polarization state, which make digital holography a very powerful method for metrology applications.
Phased-array optics is the technology of controlling the phase and amplitude of light waves transmitted, reflected, or captured (received) by a two-dimensional surface using adjustable surface elements. An optical phased array (OPA) is the optical analog of a radio-wave phased array. By dynamically controlling the optical properties of a surface on a microscopic scale, it is possible to steer the direction of light beams, or the view direction of sensors, without any moving parts. Phased-array beam steering is used for optical switching and multiplexing in optoelectronic devices and for aiming laser beams on a macroscopic scale.
Interferometric microscopy or imaging interferometric microscopy is the concept of microscopy which is related to holography, synthetic-aperture imaging, and off-axis-dark-field illumination techniques. Interferometric microscopy allows enhancement of the resolution of optical microscopy through interferometric (holographic) registration of several partial images and their numerical combination.
Digital holographic microscopy (DHM) is digital holography applied to microscopy. Digital holographic microscopy distinguishes itself from other microscopy methods by not recording the projected image of the object. Instead, the light wave front information originating from the object is digitally recorded as a hologram, from which a computer calculates the object image by using a numerical reconstruction algorithm. The image forming lens in traditional microscopy is thus replaced by a computer algorithm. Other closely related microscopy methods to digital holographic microscopy are interferometric microscopy, optical coherence tomography and diffraction phase microscopy. Common to all methods is the use of a reference wave front to obtain amplitude (intensity) and phase information. The information is recorded on a digital image sensor or by a photodetector from which an image of the object is created (reconstructed) by a computer. In traditional microscopy, which does not use a reference wave front, only intensity information is recorded and essential information about the object is lost.
Specular holography is a technique for making three dimensional imagery by controlling the motion of specular glints on a two-dimensional surface. The image is made of many specularities and has the appearance of a 3D surface-stippling made of dots of light. Unlike conventional wavefront holograms, specular holograms do not depend on wave optics, photographic media, or lasers.
A common-path interferometer is a class of interferometers in which the reference beam and sample beams travel along the same path. Examples include the Sagnac interferometer, Zernike phase-contrast interferometer, and the point diffraction interferometer. A common-path interferometer is generally more robust to environmental vibrations than a "double-path interferometer" such as the Michelson interferometer or the Mach–Zehnder interferometer. Although travelling along the same path, the reference and sample beams may travel along opposite directions, or they may travel along the same direction but with the same or different polarization.
White light interferometry is a non-contact optical method for surface height measurement on 3D structures with surface profiles varying between tens of nanometers and a few centimeters. It is often used as an alternative name for coherence scanning interferometry in the context of areal surface topography instrumentation that relies on spectrally-broadband, visible-wavelength light.
Quantitative phase contrast microscopy or quantitative phase imaging are the collective names for a group of microscopy methods that quantify the phase shift that occurs when light waves pass through a more optically dense object.
Optical holography is a technique which enables an optical wavefront to be recorded and later re-constructed. Holography is best known as a method of generating three-dimensional images but it also has a wide range of other applications.
The time-domain counterpart of spatial holography is called time-domain holography; in other words, the principles of spatial holography are applied in the time domain. Time-domain holography was inspired by the theory known as space-time duality, introduced by Brian H. Kolner in 1994.
Joseph Rosen is the Benjamin H. Swig Professor in Optoelectronics at the School of Electrical & Computer Engineering of Ben-Gurion University of the Negev, Israel.