A light field camera, also known as a plenoptic camera, is a camera that captures information about the light field emanating from a scene; that is, the intensity of light in a scene, and also the precise direction that the light rays are traveling in space. This contrasts with conventional cameras, which record only light intensity at various wavelengths.
One type uses an array of micro-lenses placed in front of an otherwise conventional image sensor to sense intensity, color, and directional information. Multi-camera arrays are another type. A holographic image is a type of film-based light field image.
The first light field camera was proposed by Gabriel Lippmann in 1908. He called his concept "integral photography". Lippmann's experimental results included crude integral photographs made by using a plastic sheet embossed with a regular array of microlenses, or by partially embedding small glass beads, closely packed in a random pattern, into the surface of the photographic emulsion.
In 1992, Adelson and Wang proposed a design that reduced the correspondence problem in stereo matching. [1] To achieve this, an array of microlenses is placed at the focal plane of the camera main lens. The image sensor is positioned slightly behind the microlenses. Using such images, the displacement of image parts that are not in focus can be analyzed and depth information can be extracted.
The "standard plenoptic camera" is a mathematical model used by researchers to compare designs. By definition it has microlenses placed one focal length away from the image plane of a sensor. [2] [3] [4] In 2004, a team at Stanford University Computer Graphics Laboratory used a 16-megapixel camera to demonstrate that pictures can be refocused after they are taken. The system used a 90,000-microlens array, yielding a final image resolution of 90 kilopixels. [2] Research has shown that the maximum baseline of this design is confined to the main lens's entrance pupil size, which is small relative to stereoscopic setups. [1] [5] This implies that the "standard plenoptic camera" may be best suited to close-range applications, as it exhibits increased depth resolution at distances that can be metrically predicted from the camera's parameters. [6]
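Refocusing of this kind is commonly implemented as a "shift-and-add" over the sub-aperture views of the captured 4D light field: each view is translated in proportion to its offset from the aperture center, then all views are averaged. A minimal numpy sketch of the idea, assuming a light field array indexed `[u, v, s, t]` and a simple integer-shift model (real pipelines interpolate sub-pixel shifts; the function name and shift formula are illustrative):

```python
import numpy as np

def refocus(lightfield, alpha):
    """Synthetic refocusing by shift-and-add (illustrative sketch).

    lightfield: 4D array indexed [u, v, s, t]; (u, v) selects a
        sub-aperture view, (s, t) are spatial pixel coordinates.
    alpha: relative focal depth; alpha = 1 reproduces the captured focus.
    """
    U, V, S, T = lightfield.shape
    cu, cv = (U - 1) / 2.0, (V - 1) / 2.0
    out = np.zeros((S, T))
    for u in range(U):
        for v in range(V):
            # Each view is shifted in proportion to its angular offset
            # from the central view; rounding stands in for interpolation.
            du = int(round((u - cu) * (1 - 1.0 / alpha)))
            dv = int(round((v - cv) * (1 - 1.0 / alpha)))
            out += np.roll(lightfield[u, v], shift=(du, dv), axis=(0, 1))
    return out / (U * V)
```

Averaging many slightly shifted views blurs scene content away from the chosen depth while reinforcing content at it, which is why the trade described above between spatial and angular samples directly limits refocused image quality.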
Lumsdaine and Georgiev described a design in which the microlens array can be positioned before or behind the focal plane of the main lens. This modification samples the light field in a way that trades angular resolution for higher spatial resolution. With this design, images can be refocused with a much higher spatial resolution than images from a standard plenoptic camera. However, the lower angular resolution can introduce aliasing artifacts.
A design that used a low-cost printed film mask instead of a microlens array was proposed in 2007. [7] This design reduces the chromatic aberrations and loss of boundary pixels seen in microlens arrays, and allows greater spatial resolution. However, the mask-based design reduces the amount of light that reaches the image sensor, reducing brightness.
In 2022, NIST announced a device with a focal range of 3 cm (1.2 in) to 1.7 km (1.1 mi). The device employed a 39 × 39-element titanium dioxide metalens array. Each rectangular metalens is either right- or left-circularly polarized, giving it a different focal length. Light is routed separately through the shorter and longer sides of the rectangle, producing two focal points in the image. Differences among the metalenses were corrected algorithmically. [14] [15]
Lytro was founded by Stanford University Computer Graphics Laboratory alumnus Ren Ng to commercialize the light field camera he developed as a graduate student. [16] Lytro's light field sensor uses an array of micro-lenses placed in front of an otherwise conventional image sensor to sense intensity, color, and directional information. [17] Software then uses this data to create displayable 2D or 3D images. [18] Lytro trades maximum 2D resolution at a given distance for enhanced resolution at other distances. Users can convert the Lytro camera's proprietary image into a regular 2D image file at any desired focal distance. The first-generation camera has a maximum 2D resolution of 1080 × 1080 pixels (roughly 1.2 megapixels). [20] The later Illum has a maximum 2D resolution of 2450 × 1634 (4.0 megapixels) and a 3D light field resolution of 40 "megarays". [19] Lytro ceased operations in March 2018. [21]
Raytrix has offered several models of plenoptic cameras for industrial and scientific applications since 2010, with resolutions starting from 1 megapixel. [22] [23]
d'Optron and Rebellion Photonics offer plenoptic cameras, specializing in microscopy and gas leak detection, respectively. [citation needed]
Stanford University Computer Graphics Laboratory developed a prototype light field microscope using a microlens array similar to the one used in their light field camera. The prototype is built around a Nikon Eclipse transmitted light microscope/wide-field fluorescence microscope and standard CCD cameras. Light field capture is obtained by a module containing a microlens array and other optical components placed in the light path between the objective lens and camera, with the final multifocused image rendered using deconvolution. [24] [25] [26]
A later prototype added a light field illumination system consisting of a video projector (allowing computational control of illumination) and a second microlens array in the illumination light path of the microscope. The addition of a light field illumination system both allowed for additional types of illumination (such as oblique illumination and quasi-dark-field) and correction for optical aberrations. [25]
The Adobe light field camera is a prototype 100-megapixel camera that takes a three-dimensional photo of the scene in focus using 19 uniquely configured lenses. Each lens takes a 5.2-megapixel photo of the scene. Each image can be focused later in any way. [27]
CAFADIS is a plenoptic camera developed by University of La Laguna (Spain). [28] CAFADIS stands (in Spanish) for phase-distance camera, since it can be used for distance and optical wavefront estimation. From a single shot it can produce images focused at different distances, depth maps, all-in-focus images and stereo pairs. A similar optical design can be used in adaptive optics in astrophysics.
Mitsubishi Electric Research Laboratories's (MERL) light field camera [7] is based on the principle of optical heterodyning and uses a printed film (mask) placed close to the sensor. Any hand-held camera can be converted into a light field camera using this technology by simply inserting a low-cost film on top of the sensor. [29] A mask-based design avoids the problem of loss of resolution, since a high-resolution photo can be generated for the focused parts of the scene.
Pelican Imaging has thin multi-camera array systems intended for consumer electronics. Pelican's systems use from 4 to 16 closely spaced micro-cameras instead of a micro-lens array image sensor. [30] Nokia invested in Pelican Imaging to produce a plenoptic camera system with 16-lens array that was expected to be implemented in Nokia smartphones in 2014. [31] Pelican moved to designing supplementary cameras that add depth-sensing capabilities to a device's main camera, rather than stand-alone array cameras. [32]
A collaboration between University of Bedfordshire and ARRI resulted in a custom-made plenoptic camera with a ray model for the validation of light-field geometries and real object distances. [4] [5]
In November 2021, the German company K|Lens [33] announced on Kickstarter the first light field lens available for standard lens mounts. The project was canceled in January 2022.
The modification of standard digital cameras requires little more than suitable sheets of micro-lens material, hence a number of hobbyists have produced cameras whose images can be processed to give either selective depth of field or direction information. [34]
In a 2017 study, researchers observed that incorporation of light field photographed images into an online anatomy module did not result in better learning outcomes compared to an identical module with traditional photographs of dissected cadavers. [35]
Plenoptic cameras are good for imaging fast-moving objects that outstrip autofocus capabilities, and for imaging objects where autofocus is not practical such as with security cameras. [36] A recording from a security camera based upon plenoptic technology could be used to produce an accurate 3D model of a subject. [37]
Lytro Desktop is a cross-platform application for rendering light field photographs taken by Lytro cameras. It remains closed source and has not been maintained since Google's acquisition of Lytro. [21] Several open-source tools have since been released. A MATLAB toolbox for processing images from Lytro-type cameras is available. [38] PlenoptiCam is a cross-platform GUI application that supports both Lytro and custom-built plenoptic cameras; its source code is available online. [39]
The depth of field (DOF) is the distance between the nearest and the farthest objects that are in acceptably sharp focus in an image captured with a camera. See also the closely related depth of focus.
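These near and far limits follow from standard thin-lens formulas, via the hyperfocal distance. A short sketch using the conventional symbols (f: focal length, N: f-number, c: circle-of-confusion diameter, s: subject distance, all in millimetres; the function name is illustrative):

```python
def dof_limits(f, N, c, s):
    """Near and far limits of acceptably sharp focus (all lengths in mm).

    f: focal length, N: f-number, c: circle-of-confusion diameter,
    s: distance from lens to subject.
    """
    H = f ** 2 / (N * c) + f                  # hyperfocal distance
    near = s * (H - f) / (H + s - 2 * f)
    # Beyond the hyperfocal distance the far limit extends to infinity.
    far = s * (H - f) / (H - s) if s < H else float("inf")
    return near, far

# Example: 50 mm lens at f/8, c = 0.03 mm, subject at 5 m.
near, far = dof_limits(50, 8, 0.03, 5000)
```

For these values the zone of acceptable sharpness runs from roughly 3.4 m to 9.5 m, asymmetrically placed around the 5 m subject, which matches the familiar rule that DOF extends farther behind the focus plane than in front of it.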
In photography, bokeh is the aesthetic quality of the blur produced in out-of-focus parts of an image, whether foreground or background or both. It is created by using a wide aperture lens.
A camera lens is an optical lens or assembly of lenses used in conjunction with a camera body and mechanism to make images of objects either on photographic film or on other media capable of storing an image chemically or electronically.
A light field, or lightfield, is a vector function that describes the amount of light flowing in every direction through every point in a space. The space of all possible light rays is given by the five-dimensional plenoptic function, and the magnitude of each ray is given by its radiance. Michael Faraday was the first to propose that light should be interpreted as a field, much like the magnetic fields on which he had been working. The term light field was coined by Andrey Gershun in a classic 1936 paper on the radiometric properties of light in three-dimensional space.
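In free space, radiance is constant along a ray, so the 5D plenoptic function reduces to a 4D light field, which in practice is often sampled under a two-plane parametrization and stored as a discrete array. A minimal numpy sketch (the index convention `L[u, v, s, t]` and the array shapes are illustrative assumptions):

```python
import numpy as np

# Toy 4D light field L[u, v, s, t]: (u, v) indexes angular position on
# the aperture plane, (s, t) spatial position on the image plane.
rng = np.random.default_rng(0)
L = rng.random((5, 5, 64, 64))

# A sub-aperture image: fixing the angular coordinates gives the scene
# as seen through one small region of the aperture.
view = L[2, 2]            # shape (64, 64)

# An epipolar-plane image: fixing one angular and one spatial coordinate
# gives a 2D slice in which scene depth appears as the slope of lines.
epi = L[:, 2, 32, :]      # shape (5, 64)
```

Sub-aperture views and epipolar slices like these are the raw material for the refocusing and depth-estimation operations described elsewhere in this article.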
Macro photography is extreme close-up photography, usually of very small subjects and living organisms like insects, in which the size of the subject in the photograph is greater than life-size. By the original definition, a macro photograph is one in which the size of the subject on the negative or image sensor is life-size or greater. In some senses, however, it refers to a finished photograph of a subject that is greater than life-size.
A digital single-lens reflex camera is a digital camera that combines the optics and mechanisms of a single-lens reflex camera with a solid-state image sensor and digitally records the images from the sensor.
In photography and optics, vignetting is a reduction of an image's brightness or saturation toward the periphery compared to the image center. The word vignette, from the same root as vine, originally referred to a decorative border in a book. Later, the word came to be used for a photographic portrait that is clear at the center and fades off toward the edges. A similar effect is visible in photographs of projected images or videos off a projection screen, resulting in a so-called "hotspot" effect.
The following are common definitions related to the machine vision field.
Integral imaging is a three-dimensional imaging technique that captures and reproduces a light field by using a two-dimensional array of microlenses, sometimes called a fly's-eye lens, normally without the aid of a larger overall objective or viewing lens. In capture mode, in which a film or detector is coupled to the microlens array, each microlens allows an image of the subject as seen from the viewpoint of that lens's location to be acquired. In reproduction mode, in which an object or source array is coupled to the microlens array, each microlens allows each observing eye to see only the area of the associated micro-image containing the portion of the subject that would have been visible through that space from that eye's location. The optical geometry can perhaps be visualized more easily by substituting pinholes for the microlenses, as has actually been done for some demonstrations and special applications.
A photographic lens for which the focus is not adjustable is called a fixed-focus lens or sometimes focus-free. The focus is set at the time of lens design, and remains fixed. It is usually set to the hyperfocal distance, so that the depth of field ranges all the way down from half that distance to infinity, which is acceptable for most cameras used for capturing images of humans or objects larger than a meter.
The Sigma SD14 is a digital single-lens reflex camera produced by the Sigma Corporation of Japan. It is fitted with a Sigma SA mount which takes Sigma SA lenses.
In digital photography, the image sensor format is the shape and size of the image sensor.
A microlens is a small lens, generally with a diameter less than a millimetre (mm) and often as small as 10 micrometres (μm). The small size of the lenses means that a simple design can give good optical quality, but unwanted effects sometimes arise due to optical diffraction at the small features. A typical microlens may be a single element with one plane surface and one spherical convex surface to refract the light. Because micro-lenses are so small, the substrate that supports them is usually thicker than the lens, and this has to be taken into account in the design. More sophisticated lenses may use aspherical surfaces, and others may use several layers of optical material to achieve their design performance.
The Micro Four Thirds system is a standard released by Olympus Imaging Corporation and Panasonic in 2008, for the design and development of mirrorless interchangeable lens digital cameras, camcorders and lenses. Camera bodies are available from Blackmagic, DJI, JVC, Kodak, Olympus, OM System, Panasonic, Sharp, and Xiaomi. MFT lenses are produced by Cosina Voigtländer, Kowa, Kodak, Mitakon, Olympus, Panasonic, Samyang, Sharp, Sigma, SLR Magic, Tamron, Tokina, TTArtisan, Veydra, Xiaomi, Laowa, Yongnuo, Zonlai, Lensbaby, Venus Optics and 7artisans amongst others.
A digital microscope is a variation of a traditional optical microscope that uses optics and a digital camera to output an image to a monitor, sometimes by means of software running on a computer. A digital microscope often has its own built-in LED light source, and differs from an optical microscope in that there is no provision to observe the sample directly through an eyepiece. Since the image is focused directly on the digital sensor, the entire system is designed around the monitor image, and the optics for the human eye are omitted.
A time-of-flight camera, also known as time-of-flight sensor, is a range imaging camera system for measuring distances between the camera and the subject for each point of the image based on time-of-flight, the round trip time of an artificial light signal, as provided by a laser or an LED. Laser-based time-of-flight cameras are part of a broader class of scannerless LIDAR, in which the entire scene is captured with each laser pulse, as opposed to point-by-point with a laser beam such as in scanning LIDAR systems. Time-of-flight camera products for civil applications began to emerge around 2000, as the semiconductor processes allowed the production of components fast enough for such devices. The systems cover ranges of a few centimeters up to several kilometers.
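The per-pixel distance follows directly from the round-trip time of the light signal: d = c·t/2, halving because the light travels to the subject and back. A minimal sketch of this arithmetic (pulse-based operation is assumed; many commercial sensors instead measure phase shift of a modulated continuous wave):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_distance(round_trip_seconds):
    """Distance implied by the measured round-trip time of a light pulse."""
    return C * round_trip_seconds / 2.0

# A 10 ns round trip corresponds to roughly 1.5 m.
d = tof_distance(10e-9)
```

The halved product also illustrates the timing precision such sensors need: resolving 1 cm of depth requires distinguishing round-trip time differences of about 67 picoseconds.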
A USB microscope is a low-powered digital microscope which connects to a computer's USB port. Microscopes essentially the same as USB models are also available with other interfaces either in addition to or instead of USB, such as via WiFi. They are widely available at low cost for use at home or in commerce. Their cost varies in the range of tens to thousands of dollars. In essence, a USB microscope is a webcam with a high-powered macro lens, and generally uses reflected rather than transmitted light, using built-in LED light sources surrounding the lens. The camera is usually sensitive enough not to need additional illumination beyond normal ambient lighting. The camera attaches directly to the USB port of a computer without the need for an eyepiece, and the images are shown directly on the computer's display.
Lytro, Inc. was an American company founded in 2006 by Ren Ng which developed some of the first commercially available light-field cameras. Lytro began shipping its first-generation pocket-sized camera, capable of refocusing images after they are taken, in 8 GB and 16 GB versions on February 29, 2012. In April 2014, the company announced Lytro Illum, its second-generation camera for commercial and experimental photographers. The Lytro Illum was released at $1,600. The Illum has a permanently attached 30–250 mm f/2.0 lens and an articulated rear screen. In the fall of 2015, Lytro changed direction, announcing Immerge, a very-high-end VR video capture camera with a companion custom compute server. Immerge was expected to ship in 2016 and to be useful to studios trying to combine CGI-based VR with video VR.
Raytrix GmbH is a German company founded by Christian Perwass and Lennart Wietzke that was the first to create and market commercial plenoptic cameras. The R5 camera produces images of 1 megapixel resolution, while the R11 produces 3 megapixel images. Unlike Lytro, which initially targeted the consumer market, the main market of Raytrix's cameras is industrial and scientific applications where depth information of each pixel can be more useful.
Light field microscopy (LFM) is a scanning-free three-dimensional (3D) microscopic imaging method based on the theory of the light field. The technique allows sub-second (~10 Hz) large volumetric imaging with ~1 μm spatial resolution under conditions of weak scattering and semi-transparency, a combination that had not been achieved by other methods. As in traditional light field rendering, LFM imaging involves two steps: light field capture and processing. In most setups, a microlens array is used to capture the light field. Processing can be based on either of two representations of light propagation: the ray optics picture and the wave optics picture. The Stanford University Computer Graphics Laboratory published its first prototype LFM in 2006 and has continued to develop the technique since then.