Femto-photography

Schematic of the active CUSP system for 70-Tfps imaging

Femto-photography is a technique for recording the propagation of ultrashort pulses of light through a scene at very high speed (up to 10¹³ frames per second). A femto-photograph is equivalent to the optical impulse response of a scene and has also been denoted by terms such as a light-in-flight recording [1] or a transient image. [2] [3] Femto-photography of macroscopic objects was first demonstrated using a holographic process in the 1970s by Nils Abramson at the Royal Institute of Technology (Sweden). [1] A research team at the MIT Media Lab led by Ramesh Raskar, together with contributors from the Graphics and Imaging Lab at the Universidad de Zaragoza, Spain, more recently achieved a significant increase in image quality using a streak camera synchronized to a pulsed laser and modified to obtain 2D images instead of just a single scanline. [4] [5]

In their publications, Raskar's team claims to be able to capture exposures so short that light traverses only 0.6 mm (corresponding to 2 picoseconds, or 2×10⁻¹² seconds) during the exposure period, [6] a figure that agrees with the nominal resolution of the Hamamatsu streak camera model C5680, [7] [8] on which their experimental setup is based. [9] Recordings made with the setup have received wide coverage in the mainstream media, including a presentation by Raskar at TEDGlobal 2012. [10] Furthermore, the team was able to demonstrate the reconstruction of unknown objects "around corners", i.e., outside the line of sight of both the light source and the camera, from femto-photographs. [9]
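
As a quick plausibility check on those figures, the distance light travels during one exposure follows directly from d = c·t; the sketch below involves no assumptions beyond the cited 2 ps exposure and reproduces the 0.6 mm value.

```python
# Distance light travels during a single 2-picosecond exposure: d = c * t.
c = 299_792_458              # speed of light, m/s
t = 2e-12                    # exposure time, 2 ps
d_mm = c * t * 1e3           # distance in millimetres
print(f"{d_mm:.2f} mm")      # ~0.60 mm, matching the figure cited above
```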

In 2013, researchers at the University of British Columbia demonstrated a computational technique that allows the extraction of transient images from time-of-flight sensor data without the need for ultrafast light sources or detectors. [11]
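
A minimal sketch of the underlying idea, not Heide et al.'s actual pipeline: each time-of-flight measurement correlates the unknown per-pixel transient response against a periodic reference, so measurements taken at several modulation frequencies and phases form a linear system that can be solved by regularized least squares. All names, sizes, and parameters below are illustrative.

```python
import numpy as np

# Illustrative sketch (not the authors' code): recover a per-pixel transient
# response x(t) from time-of-flight correlation measurements. Each measurement
# correlates x against a cosine reference, giving one row of a linear system.
n_bins = 64                               # discretised time bins of x(t)
t = np.linspace(0, 1, n_bins)             # normalised time axis
freqs = np.arange(1, 9)                   # modulation frequencies (assumed)
phases = [0, np.pi / 2]                   # two phase offsets per frequency

A = np.array([np.cos(2 * np.pi * f * t + p)    # measurement matrix
              for f in freqs for p in phases])

x_true = np.zeros(n_bins)                 # toy transient: two light paths
x_true[10], x_true[30] = 1.0, 0.4         # direct and indirect return
b = A @ x_true                            # simulated sensor measurements

# Tikhonov-regularised least squares keeps the underdetermined solve stable.
lam = 1e-2
x_rec = np.linalg.solve(A.T @ A + lam * np.eye(n_bins), A.T @ b)
print(np.argmax(x_rec))                   # strongest recovered return, near bin 10
```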

Other uses of the term

Prior to the aforementioned work, the term "femto-photography" had been used for certain proposed procedures in experimental nuclear physics. [12]

Related Research Articles

<span class="mw-page-title-main">Bokeh</span> Aesthetic quality of blur in the out-of-focus parts of an image

In photography, bokeh is the aesthetic quality of the blur produced in out-of-focus parts of an image, caused by circles of confusion. Bokeh has also been defined as "the way the lens renders out-of-focus points of light". Differences in lens aberrations and aperture shape cause very different bokeh effects. Some lens designs blur the image in a way that is pleasing to the eye, while others produce distracting or unpleasant blurring. Photographers may deliberately use a shallow focus technique to create images with prominent out-of-focus regions, accentuating their lens's bokeh.
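
The size of those blur circles can be estimated from thin-lens geometry; the sketch below uses the standard circle-of-confusion formula with illustrative lens parameters to show how aperture controls bokeh size.

```python
def coc_diameter_mm(f_mm, n_stop, focus_m, subject_m):
    """Thin-lens circle-of-confusion diameter at the sensor (mm) for a
    point at subject_m when the lens is focused at focus_m."""
    aperture = f_mm / n_stop                      # entrance pupil diameter
    f, s_f, s = f_mm, focus_m * 1000, subject_m * 1000
    return aperture * (f / (s_f - f)) * abs(s - s_f) / s

# An 85 mm portrait lens focused at 2 m renders a background point at 10 m
# as a much larger blur circle at f/1.8 than at f/8.
print(coc_diameter_mm(85, 1.8, 2.0, 10.0))   # ~1.68 mm
print(coc_diameter_mm(85, 8.0, 2.0, 10.0))   # ~0.38 mm
```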

<span class="mw-page-title-main">Streak camera</span>

A streak camera is an instrument for measuring the variation of a light pulse's intensity with time. Streak cameras are used to measure the pulse duration of some ultrafast laser systems and for applications such as time-resolved spectroscopy and LIDAR.

The light field is a vector function that describes the amount of light flowing in every direction through every point in space. The space of all possible light rays is given by the five-dimensional plenoptic function, and the magnitude of each ray is given by its radiance. Michael Faraday was the first to propose that light should be interpreted as a field, much like the magnetic fields on which he had been working. The phrase light field was coined by Andrey Gershun in a classic 1936 paper on the radiometric properties of light in three-dimensional space.
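
Written out, the five-dimensional plenoptic function assigns a radiance to every ray in space:

```latex
L \;=\; L(x,\, y,\, z,\, \theta,\, \phi)
```

where (x, y, z) is a position and (θ, φ) the direction of the ray through it; fixing all five arguments singles out one ray, whose magnitude is its radiance.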

<span class="mw-page-title-main">Computational photography</span> Set of digital image capture and processing techniques

Computational photography refers to digital image capture and processing techniques that use digital computation instead of optical processes. Computational photography can improve the capabilities of a camera, introduce features that were not possible at all with film-based photography, or reduce the cost or size of camera elements. Examples of computational photography include in-camera computation of digital panoramas, high-dynamic-range images, and light field cameras. Light field cameras use novel optical elements to capture three-dimensional scene information, which can then be used to produce 3D images, enhanced depth of field, and selective defocusing. Enhanced depth of field reduces the need for mechanical focusing systems. All of these features use computational imaging techniques.
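
As one concrete instance, a high-dynamic-range image can be assembled by merging differently exposed frames in linear radiance space. The sketch below is a minimal illustration, assuming already-linearised grayscale frames with values in [0, 1] and known exposure times.

```python
import numpy as np

def merge_hdr(frames, exposures):
    """Merge linearised exposures into one radiance map. Each frame is
    divided by its exposure time, then averaged with weights that favour
    mid-range pixels (extremes are likely clipped or noisy)."""
    acc = np.zeros_like(frames[0], dtype=np.float64)
    wsum = np.zeros_like(acc)
    for img, t in zip(frames, exposures):
        w = 1.0 - np.abs(img - 0.5) * 2          # hat weight, peak at 0.5
        acc += w * (img / t)                      # back-project to radiance
        wsum += w
    return acc / np.maximum(wsum, 1e-8)

# Usage: three frames shot at 1/400, 1/100 and 1/25 s.
frames = [np.random.rand(4, 4) for _ in range(3)]
radiance = merge_hdr(frames, [1/400, 1/100, 1/25])
```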

A high-speed camera is a device capable of capturing moving images with exposures of less than 1/1,000 second or frame rates in excess of 250 fps. It is used for recording fast-moving objects as photographic images onto a storage medium. After recording, the images stored on the medium can be played back in slow motion. Early high-speed cameras used film to record high-speed events, but were superseded by entirely electronic devices using an image sensor, typically recording over 1,000 fps onto DRAM, to be played back slowly for the study of transient phenomena.

<span class="mw-page-title-main">Non-photorealistic rendering</span> Style of rendering

Non-photorealistic rendering (NPR) is an area of computer graphics that focuses on enabling a wide variety of expressive styles for digital art, in contrast to traditional computer graphics, which focuses on photorealism. NPR is inspired by other artistic modes such as painting, drawing, technical illustration, and animated cartoons. NPR has appeared in movies and video games in the form of cel-shaded animation as well as in scientific visualization, architectural illustration and experimental animation.

<span class="mw-page-title-main">Light field camera</span> Type of camera that can also capture the direction of travel of light rays

A light field camera, also known as a plenoptic camera, is a camera that captures information about the light field emanating from a scene; that is, the intensity of light in a scene, and also the precise direction that the light rays are traveling in space. This contrasts with conventional cameras, which record only light intensity at various wavelengths.
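
One widely used consequence of recording ray direction is post-capture refocusing by shift-and-add: each sub-aperture view is translated in proportion to its position within the aperture, then the views are averaged. The sketch below assumes the light field has already been decoded into a (U, V, H, W) array of views; the array layout and the alpha parameter are illustrative.

```python
import numpy as np

def refocus(lightfield, alpha):
    """Shift-and-add refocusing of a decoded light field.
    lightfield: array of shape (U, V, H, W) of sub-aperture views.
    alpha: synthetic focal-plane parameter (0 leaves focus unchanged)."""
    U, V, H, W = lightfield.shape
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            dy = int(round(alpha * (u - U // 2)))   # shift proportional
            dx = int(round(alpha * (v - V // 2)))   # to aperture offset
            out += np.roll(lightfield[u, v], (dy, dx), axis=(0, 1))
    return out / (U * V)

# Usage: a toy 5x5 grid of 64x64 views, refocused to a nearer plane.
lf = np.random.rand(5, 5, 64, 64)
image = refocus(lf, alpha=1.5)
```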

<span class="mw-page-title-main">Autostereoscopy</span> Any method of displaying stereoscopic images without the use of special headgear or glasses

Autostereoscopy is any method of displaying stereoscopic images without requiring the viewer to wear special headgear, glasses, or anything else over the eyes. Because headgear is not required, it is also called "glasses-free 3D" or "glassesless 3D". Two broad approaches are currently used to accommodate motion parallax and wider viewing angles: eye tracking, and multiple views, so that the display does not need to sense where the viewer's eyes are located. Examples of autostereoscopic display technology include lenticular lenses, parallax barriers, and integral imaging, but notably not volumetric or holographic displays.
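
For the multiple-view approach, a simple two-view parallax-barrier panel can be driven by interleaving the left and right images column by column, so the barrier directs alternate columns to each eye. A minimal illustrative sketch (the two-view, column-wise layout is an assumption for clarity; real panels use more views and slanted layouts):

```python
import numpy as np

def interleave_two_views(left, right):
    """Column-interleave two views for a two-view parallax-barrier panel:
    even pixel columns carry the left image, odd columns the right."""
    out = left.copy()
    out[:, 1::2] = right[:, 1::2]
    return out

panel = interleave_two_views(np.zeros((480, 640)), np.ones((480, 640)))
```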

<span class="mw-page-title-main">Marc Levoy</span>

Marc Levoy is a computer graphics researcher and Professor Emeritus of Computer Science and Electrical Engineering at Stanford University, a vice president and Fellow at Adobe Inc., and formerly a Distinguished Engineer at Google. He is noted for pioneering work in volume rendering, light fields, and computational photography.

<span class="mw-page-title-main">Pat Hanrahan</span> American computer graphics researcher

Patrick M. Hanrahan is an American computer graphics researcher, the Canon USA Professor of Computer Science and Electrical Engineering in the Computer Graphics Laboratory at Stanford University. His research focuses on rendering algorithms, graphics processing units, as well as scientific illustration and visualization. He has received numerous awards, including the 2019 Turing Award.

<span class="mw-page-title-main">Seam carving</span>

Seam carving is an algorithm for content-aware image resizing, developed by Shai Avidan of Mitsubishi Electric Research Laboratories (MERL) and Ariel Shamir of the Interdisciplinary Center and MERL. It functions by establishing a number of seams (connected paths of least-important pixels) in an image, automatically removing seams to reduce image size or inserting seams to extend it, as in the sketch below. Seam carving also allows manually defining areas in which pixels may not be modified, and features the ability to remove whole objects from photographs.
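
A minimal sketch of the core dynamic program for grayscale images, using gradient magnitude as an illustrative energy function: a cumulative-energy table is filled top to bottom, and the cheapest vertical seam is traced back from the bottom row and removed.

```python
import numpy as np

def remove_vertical_seam(img):
    """Remove one minimum-energy vertical seam from a grayscale image."""
    h, w = img.shape
    energy = np.abs(np.gradient(img, axis=0)) + np.abs(np.gradient(img, axis=1))
    M = energy.copy()                      # cumulative minimum energy
    for y in range(1, h):
        for x in range(w):
            lo, hi = max(x - 1, 0), min(x + 2, w)
            M[y, x] += M[y - 1, lo:hi].min()
    # Trace the cheapest seam back up and delete one pixel per row.
    seam = [int(np.argmin(M[-1]))]
    for y in range(h - 2, -1, -1):
        x = seam[-1]
        lo = max(x - 1, 0)
        seam.append(lo + int(np.argmin(M[y, lo:min(x + 2, w)])))
    seam.reverse()
    return np.array([np.delete(row, x) for row, x in zip(img, seam)])

small = remove_vertical_seam(np.random.rand(6, 8))   # result is 6 x 7
```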

Brian A. Barsky is a professor at the University of California, Berkeley, working in computer graphics and geometric modeling as well as in optometry and vision science. He is a Professor of Computer Science and Vision Science and an Affiliate Professor of Optometry. He is also a member of the Joint Graduate Group in Bioengineering, an inter-campus program between UC Berkeley and UC San Francisco.

<span class="mw-page-title-main">Bokode</span> Data tags that are read out of focus

A bokode is a type of data tag which holds much more information than a barcode over the same area. They were developed by a team led by Ramesh Raskar at the MIT Media Lab. Bokodes are intended to be read by any standard digital camera, focusing at infinity. With this optical setup, the tiny code appears large enough to read. Bokodes are readable from different angles and from 4 metres (13 ft) away.

A 3D display is multiscopic if it projects more than two images out into the world, unlike conventional 3D stereoscopy, which simulates a 3D scene by displaying only two different views of it, each visible to only one of the viewer's eyes. Multiscopic displays can represent the subject as viewed from a series of locations, and allow each image to be visible only from a range of eye locations narrower than the average human interocular distance of 63 mm. As a result, not only does each eye see a different image, but different pairs of images are seen from different viewing locations.

<span class="mw-page-title-main">Ramesh Raskar</span>

Ramesh Raskar is a Massachusetts Institute of Technology associate professor and head of the MIT Media Lab's Camera Culture research group. Previously he worked as a senior research scientist at Mitsubishi Electric Research Laboratories (MERL) from 2002 to 2008. He holds 132 patents in computer vision, computational health, sensors, and imaging. He received the $500K Lemelson–MIT Prize in 2016, with the prize money to be used to launch REDX.io, a group platform for co-innovation in artificial intelligence. He is well known for inventing EyeNetra, EyeCatra, and EyeSelfie, for femto-photography, and for his TED talk on cameras that can see around corners.

Gradient domain image processing, also called Poisson image editing, is a type of digital image processing that operates on the differences between neighboring pixels, rather than on the pixel values directly. Mathematically, an image gradient represents the derivative of an image, so the goal of gradient domain processing is to construct a new image by integrating the gradient, which requires solving Poisson's equation.
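
A minimal sketch of that integration step, assuming a target gradient field (gx, gy) stored with forward differences and known boundary pixel values: Jacobi iteration on the discrete Poisson equation repeatedly sets each interior pixel to the average of its neighbours, corrected by the divergence of the desired gradients.

```python
import numpy as np

def integrate_gradients(gx, gy, boundary, iters=2000):
    """Reconstruct an image whose gradients approximate (gx, gy) by Jacobi
    iteration on the discrete Poisson equation. `boundary` supplies the
    fixed frame of pixel values (Dirichlet boundary condition)."""
    u = boundary.copy()
    div = np.zeros_like(u)                 # divergence of the gradient field
    div[1:-1, 1:-1] = (gx[1:-1, 2:] - gx[1:-1, 1:-1]
                       + gy[2:, 1:-1] - gy[1:-1, 1:-1])
    for _ in range(iters):
        u[1:-1, 1:-1] = 0.25 * (u[1:-1, 2:] + u[1:-1, :-2]
                                + u[2:, 1:-1] + u[:-2, 1:-1]
                                - div[1:-1, 1:-1])
    return u

# Usage: reintegrating an image's own gradients reproduces it, up to the
# slow convergence of plain Jacobi iteration.
img = np.random.rand(32, 32)
gx = np.zeros_like(img); gx[:, 1:] = img[:, 1:] - img[:, :-1]
gy = np.zeros_like(img); gy[1:, :] = img[1:, :] - img[:-1, :]
out = integrate_gradients(gx, gy, img)
```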

Epsilon photography is a form of computational photography wherein multiple images are captured with slightly varying camera parameters (such as aperture, exposure, focus, film speed, or viewpoint) for the purpose of enhanced post-capture flexibility. The term was coined by Prof. Ramesh Raskar. The technique was developed as an alternative to light field photography that requires no specialized equipment. Examples of epsilon photography include focal-stack photography, high-dynamic-range (HDR) photography, lucky imaging, multi-image panorama stitching, and confocal stereo. The common thread for all of these techniques is that multiple images are captured and combined into a composite of higher quality, offering richer color information, a wider field of view, a more accurate depth map, less noise and blur, or greater resolution.
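
A minimal sketch of one such composite, focal-stack merging: each output pixel is copied from whichever frame is locally sharpest, with the absolute Laplacian response as an illustrative sharpness measure.

```python
import numpy as np

def merge_focal_stack(frames):
    """All-in-focus composite: per pixel, keep the value from the frame
    with the highest local sharpness (absolute Laplacian response)."""
    stack = np.stack(frames)                       # (N, H, W) grayscale
    lap = np.abs(4 * stack
                 - np.roll(stack, 1, axis=1) - np.roll(stack, -1, axis=1)
                 - np.roll(stack, 1, axis=2) - np.roll(stack, -1, axis=2))
    best = np.argmax(lap, axis=0)                  # sharpest frame per pixel
    return np.take_along_axis(stack, best[None], axis=0)[0]

# Usage: merge three differently focused shots of the same scene.
shots = [np.random.rand(32, 32) for _ in range(3)]
sharp = merge_focal_stack(shots)
```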

<span class="mw-page-title-main">Light-in-flight imaging</span>

Light-in-flight imaging is a set of techniques for visualizing the propagation of light through different media.

Michael F. Cohen is an American computer scientist and researcher in computer graphics. He was a senior research scientist at Microsoft Research for 21 years until he joined Facebook Research in 2015. In 1998, he received the ACM SIGGRAPH CG Achievement Award for his work in developing radiosity methods for realistic image synthesis. He was elected a Fellow of the Association for Computing Machinery in 2007 for his "contributions to computer graphics and computer vision." In 2019, he received the ACM SIGGRAPH Steven A. Coons Award for Outstanding Creative Contributions to Computer Graphics for “his groundbreaking work in numerous areas of research—radiosity, motion simulation & editing, light field rendering, matting & compositing, and computational photography”.

<span class="mw-page-title-main">Coded exposure photography</span> Motion blur reduction technology

Coded exposure photography, also known as the flutter shutter technique, reduces the effects of motion blur in photography by mathematically coding the exposure. The key element of the process is the code that drives the shutter: during a single exposure, a simple computer opens and closes the shutter according to a randomized binary sequence, establishing a known relationship between the photon exposure of the light sensor and the code. The result is a coded blurred image that can be computationally inverted into a clear image using the corresponding algorithm.
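
A minimal sketch of the idea in one dimension, not the published implementation: opening and closing the shutter according to a pseudo-random binary code makes the motion-blur kernel broadband, so the inversion stays well conditioned and a sharp signal can be recovered by least-squares deconvolution. The code length, signal, and sizes below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
code = rng.integers(0, 2, 31).astype(float)     # fluttered shutter pattern
code[0] = 1                                      # shutter opens at t = 0

signal = np.zeros(256)                           # toy 1-D scene moving
signal[100:120] = 1.0                            # past the sensor

blurred = np.convolve(signal, code)              # coded motion blur

# Build the convolution matrix and invert by least squares; a plain box
# blur (all-ones code) would be far worse conditioned under noise than
# the broadband random code.
A = np.zeros((len(blurred), len(signal)))
for i, c in enumerate(code):
    A[i:i + len(signal), :] += c * np.eye(len(signal))
recovered, *_ = np.linalg.lstsq(A, blurred, rcond=None)
```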

References

  1. Abramson, Nils (1978). "Light-in-flight recording by holography". Optics Letters. 3 (4): 121–123. Bibcode:1978OptL....3..121A. doi:10.1364/OL.3.000121. PMID 19684717.
  2. Smith, Adam; James Skorupski; James Davis (2008). "Transient Rendering". Technical Report, School of Engineering, University of California Santa Cruz. UCSC-SOE-08-26. Retrieved 25 February 2023.
  3. Kirmani, A.; Hutchison, T.; Davis, J.; Raskar, R. (2009). "Looking around the corner using transient imaging". 2009 IEEE 12th International Conference on Computer Vision. pp. 159–166. CiteSeerX   10.1.1.308.3180 . doi:10.1109/ICCV.2009.5459160. ISBN   978-1-4244-4420-5. S2CID   3167340.
  4. Velten, Andreas; Lawson, Everett; Bardagjy, Andrew; Bawendi, Moungi; Raskar, Ramesh (2011-12-13). "Slow art with a trillion frames per second camera". ACM SIGGRAPH 2011 Posters. Web.media.mit.edu. p. 1. doi:10.1145/2037715.2037730. ISBN   9781450309714. S2CID   9641010 . Retrieved 2012-10-04.
  5. Velten, Andreas; Di Wu; Adrian Jarabo; Belen Masia; Christopher Barsi; Chinmaya Joshi; Everett Lawson; Moungi Bawendi; Diego Gutierrez; Ramesh Raskar (July 2013). "Femto-Photography: Capturing and Visualizing the Propagation of Light" (PDF). ACM Transactions on Graphics. 32 (4). doi:10.1145/2461912.2461928. hdl: 1721.1/82039 . S2CID   14478222 . Retrieved 21 November 2013.
  6. Velten, Andreas; Lawson, Everett; Bardagjy, Andrew; Bawendi, Moungi; Raskar, Ramesh (2011). "Slow art with a trillion frames per second camera". ACM SIGGRAPH 2011 Posters. Dl.acm.org. p. 1. doi:10.1145/2037715.2037730. ISBN   9781450309714. S2CID   9641010.
  7. Hamamatsu Corporation. "Universal Streak Camera C5680 Series - Measurements Ranging From X-Ray to Near Infrared With a Temporal Resolution of 2 ps" . Retrieved 2013-11-22.
  8. Information from alldatasheet.com
  9. Velten, Andreas; Thomas Willwacher; Otkrist Gupta; Ashok Veeraraghavan; Moungi G. Bawendi; Ramesh Raskar (20 March 2012). "Recovering three-dimensional shape around a corner using ultrafast time-of-flight imaging". Nature Communications. 3: 745. Bibcode:2012NatCo...3..745V. CiteSeerX 10.1.1.434.2312. doi:10.1038/ncomms1747. PMID 22434188. S2CID 16770765.
  10. "Ramesh Raskar: Imaging at a trillion frames per second". TEDGlobal 2012, Ted.com. Retrieved 2012-10-04.
  11. Heide, Felix; Matthias B. Hullin; James Gregson; Wolfgang Heidrich. "Low-Budget Transient Imaging using Photonic Mixer Devices". In: ACM Trans. Graph. 32(4) (Proc. SIGGRAPH 2013). Retrieved 22 November 2013.
  12. Nucleon hologram with exclusive leptoproduction. May 15–18, 2002. ISBN 9789812382559. Retrieved 4 October 2012.