Marc Levoy

Born: November 2, 1953
Nationality: American
Alma mater: Cornell University; University of North Carolina at Chapel Hill
Known for: Volume rendering, light fields, 3D scanning, the Stanford Bunny, computational photography
Awards: SIGGRAPH Computer Graphics Achievement Award (1996), ACM Fellow (2007), National Academy of Engineering (2022)
Fields: Computer graphics, computer vision
Institutions: Stanford University, Google, Adobe Inc.

Marc Levoy is a computer graphics researcher and Professor Emeritus of Computer Science and Electrical Engineering at Stanford University, a vice president and Fellow at Adobe Inc., and (until 2020) a Distinguished Engineer at Google. He is noted for pioneering work in volume rendering, light fields, and computational photography.

Education and early career

Levoy first studied computer graphics as an architecture student under Donald P. Greenberg at Cornell University, receiving his B.Arch. in 1976 and M.S. in architecture in 1978. As part of his studies he developed a 2D computer animation system, for which he received the Charles Goodwin Sands Memorial Medal. He and Greenberg suggested to Disney that it use computer graphics in producing animated films, but the idea was rejected by several of the Nine Old Men who were still active. They were, however, able to convince Hanna-Barbera Productions to use their system for television animation. Despite initial opposition from animators, the system reduced labor costs, helped save the company, and remained in use until 1996.[1] Levoy served as director of the Hanna-Barbera Animation Laboratory from 1980 to 1983.

He then pursued graduate study in computer science under Henry Fuchs at the University of North Carolina at Chapel Hill, receiving his Ph.D. in 1989. While there, he published several influential papers in the field of volume rendering, developing new algorithms (such as volume ray tracing), improving efficiency, and demonstrating applications of the technique.[2]

Teaching career

He joined the faculty of Stanford's Computer Science Department in 1990. In 1991, he received the National Science Foundation's Presidential Young Investigator Award. In 1994, he co-created the Stanford Bunny, which has become an icon of computer graphics. In 1996, he and Pat Hanrahan coauthored the paper "Light Field Rendering", which underlies many image-based rendering techniques in modern computer graphics. His lab also worked on applications of light fields, developing technologies such as a light-field camera and a light-field microscope, and on computational photography. (The phrase "computational photography" was first used by Steve Mann in 1995.[citation needed] It was re-coined and given a broader meaning by Levoy for a course he taught at Stanford in 2004[3] and a symposium he co-organized in 2005.[4])

Google

Levoy took a leave of absence from Stanford in 2011 to work at Google X as part of Project Glass. In 2014, he retired from Stanford to work full-time at Google, where until 2020 he led a team in Google Research[5] that worked broadly on cameras and photography. One of his projects was HDR+ mode[6] for Google Pixel smartphones.[7] In 2016, the French agency DxO gave the Pixel the highest rating it had ever given to a smartphone camera,[8] and did so again in 2017 for the Pixel 2.[9] His team also developed Portrait Mode, a single-camera background-defocus technology launched in October 2017 on the Pixel 2,[10] and Night Sight, a technology for taking handheld pictures without flash in very low light, launched in November 2018 on all generations of Pixel phones.[11] Finally, his team worked on underlying technologies for Project Jump,[12] a light field camera that captures stereo panoramic videos for VR headsets.[13] Although Levoy no longer teaches at Stanford, a course he taught on digital photography,[14] re-recorded at Google in 2016, is available online for free.[15]

Awards and honors

For his work in volume rendering, Levoy received the ACM SIGGRAPH Computer Graphics Achievement Award in 1996.[2] In 2007, he was inducted as a Fellow of the Association for Computing Machinery "for contributions to computer graphics".[16] In 2022 he was elected to the National Academy of Engineering "for contributions to computer graphics and digital photography".[17]

Related Research Articles

Rendering (computer graphics): Process of generating an image from a model

Rendering or image synthesis is the process of generating a photorealistic or non-photorealistic image from a 2D or 3D model by means of a computer program. The resulting image is referred to as the render. Multiple models can be defined in a scene file containing objects in a strictly defined language or data structure. The scene file contains geometry, viewpoint, texture, lighting, and shading information describing the virtual scene. The data contained in the scene file is then passed to a rendering program to be processed and output to a digital image or raster graphics image file. The term "rendering" is analogous to the concept of an artist's impression of a scene. The term "rendering" is also used to describe the process of calculating effects in a video editing program to produce the final video output.

Multi-exposure HDR capture: Technique to capture HDR images and videos

In photography and videography, multi-exposure HDR capture is a technique that creates high dynamic range (HDR) images by taking and combining multiple exposures of the same subject matter at different exposure levels. Combining multiple images in this way results in an image with a greater dynamic range than what would be possible by taking one single image. The technique can also be used to capture video by taking and combining multiple exposures for each frame of the video. The term "HDR" is used frequently to refer to the process of creating HDR images from multiple exposures. Many smartphones have an automated HDR feature that relies on computational imaging techniques to capture and combine multiple exposures.
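The combining step can be sketched as a weighted average of the exposures, assuming a linear sensor response. The hat-shaped weighting and the helper name `estimate_radiance` are illustrative simplifications, not any particular product's algorithm:

```python
def estimate_radiance(pixels, exposure_times):
    """Estimate scene radiance at one pixel from multiple exposures.

    pixels: pixel values in [0, 1] from a linear sensor, one per exposure.
    exposure_times: corresponding exposure times in seconds.
    Well-exposed values (near 0.5) are weighted most; clipped ones least.
    """
    num, den = 0.0, 0.0
    for z, t in zip(pixels, exposure_times):
        w = 1.0 - abs(2.0 * z - 1.0)   # "hat" weight: 0 at 0 or 1, 1 at 0.5
        num += w * (z / t)             # divide by exposure time -> radiance
        den += w
    return num / den if den > 0 else 0.0
```

Dividing each value by its exposure time puts all exposures on a common radiance scale, so a short exposure that catches highlights and a long exposure that catches shadows can be merged into one estimate.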

Anisotropic filtering: Method of enhancing the image quality of textures on surfaces of computer graphics

In 3D computer graphics, anisotropic filtering is a method of enhancing the image quality of textures on surfaces of computer graphics that are at oblique viewing angles with respect to the camera where the projection of the texture appears to be non-orthogonal.

Volume rendering: Representing a 3D-modeled object or dataset as a 2D projection

In scientific visualization and computer graphics, volume rendering is a set of techniques used to display a 2D projection of a 3D discretely sampled data set, typically a 3D scalar field.
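A minimal sketch of the compositing step at the heart of volume ray casting, one of the algorithms developed in this field: samples of color and opacity taken along a viewing ray are blended front to back until the accumulated opacity saturates. Scalar colors and the 0.999 cutoff are simplifications for illustration:

```python
def composite_ray(samples):
    """Front-to-back compositing of (color, opacity) samples along one ray.

    Each sample contributes in proportion to the transparency remaining
    after all samples in front of it have been accumulated.
    """
    color_acc = 0.0      # accumulated color (scalar for simplicity)
    alpha_acc = 0.0      # accumulated opacity
    for color, alpha in samples:
        color_acc += (1.0 - alpha_acc) * alpha * color
        alpha_acc += (1.0 - alpha_acc) * alpha
        if alpha_acc >= 0.999:   # early ray termination: ray is nearly opaque
            break
    return color_acc, alpha_acc
```

Running one such ray per output pixel through the sampled 3D field produces the 2D projection described above; early ray termination is a standard efficiency improvement.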

The light field is a vector function that describes the amount of light flowing in every direction through every point in space. The space of all possible light rays is given by the five-dimensional plenoptic function, and the magnitude of each ray is given by its radiance. Michael Faraday was the first to propose that light should be interpreted as a field, much like the magnetic fields on which he had been working. The phrase light field was coined by Andrey Gershun in a classic 1936 paper on the radiometric properties of light in three-dimensional space.
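In symbols, the plenoptic function gives the radiance along the ray through a point (x, y, z) in the direction (θ, φ); in free space, where radiance is constant along a ray, this reduces to the 4D light field of Levoy and Hanrahan's two-plane parameterization, in which a ray is named by its intersections (u, v) and (s, t) with two parallel planes:

```latex
L = L(x, y, z, \theta, \phi)
\quad\xrightarrow{\text{free space}}\quad
L = L(u, v, s, t)
```
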

Stanford bunny: Computer graphics 3D reference model

The Stanford bunny is a computer graphics 3D test model developed by Greg Turk and Marc Levoy in 1994 at Stanford University. The model consists of 69,451 triangles, with the data determined by 3D scanning a ceramic figurine of a rabbit. This figurine and others were scanned to test methods of range scanning physical objects.

Greg Turk is an American-born researcher in the field of computer graphics and a professor at the School of Interactive Computing in the College of Computing at the Georgia Institute of Technology. His paper "Zippered polygon meshes from range images", concerning the reconstruction of surfaces from point data, brought the "Stanford bunny", a frequently used example object in computer graphics research, into the CGI lexicon. Turk actually purchased the original Stanford Bunny and performed the initial scans on it. He is also known for his work on simplification of surfaces, and on reaction–diffusion-based texture synthesis. In 2008, Turk was the technical papers chair of SIGGRAPH 2008. In 2012, he was awarded the ACM Computer Graphics Achievement Award.

Non-photorealistic rendering: Style of rendering

Non-photorealistic rendering (NPR) is an area of computer graphics that focuses on enabling a wide variety of expressive styles for digital art, in contrast to traditional computer graphics, which focuses on photorealism. NPR is inspired by other artistic modes such as painting, drawing, technical illustration, and animated cartoons. NPR has appeared in movies and video games in the form of cel-shaded animation as well as in scientific visualization, architectural illustration and experimental animation.

High-dynamic-range rendering: Rendering of computer graphics scenes by using lighting calculations done in high dynamic range

High-dynamic-range rendering, also known as high-dynamic-range lighting, is the rendering of computer graphics scenes by using lighting calculations done in high dynamic range (HDR). This allows preservation of details that may be lost due to limiting contrast ratios. Video games and computer-generated movies and special effects benefit from this as it creates more realistic scenes than with more simplistic lighting models.

Light field camera: Type of camera that can also capture the direction of travel of light rays

A light field camera, also known as a plenoptic camera, is a camera that captures information about the light field emanating from a scene; that is, the intensity of light in a scene, and also the precise direction that the light rays are traveling in space. This contrasts with conventional cameras, which record only light intensity at various wavelengths.
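Because a light field camera records ray direction as well as intensity, images can be refocused after capture by shifting the sub-aperture views against one another and averaging. A toy 1D version of this shift-and-add idea, with integer shifts and wrap-around indexing as simplifying assumptions:

```python
def refocus_1d(light_field, shift):
    """Synthetic refocusing of a 1D light field by shift-and-add.

    light_field[u][s]: radiance sample for lens position u and sensor pixel s.
    Shifting each sub-aperture view by shift * (u - center) before averaging
    brings a chosen depth plane into focus.
    """
    n_u = len(light_field)
    n_s = len(light_field[0])
    out = [0.0] * n_s
    for u, view in enumerate(light_field):
        d = shift * (u - n_u // 2)          # per-view displacement
        for s in range(n_s):
            out[s] += view[(s + d) % n_s]   # wrap-around for this sketch
    return [v / n_u for v in out]
```

Features lying on the chosen depth plane line up across the shifted views and reinforce, while features at other depths are spread out, which is the post-capture analogue of defocus blur.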

Kurt Akeley is an American computer graphics engineer.

Gonioreflectometer

A gonioreflectometer is a device for measuring a bidirectional reflectance distribution function (BRDF).

Pat Hanrahan: American computer graphics researcher

Patrick M. Hanrahan is an American computer graphics researcher, the Canon USA Professor of Computer Science and Electrical Engineering in the Computer Graphics Laboratory at Stanford University. His research focuses on rendering algorithms, graphics processing units, as well as scientific illustration and visualization. He has received numerous awards, including the 2019 Turing Award.

Computer graphics (computer science): Sub-field of computer science

Computer graphics is a sub-field of computer science which studies methods for digitally synthesizing and manipulating visual content. Although the term often refers to the study of three-dimensional computer graphics, it also encompasses two-dimensional graphics and image processing.

Computer graphics: Graphics created using computers

Computer graphics deals with generating images and art with the aid of computers. Today, computer graphics is a core technology in digital photography, film, video games, digital art, cell phone and computer displays, and many specialized applications. A great deal of specialized hardware and software has been developed, with the displays of most devices being driven by computer graphics hardware. It is a vast and recently developed area of computer science. The phrase was coined in 1960 by computer graphics researchers Verne Hudson and William Fetter of Boeing. It is often abbreviated as CG, or typically in the context of film as computer generated imagery (CGI). The non-artistic aspects of computer graphics are the subject of computer science research.

Shree K. Nayar

Shree K. Nayar is an engineer and computer scientist known for his contributions to the fields of computer vision, computational imaging, and computer graphics. He is the T. C. Chang Professor of Computer Science in the School of Engineering at Columbia University. Nayar co-directs the Columbia Vision and Graphics Center and is the head of the Computer Vision Laboratory (CAVE), which develops advanced imaging and computer vision systems. Nayar also serves as a director of research at Snap Inc. He was elected a member of the US National Academy of Engineering in 2008 and the American Academy of Arts and Sciences in 2011 for his pioneering work on computational cameras and physics-based computer vision.

Gradient domain image processing, also called Poisson image editing, is a type of digital image processing that operates directly on the differences between neighboring pixels, rather than on the pixel values. Mathematically, an image gradient represents the derivative of an image, so the goal of gradient domain processing is to construct a new image by integrating the gradient, which requires solving Poisson's equation.
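A toy 1D version of that integration step: given a target gradient field and fixed boundary values, Jacobi iteration on the discrete Poisson equation recovers the signal. The function name and iteration count are illustrative choices, not part of any standard API:

```python
def integrate_gradient_1d(g, left, right, iters=2000):
    """Reconstruct a 1D signal from its target gradient g (g[i] ~ I[i+1]-I[i])
    and fixed boundary values, via Jacobi iteration on the discrete Poisson
    equation  I[i-1] - 2*I[i] + I[i+1] = g[i] - g[i-1].
    """
    n = len(g) + 1                      # signal has one more sample than g
    I = [left + (right - left) * i / (n - 1) for i in range(n)]  # linear guess
    for _ in range(iters):
        new = I[:]
        for i in range(1, n - 1):
            div = g[i] - g[i - 1]       # divergence of the guidance field
            new[i] = 0.5 * (I[i - 1] + I[i + 1] - div)
        I = new
    return I
```

The 2D case used in image editing replaces the three-point stencil with the four-neighbor Laplacian, but the structure is the same: match the output's gradients to a guidance field while honoring boundary constraints.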

Pixel Camera: Camera application developed by Google for Pixel devices

Pixel Camera, formerly Google Camera, is a camera phone application developed by Google for the Android operating system. Development of the application began in 2011 at the Google X research incubator, led by Marc Levoy, which was developing image fusion technology for Google Glass. It was publicly released for Android 4.4+ on Google Play on April 16, 2014. It was initially supported on all devices running Android 4.4 KitKat and higher, but in the following years became officially supported only on Google Pixel devices. The app was renamed Pixel Camera in October 2023, with the launch of the Pixel 8 and Pixel 8 Pro.

This is a glossary of terms relating to computer graphics.

Michael F. Cohen: American computer scientist

Michael F. Cohen is an American computer scientist and researcher in computer graphics. He is currently a Senior Fellow at Meta in their Generative AI Group. He was a senior research scientist at Microsoft Research for 21 years until he joined Facebook in 2015. In 1998, he received the ACM SIGGRAPH CG Achievement Award for his work in developing radiosity methods for realistic image synthesis. He was elected a Fellow of the Association for Computing Machinery in 2007 for his "contributions to computer graphics and computer vision." In 2019, he received the ACM SIGGRAPH Steven A. Coons Award for Outstanding Creative Contributions to Computer Graphics for "his groundbreaking work in numerous areas of research: radiosity, motion simulation & editing, light field rendering, matting & compositing, and computational photography".

References

  1. "1976 Charles Goodwin Sands Memorial Medal".
  2. "1996 SIGGRAPH Achievement Award". 19 October 2021.
  3. "Stanford University — CS 448 (2004)".
  4. "2005 Symposium on Computational Photography and Video".
  5. "Google Research".
  6. "HDR+: Low Light and High Dynamic Range photography in the Google Camera App". Google Research Blog. 2014.
  7. "HDR+". 18 October 2016.
  8. "Pixel smartphone camera review: At the top". DxOMark. 2016.
  9. "Google Pixel 2 reviewed: Sets new record for overall smartphone camera quality". DxOMark. 2017.
  10. Marc Levoy & Yael Pritch (October 17, 2017). "Portrait mode on the Pixel 2 and Pixel 2 XL smartphones".
  11. Marc Levoy & Yael Pritch (November 14, 2018). "Night Sight: Seeing in the Dark on Pixel Phones". Google AI Blog.
  12. "Google — Jump".
  13. Robert Anderson; David Gallup; Jonathan T. Barron; Janne Kontkanen; Noah Snavely; Carlos Hernandez Esteban; Sameer Agarwala; Steven M. Seitz (2016). "Jump: Virtual Reality Video". Proc. SIGGRAPH Asia (PDF). ACM.
  14. "Stanford University — CS 178 (2014)".
  15. Marc Levoy (2016). "Lectures on Digital Photography".
  16. "Marc Levoy – ACM Fellows (2007)". awards.acm.org. Retrieved 2018-12-09.
  17. "Marc Levoy - Member, National Academy of Engineering (2022)". www.nae.edu. Retrieved 2022-04-05.