Ramesh Raskar

Ramesh Raskar in 2013
Born: 1970
Citizenship: Indian
Alma mater: University of North Carolina at Chapel Hill; Government College of Engineering Pune (COEP), University of Pune; Purushottam English School, Nashik
Known for: Shader lamps, femto-photography, CORNAR, computational photography, HR3D, EyeNetra, StreetAddressForAll
Awards: TR100, Lemelson–MIT Prize, ACM SIGGRAPH Achievement Award (2017)
Fields: Computer science
Institutions: Massachusetts Institute of Technology
Doctoral advisors: Henry Fuchs and Greg Welch

Ramesh Raskar is an associate professor at the Massachusetts Institute of Technology and head of the MIT Media Lab's Camera Culture research group. [2] [3] [4] He previously worked as a senior research scientist at Mitsubishi Electric Research Laboratories (MERL) from 2002 to 2008. [5] He holds 132 patents in computer vision, computational health, sensors, and imaging. [6] [7] In 2016 he received the $500,000 Lemelson–MIT Prize, [8] which he planned to use to launch REDX.io, a group platform for co-innovation in artificial intelligence. [9] He is known for inventing EyeNetra (a mobile device for measuring eyeglass prescriptions), EyeCatra (cataract screening), EyeSelfie (retinal imaging), and femto-photography (imaging at a trillion frames per second),[citation needed] and for his TED talk on cameras that can see around corners. [10]

In February 2020, Raskar and his team launched Private Kit: SafePaths, a public health tool for contact tracing during the COVID-19 pandemic. He is also the founder and chief scientist of PathCheck. He is a co-founder of Akasha.im, which was acquired by the Alphabet spin-off Intrinsic. [11]

Early life and education

Raskar was born in Nashik, India, and completed his undergraduate engineering education at the College of Engineering, Pune. [12] [13] He received his PhD from the University of North Carolina at Chapel Hill in 2002. [14] [15]

Mitsubishi Electric Research Laboratories

Raskar joined Mitsubishi Electric Research Laboratories in 2002. [16] His contributions to computer vision and imaging earned him a place on the TR100 list and the Global Indus Technovator Award, both in 2004. [17] [18]

MIT Media Lab

Raskar joined the MIT Media Lab in 2008. [19] Together with collaborators, he developed a computational display technology that allows observers with refractive errors, cataracts, and certain other eye disorders to perceive a focused image on a screen without wearing corrective spectacles. The technology combines a light field display with customized filtering algorithms that pre-distort the presented content for the individual observer, as illustrated by the sketch below. [20] [21]
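The published method [20] [21] operates on a 4D light field tailored to the viewer's measured aberrations; the following is only a rough, simplified illustration of the pre-distortion idea, using a 2D image and an assumed Gaussian point-spread function (PSF) standing in for the eye's defocus blur (all function names here are hypothetical, not the authors' code).

import numpy as np

def gaussian_psf(size=15, sigma=2.0):
    # Toy stand-in for the eye's defocus blur: a normalized 2D Gaussian kernel.
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return psf / psf.sum()

def psf_to_otf(psf, shape):
    # Embed the kernel in a full-size array with its center at (0, 0) so the
    # frequency-domain filter does not translate the image, then transform.
    padded = np.zeros(shape)
    padded[:psf.shape[0], :psf.shape[1]] = psf
    padded = np.roll(padded, (-(psf.shape[0] // 2), -(psf.shape[1] // 2)), axis=(0, 1))
    return np.fft.fft2(padded)

def predistort(image, psf, snr=100.0):
    # Wiener (regularized inverse) pre-filter: boost the frequencies that the
    # PSF attenuates, so that blurring the result with that PSF roughly
    # restores the original image.
    H = psf_to_otf(psf, image.shape)
    wiener = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)
    out = np.real(np.fft.ifft2(np.fft.fft2(image) * wiener))
    return np.clip(out, 0.0, 1.0)  # a physical display cannot emit negative light

# Example: pre-distort a synthetic test pattern for a sigma = 2 pixel blur.
img = np.zeros((128, 128))
img[32:96, 32:96] = 1.0
compensated = predistort(img, gaussian_psf())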

His lab has produced extreme high-speed images using a femto-camera that captures roughly one trillion frames per second. [22] The group has also developed a camera that can see around corners using pulses of laser light. [23]

Juliett Fiss covered his role as the catalyst behind the SIGGRAPH NEXT program at SIGGRAPH 2015 in Los Angeles. [24]

Raskar was awarded the 2017 Computer Graphics Achievement Award by ACM SIGGRAPH for his contributions to computational photography and light transport and their applications for social impact. [25]

He has been influential in deploying research ideas in the real world. Startups created by members of his Camera Culture research group include EyeNetra.com (ophthalmic tests), Photoneo (high-speed 3D sensing), Labby (AI for food testing), Lumii (novel printing for 3D imagery), LensBricks (computer vision with computational imaging), and Tesseract (personalized displays), among others. Non-profits emerging from his efforts include REDX.io (AI for social impact), MIT Emerging Worlds, LVP-MITra, REDX-WeSchool, and DigitalImpactSquare.

He serves as an AI and health expert on the expert commission of the $3.5 billion Fondation Botnar.

J. J. Abrams and Ramesh Raskar at the MIT Media Lab, 2012

Philosophies on innovation

Raskar has presented a series of talks and workshops on innovation processes.

These include his Idea Hexagon, "How to give an engaging talk", "How to prepare for a thesis", "How to write a paper", and the Spot-Probe method for problem-solution identification. In 2019, he delivered the doctoral hooding commencement address at UNC Chapel Hill. [26]

Key ideas from his interview with the Lemelson Foundation are as follows.

Raskar's Idea Hexagon framework depicts how to generate new ideas from a given central idea 'X' using six formulas.

See the world in a new or different way, and great things will happen. The next generation of young inventors will then spot a whole new set of problems and probe for solutions that no one can begin to predict. [27]

Philosophy of DAPS/DOPS and its global impact

In a recent talk, Raskar said: "Instead of apps, let’s think about DAPS (Digital Applications for Physical Services) or DOPS. If you want to make it broader, we can have DOPS (Digital Opportunities for Physical Services). With DOPS and DAPS we have an opportunity to impact the physical world in areas where we simply couldn’t before." [28]

REDX.io

Raskar's philosophy of 'Learn, Think and Apply' led him to form the REDX.io platform, whose goal is to promote peer-to-peer learning and peer-to-peer problem solving in a more systematic way. REDX labs work on themes including wearables, agriculture, cameras, health, the unorganized sector, satellite imaging, machine learning, mobile, social graphs, crowdsourcing, and sensors. The labs are well-funded physical spaces in which innovators work on critical problems: REDX Mumbai is funded by the Tata Trusts, DISQ in Nashik is funded by the multibillion-dollar TCS Foundation, and the REDX lab in Brazil is funded by a local trust. REDX clubs operate as non-profit organizations, and innovators and their solutions have the opportunity to interact with other REDX clubs and work in REDX labs worldwide. The onboarding process to become a REDX club includes a 10-week course, appointing a board and an academic advisor, establishing a community coalition, and recruiting innovators and mentors. Clubs receive certification directly from Raskar. [29]

Awards and fellowships

Related Research Articles

The Lemelson–MIT Program awards several prizes yearly to inventors in the United States. The largest is the Lemelson–MIT Prize which was endowed in 1994 by Jerome H. Lemelson, funded by the Lemelson Foundation, and is administered through the School of Engineering at the Massachusetts Institute of Technology. The winner receives $500,000, making it the largest cash prize for invention in the U.S.

MIT Media Lab: Research laboratory at the Massachusetts Institute of Technology

The MIT Media Lab is a research laboratory at the Massachusetts Institute of Technology that grew out of MIT's Architecture Machine Group in the School of Architecture. Its research is not restricted to fixed academic disciplines, but draws from technology, media, science, art, and design. As of 2014, the Media Lab's research groups covered topics including neurobiology, biologically inspired fabrication, socially engaging robots, emotive computing, bionics, and hyperinstruments.

The light field is a vector function that describes the amount of light flowing in every direction through every point in space. The space of all possible light rays is given by the five-dimensional plenoptic function, and the magnitude of each ray is given by its radiance. Michael Faraday was the first to propose that light should be interpreted as a field, much like the magnetic fields on which he had been working. The phrase light field was coined by Andrey Gershun in a classic 1936 paper on the radiometric properties of light in three-dimensional space.
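In standard notation (general background, not specific to this article), the plenoptic function assigns a radiance to every position and direction; in free space, where radiance is constant along each ray, it reduces to the four-dimensional light field of the common two-plane parameterization:

L = L(x, y, z, \theta, \phi) \quad \text{(5D plenoptic function)}, \qquad L_F = L_F(u, v, s, t) \quad \text{(4D light field)}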

Computational photography: Set of digital image capture and processing techniques

Computational photography refers to digital image capture and processing techniques that use digital computation instead of optical processes. Computational photography can improve the capabilities of a camera, introduce features that were not possible at all with film-based photography, or reduce the cost or size of camera elements. Examples of computational photography include in-camera computation of digital panoramas, high-dynamic-range (HDR) images, and light field cameras. Light field cameras use novel optical elements to capture three-dimensional scene information, which can then be used to produce 3D images, enhanced depth of field, and selective de-focusing. Enhanced depth of field reduces the need for mechanical focusing systems. All of these features use computational imaging techniques.
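As a rough illustration of the kind of in-camera computation described above (an illustrative sketch only, with assumed function names and a simple tent weighting, not any particular camera's pipeline), the following merges a bracketed exposure stack into a single high-dynamic-range radiance estimate:

import numpy as np

def merge_hdr(images, exposure_times, gamma=2.2):
    # Merge same-sized LDR frames (pixel values in [0, 1]) shot at different
    # exposure times into one HDR radiance map. This sketch assumes a gamma
    # curve rather than recovering the true camera response function.
    num = np.zeros_like(images[0], dtype=np.float64)
    den = np.zeros_like(images[0], dtype=np.float64)
    for img, t in zip(images, exposure_times):
        linear = img.astype(np.float64) ** gamma   # undo the assumed gamma encoding
        weight = 1.0 - np.abs(2.0 * img - 1.0)     # trust mid-tones, not clipped pixels
        num += weight * linear / t                 # per-frame radiance estimate
        den += weight
    return num / np.maximum(den, 1e-8)             # weighted-average radiance

# Example with three synthetic exposures of the same scene.
radiance = np.random.rand(64, 64)
shots = [np.clip(radiance * t, 0.0, 1.0) ** (1.0 / 2.2) for t in (0.5, 1.0, 2.0)]
hdr = merge_hdr(shots, [0.5, 1.0, 2.0])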

A high-speed camera is a device capable of capturing moving images with exposures of less than 1/1,000 second or frame rates in excess of 250 fps. It is used for recording fast-moving objects as photographic images onto a storage medium. After recording, the images stored on the medium can be played back in slow motion. Early high-speed cameras used photographic film to record high-speed events, but they have been superseded by entirely electronic devices using an image sensor, typically recording over 1,000 fps onto DRAM, to be played back slowly for the scientific study of transient phenomena.

Robert S. Langer: American scientist

Robert Samuel Langer Jr. FREng is an American biotechnologist, businessman, chemical engineer, chemist, and inventor. He is one of the twelve Institute Professors at the Massachusetts Institute of Technology.

Ashok Gadgil

Ashok Gadgil is the Andrew and Virginia Rudd Family Foundation Distinguished Chair and Professor of Safe Water and Sanitation at the University of California, Berkeley. He is a Faculty Senior Scientist at Lawrence Berkeley National Laboratory and has served as director of its Energy and Environmental Technologies Division.

Autostereoscopy: Any method of displaying stereoscopic images without the use of special headgear or glasses

Autostereoscopy is any method of displaying stereoscopic images without the use of special headgear, glasses, or other vision-altering equipment on the part of the viewer. Because headgear is not required, it is also called "glasses-free 3D" or "glassesless 3D". There are two broad approaches currently used to accommodate motion parallax and wider viewing angles: eye-tracking, and multiple views so that the display does not need to sense where the viewer's eyes are located. Examples of autostereoscopic display technology include lenticular lenses, parallax barriers, and integral imaging, but notably not volumetric or holographic displays.

The Lemelson Foundation is an American 501(c)(3) private foundation. It was started in 1993 by Jerome H. Lemelson and his wife Dorothy. The foundation held total net assets of US$444,124,049 at the end of 2020 and US$484,432,021 at the end of 2021.

Bokode: Data tags that are read out of focus

A bokode is a type of data tag which holds much more information than a barcode over the same area. They were developed by a team led by Ramesh Raskar at the MIT Media Lab. Bokodes are intended to be read by any standard digital camera, focusing at infinity. With this optical setup, the tiny code appears large enough to read. Bokodes are readable from different angles and from 4 metres (13 ft) away.

A 3D display is multiscopic if it projects more than two images out into the world, unlike conventional 3D stereoscopy, which simulates a 3D scene by displaying only two different views of it, each visible to only one of the viewer's eyes. Multiscopic displays can represent the subject as viewed from a series of locations, and allow each image to be visible only from a range of eye locations narrower than the average human interocular distance of 63 mm. As a result, not only does each eye see a different image, but different pairs of images are seen from different viewing locations.

NETRA is a mobile eye diagnostic device developed at the MIT Media Lab, consisting of a clip-on eyepiece and a software app for smartphones. The co-inventors include Ramesh Raskar and Vitor Pamplona. It can be seen as the inverse of expensive Shack-Hartmann sensors. NETRA allows for the early, low-cost diagnosis of the most common refractive disorders. The subject looks into the device and aligns patterns on the display. By repeating this procedure for eight meridians, the required refractive correction is computed (see the sketch below). NETRA exploits the fact that aberrations can be expressed using only a few parameters to create an easier user-interaction approach. Leveraging mobile connectivity, the system can transmit test data to appropriate facilities for immediate action, aggregate data for use in analysis, or instruct a separate machine to automatically dispense spectacles.
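For a sphero-cylindrical eye, the refractive power along a meridian at angle theta follows M(theta) = S + C * sin^2(theta - axis). A standard way to recover sphere S, cylinder C, and axis from several meridional measurements (eight in NETRA's case) is a linear least-squares fit of that curve; the sketch below shows such a generic fit (hypothetical function names, not necessarily NETRA's internal algorithm).

import numpy as np

def fit_sphero_cylinder(meridians_deg, powers_diopters):
    # Least-squares fit of M(theta) = S + C*sin^2(theta - axis) to measured
    # meridional powers. Uses sin^2(x) = (1 - cos 2x)/2 to linearize the model
    # as M = a + b*cos(2*theta) + c*sin(2*theta), then recovers (S, C, axis).
    theta = np.radians(np.asarray(meridians_deg, dtype=float))
    M = np.asarray(powers_diopters, dtype=float)
    A = np.column_stack([np.ones_like(theta), np.cos(2 * theta), np.sin(2 * theta)])
    a, b, c = np.linalg.lstsq(A, M, rcond=None)[0]
    C = -2.0 * np.hypot(b, c)                    # minus-cylinder convention
    S = a - C / 2.0
    axis = np.degrees(0.5 * np.arctan2(c, b)) % 180.0
    return S, C, axis

# Example: eight meridians for an eye with S = -1.00 D, C = -0.50 D, axis = 30 degrees.
angles = np.arange(0.0, 180.0, 22.5)
measured = -1.0 + (-0.5) * np.sin(np.radians(angles - 30.0)) ** 2
print(fit_sphero_cylinder(angles, measured))     # approximately (-1.0, -0.5, 30.0)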

Femto-photography

Femto-photography is a technique for recording the propagation of ultrashort pulses of light through a scene at a very high speed (up to 10¹³ frames per second). A femto-photograph is equivalent to an optical impulse response of a scene and has also been denoted by terms such as a light-in-flight recording or transient image. Femto-photography of macroscopic objects was first demonstrated using a holographic process in the 1970s by Nils Abramson at the Royal Institute of Technology (Sweden). A research team at the MIT Media Lab led by Ramesh Raskar, together with contributors from the Graphics and Imaging Lab at the Universidad de Zaragoza, Spain, more recently achieved a significant increase in image quality using a streak camera synchronized to a pulsed laser and modified to obtain 2D images instead of just a single scanline.

John Werner: American nonprofit executive

John Werner is the founder of Ideas in Action, Inc. (IIA) and Managing Director at Link Ventures. He was previously a vice president at an augmented reality company. He is also the founding Managing Director of the MIT Media Lab's Emerging Worlds Special Interest Group (SIG) and the former Head of Innovation and New Ventures for Ramesh Raskar's Camera Culture Group at the MIT Media Lab. He is one of the founding members of the non-profit organization Citizen Schools and the curator of TEDxBeaconStreet and, with Daniela Rus, TEDxMIT, both independent events licensed by TED as part of TEDx. He started the first AR in Action augmented reality conference ("ARIA") at the MIT Media Lab in January 2017 and the Blockchain+AI+Human = Magic Summit, now called Imagination in Action, at MIT and Davos, which he curates with Professor Sandy Pentland.

Epsilon photography is a form of computational photography wherein multiple images are captured with slightly varying camera parameters (such as aperture, exposure, focus, film speed, and viewpoint) for the purpose of enhanced post-capture flexibility. The term was coined by Prof. Ramesh Raskar. The technique has been developed as an alternative to light field photography and requires no specialized equipment. Examples of epsilon photography include focal-stack photography, high-dynamic-range (HDR) photography, lucky imaging, multi-image panorama stitching, and confocal stereo. The common thread in all of these imaging techniques is that multiple images are captured and combined into a composite image of higher quality, for example with richer color information, a wider field of view, a more accurate depth map, less noise or blur, or greater resolution, as sketched below.
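As an illustration of this capture-many-then-composite idea (a generic sketch with hypothetical function names, not a method attributed to any specific paper), the following merges a focal stack into one all-in-focus image by picking, for each pixel, the frame with the highest local sharpness:

import numpy as np
from scipy.ndimage import laplace, uniform_filter

def all_in_focus(stack):
    # Merge a focal stack (list of same-shaped grayscale frames) by choosing,
    # per pixel, the frame whose local Laplacian energy (a sharpness proxy)
    # is largest.
    frames = np.stack([np.asarray(f, dtype=np.float64) for f in stack])    # (N, H, W)
    sharpness = np.stack([uniform_filter(laplace(f) ** 2, size=9) for f in frames])
    best = np.argmax(sharpness, axis=0)                                    # (H, W) index map
    return np.take_along_axis(frames, best[None, ...], axis=0)[0]

# Example: two synthetic frames, each sharp in a different half of the image.
sharp = np.random.rand(64, 64)
blurred = uniform_filter(sharp, size=5)
frame_a = np.where(np.arange(64)[None, :] < 32, sharp, blurred)
frame_b = np.where(np.arange(64)[None, :] < 32, blurred, sharp)
merged = all_in_focus([frame_a, frame_b])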

Light-in-flight imaging

Light-in-flight imaging is a set of techniques for visualizing the propagation of light through different media.

Jay Saul Silver is an electrical engineer and toy inventor from Cocoa Beach, Florida. Silver is the Founder and CEO of JoyLabz and MaKey MaKey and was the first-ever Maker Research Scientist at Intel.

Hao Li: American computer scientist and university professor

Hao Li is a computer scientist, innovator, and entrepreneur from Germany, working in the fields of computer graphics and computer vision. He is co-founder and CEO of Pinscreen, Inc, as well as associate professor of computer vision at the Mohamed Bin Zayed University of Artificial Intelligence (MBZUAI). He was previously a Distinguished Fellow at the University of California, Berkeley, an associate professor of computer science at the University of Southern California, and former director of the Vision and Graphics Lab at the USC Institute for Creative Technologies. He was also a visiting professor at Weta Digital and a research lead at Industrial Light & Magic / Lucasfilm.

Alexis Lewis

Alexis Lewis is an American inventor, repeat science fair winner, and public speaker. She is best known for her advocacy of invention education and her humanitarian inventions. She has given talks at the White House, the Smithsonian, SXSW, the National Maker Faire, the 2018 Social Innovation Summit, and TEDx events. Lewis has also been a repeat finalist or winner in national and international science and invention fairs, including the 2012 Broadcom MASTERS Challenge. In the course of the Spark!Lab Invent It Challenge, Lewis won pro-bono patent counsel; she holds a patent on the "Rescue Travois", granted in 2015, and has one pending on the "Emergency Mask Pod".

Todor G. Georgiev is a Bulgarian American research scientist and inventor, best known for his work on plenoptic cameras. He is the author of the Healing Brush tool in Adobe Photoshop, and, as of 2020, is a principal scientist at Adobe in San Jose, California. Georgiev's work has been cited 7700 times as of 2020. As an inventor, he has at least 89 patents to his name.

References

  1. "MIT and the shortcut to Nirvana". The Boston Globe. Retrieved 12 September 2016.
  2. "BBC News - Super-camera shows how light moves". Bbc.co.uk. 1 January 1970. Retrieved 2 August 2013.
  3. "MIT experts embark on health-mapping scheme". The Times of India. 29 August 2015. Retrieved 12 September 2016.
  4. "Exclusive: MIT Professor Ramesh Raskar busts biggest Startup Myths". The Business Insider. 8 February 2016. Retrieved 12 September 2016.
  5. "In Profile: Ramesh Raskar". MIT News. Retrieved 12 September 2016.
  6. Raskar, Ramesh. "Patent portfolio". USPTO.
  7. Raskar, Ramesh. "Patent Timeline" (PDF). Lemelson-MIT.
  8. "Imaging Scientist and Social Impact Inventor Awarded $500,000 Lemelson-MIT Prize". Lemelson-MIT Prize.
  9. "This Winner of a Big Foundation Prize Aims to Boost Other "Impact Inventors"".
  10. Raskar, Ramesh (26 July 2012), Imaging at a trillion frames per second, retrieved 24 January 2018
  11. "Blog — A new chapter for Intrinsic". Intrinsic. Retrieved 20 July 2022.
  12. "In Profile: Ramesh Raskar".
  13. "MIT and the shortcut to Nirvana". The Boston Globe. Retrieved 12 September 2016.
  14. "In Profile: Ramesh Raskar". MIT News. Retrieved 12 September 2016.
  15. "Ramesh Raskar to give 2019 Doctoral Hooding Ceremony keynote address". UNC. 4 March 2019. Retrieved 5 January 2023.
  16. "In Profile: Ramesh Raskar". MIT News. Retrieved 12 September 2016.
  17. "MIT Professor Ramesh Raskar busts biggest Startup Myths". Business Insider India. Retrieved 12 September 2016.
  18. "Technovator Awards". MIT.
  19. "In Profile: Ramesh Raskar". MIT News. Retrieved 12 September 2016.
  20. Pamplona, Vitor F.; Oliveira, Manuel M.; Aliaga, Daniel G.; Raskar, Ramesh (2012). "Tailored Displays to Compensate for Visual Aberrations". ACM Transactions on Graphics. 31 (4): 1–12. doi:10.1145/2185520.2185577. S2CID 914854.
  21. Huang, Fu-Chung; Wetzstein, Gordon; Barsky, Brian A.; Raskar, Ramesh (2014). "Eyeglasses-free display". ACM Transactions on Graphics. 33 (4): 1–12. doi:10.1145/2601097.2601122. hdl:1721.1/92749. S2CID 12347886.
  22. "Ramesh Raskar | Profile on". Ted.com. Retrieved 2 August 2013.
  23. Jones, Orion (30 September 2011). "Ramesh Raskar: An Immigrant's Story | IdeaFeed". Big Think. Retrieved 2 August 2013.
  24. "What is Siggraph NEXT". 1 January 1970. Retrieved 30 October 2013.
  25. "2017 CG Achievement Award: Ramesh Raskar".
  26. UNC-Chapel Hill (17 May 2019), Ramesh Raskar | 2019 Doctoral Hooding Ceremony Keynote Address | UNC-Chapel Hill, retrieved 18 May 2019
  27. "A Conversation with Ramesh Raskar". 5 April 2017. Retrieved 25 April 2017.
  28. "How to impact on billions of lives through disruptive innovations".
  29. "This Winner of a Big Foundation Prize Aims to Boost Other "Impact Inventors"".
  30. "MIT Professor Ramesh Raskar busts biggest Startup Myths". Business Insider India. Retrieved 12 September 2016.
  31. "Technovator Awards". MIT.
  32. "Six junior faculty named Sloan Research Fellows". MIT News. Retrieved 12 September 2016.
  33. "DARPA Young Faculty Award". Northeastern University.
  34. "Awards Info". DARPA. Retrieved 12 September 2016.
  35. "List of DARPA Award recipients" (PDF). Retrieved 12 September 2016.
  36. "Innovating for Billions: Inverting the Research and Funding Models". Stanford University. Retrieved 12 September 2016.
  37. "Intro of Dr Raskar". GES. Global Entrepreneur sUMMIT. Retrieved 12 September 2016.
  38. "Raskar was awarded PharmaVOICE 100 his positive contributions to the life-sciences industry".
  39. "Ramesh Raskar Intro". Tata Center at MIT. Tata Center for Technology and Design. Retrieved 12 September 2016.
  40. "Inverting the Venture Model, Ramesh Raskar's REDX platform- Congrats to Ramesh Raskar for receiving the $500,000 Lemelson-MIT Prize Invention-Imaging scientist and inventor sets sights on launching peer-to-peer invention platforms for global impact". John Werner. The Medium. Retrieved 14 September 2016.
  41. "Ramesh Raskar Inventor of Femto-photography; Awarded $500,000 Lemelson-MIT Prize". Lemelson MIT. Massachusetts Institute of Technology. Retrieved 14 September 2016.
  42. "2017 CG Achievement Award: Ramesh Raskar".
  43. "The Jack Dangermond Award". isprs.org. Retrieved 15 May 2019.