PrimeSense

Company type Subsidiary of Apple Inc.
Industry Fabless semiconductor
Founded 2005
Defunct 2013
Fate Acquired by Apple Inc.
Headquarters Israel
Key people
  • Aviad Maizels (president and founder)
  • Alexander Shpunt (chief technology officer and co-founder)
  • Inon Beracha (CEO)
Products
  • PrimeSense 3D sensor
  • Carmine 1.08
  • Carmine 1.09 (short range)
  • Capri 1.25 (not yet available)
  • Carmine 1080 SoC
  • Capri 1200 SoC (not yet available)
  • NiTE middleware
Parent Apple Inc.
Website primesense.com

PrimeSense was an Israeli 3D sensing company based in Tel Aviv, with offices in Israel, North America, Japan, Singapore, Korea, China and Taiwan. Apple Inc. bought PrimeSense for $360 million on November 24, 2013.

History

PrimeSense was a fabless semiconductor company and provided products in the area of sensory inputs for consumer and commercial markets.

PrimeSense's technology was originally applied to gaming but later found use in other fields. [1] PrimeSense was best known for licensing the hardware design and chip used in Microsoft's Kinect motion-sensing system for the Xbox 360 in 2010. [2] The company was founded in 2005 to explore depth-sensing cameras, which it demonstrated to developers at the 2006 Game Developers Conference. Microsoft had been investigating 3D camera technology for its Xbox line of consoles on its own, and engaged with PrimeSense after the conference to help set the direction the technology needed to go to reach consumer-grade products, while Microsoft improved additional software aspects and incorporated machine learning to help with motion detection. [3]

On November 24, 2013, Apple Inc. confirmed the purchase of PrimeSense for $360 million. [4]

Technology

Light coding technology

PrimeSense's depth acquisition was enabled by "light coding" technology. The process coded the scene with near-IR structured light: the projected pattern returns distorted depending on the distance of each surface in the scene. A standard off-the-shelf CMOS image sensor then read the coded light back from the scene, and algorithms triangulated the distortions to extract 3D data. The product analyzed scenery in three dimensions in software, so that devices could interact with users. [5] [6]
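The triangulation at the heart of structured-light depth sensing can be sketched numerically. The projector and IR camera form a stereo pair, so depth follows the standard relation z = f · b / d. The focal length and baseline below are illustrative assumptions, not PrimeSense's actual calibration:

```python
import numpy as np

def depth_from_disparity(disparity_px, focal_length_px=580.0, baseline_m=0.075):
    """Recover depth (meters) from the pixel shift of a projected IR pattern.

    A dot projected onto a near surface appears shifted further (larger
    disparity) in the IR camera than the same dot on a far surface.
    Depth follows the triangulation relation z = f * b / d.
    """
    disparity_px = np.asarray(disparity_px, dtype=float)
    with np.errstate(divide="ignore"):
        return np.where(disparity_px > 0,
                        focal_length_px * baseline_m / disparity_px,
                        np.inf)

# With these assumed parameters, a dot shifted 10 px lies at 4.35 m
# and one shifted 58 px lies at 0.75 m.
print(depth_from_disparity([10.0, 58.0]))
```

Because depth varies inversely with disparity, resolution is finest at close range and degrades with distance, which is why such sensors quote a bounded working range.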

Products

PrimeSense system on a chip (SoC)

PrimeSense's SoCs, the Carmine (PS1080) and Capri (PS1200), merged the depth map derived from the CMOS image sensor with the image from the visible-light video sensor. The SoCs performed a registration process so that the color (RGB) and depth (D) information were properly aligned. [7] The light-coding infrared patterns were deciphered to produce a VGA-size depth image of the scene, and the SoC delivered visible video, depth, and audio in a synchronized fashion over the USB 2.0 interface. CPU requirements were minimal because all depth-acquisition algorithms ran on the SoC itself.
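The registration step can be illustrated with a minimal sketch, assuming ideal pinhole camera models; the intrinsics and extrinsics below are invented for the example, not the Carmine or Capri calibration:

```python
import numpy as np

def register_depth_pixel(u, v, z, K_depth, K_color, R, t):
    """Map one depth-camera pixel (u, v) at depth z (meters) into the
    color image, so the RGB and D streams line up per pixel.

    Steps: back-project the depth pixel to a 3D point in the depth
    camera's frame, transform it into the color camera's frame with the
    rigid extrinsics (R, t), then project with the color intrinsics.
    """
    # Back-project: pixel -> 3D point in the depth camera frame.
    p_depth = z * np.linalg.inv(K_depth) @ np.array([u, v, 1.0])
    # Rigid transform into the color camera frame.
    p_color = R @ p_depth + t
    # Perspective projection onto the color image plane.
    uvw = K_color @ p_color
    return uvw[:2] / uvw[2]

# Illustrative intrinsics; with identical cameras and no offset the
# pixel maps onto itself, which is a quick sanity check.
K = np.array([[525.0, 0.0, 320.0], [0.0, 525.0, 240.0], [0.0, 0.0, 1.0]])
print(register_depth_pixel(320, 240, 1.0, K, K, np.eye(3), np.zeros(3)))
```

Running this per pixel (in hardware, on the SoC) is what lets downstream software treat the output as a single aligned RGB-D image.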

PrimeSense sensors

  • Carmine 1.08
  • Carmine 1.09 (short range)
  • Capri 1.25 (embedded)

PrimeSense embedded its technology in its own sensors, the Carmine 1.08 and Carmine 1.09. Capri 1.25, touted by the company as the world's smallest 3D sensor, debuted at International CES 2013. [8]

PrimeSense middleware

PrimeSense developed the NiTE middleware, which analyzed data from the hardware, and modules for OpenNI providing gesture and skeleton tracking; these were released only as binaries. [9] According to the NiTE LinkedIn page: "Including computer vision algorithms, NiTE identifies users and tracks their movements, and provides the framework API for implementing natural-interaction UI controls based on gestures." [10] The system could then interpret specific gestures, making completely hands-free control of electronic devices a reality. [1] Its capabilities included:

  • Identification of people, their body properties, movements and gestures
  • Classification of objects such as furniture
  • Location of walls and floor [6]
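As a toy illustration of gesture interpretation in the spirit of such middleware (this is not NiTE's actual API; the function and thresholds are invented for the sketch), a "wave" can be recognized from the horizontal trajectory of a tracked hand:

```python
def detect_wave(hand_x_positions, min_direction_changes=3, min_travel=0.1):
    """Flag a 'wave' when a tracked hand's horizontal position reverses
    direction several times, each leg covering enough distance (meters)
    to reject jitter. Positions would come from skeleton tracking.
    """
    changes = 0
    prev_dir = 0
    for a, b in zip(hand_x_positions, hand_x_positions[1:]):
        delta = b - a
        if abs(delta) < min_travel:
            continue  # ignore small movements (sensor noise)
        direction = 1 if delta > 0 else -1
        if prev_dir and direction != prev_dir:
            changes += 1
        prev_dir = direction
    return changes >= min_direction_changes

# An oscillating hand registers as a wave; a steady drift does not.
print(detect_wave([0.0, 0.3, 0.0, 0.3, 0.0]))  # True
print(detect_wave([0.0, 0.1, 0.2, 0.3, 0.4]))  # False
```

Real gesture recognizers are considerably more robust (state machines or learned classifiers over full skeleton data), but the principle of mapping a joint trajectory to a discrete event is the same.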

Markets

PrimeSense's original focus was on the gaming and living room markets, [11] but expanded to include:

  • Television
  • PC and mobile
  • Interactive displays
  • Retail
  • Robotics
  • Healthcare

Partners

PrimeSense was a founding member of OpenNI, an industry-led, non-profit organization formed to certify and promote the compatibility and interoperability of Natural Interaction (NI) devices, applications and middleware. The original OpenNI project was shut down by Apple after it bought PrimeSense, but Occipital kept a fork of OpenNI 2 alive as open-source software for the SDK of its Structure product.[ citation needed ]

The company provided the 3D sensing technology for the first Kinect, previously known as Project Natal. [25] [26]

Awards

The company was selected by MIT Technology Review magazine as one of world's 50 most innovative companies for 2011. [27]

PrimeSense won Design Team of the Year at the EE Times 2011 Annual Creativity in Electronics (ACE) Awards. [28]

PrimeSense was honored as a World Economic Forum Technology Pioneer in 2013. [29]


References

  1. "PrimeSense - MIT Technology Review". 2.technologyreview.com. Retrieved 3 August 2018.
  2. "Beyond Kinect, PrimeSense wants to drive 3D sensing into more everyday consumer gear". Venturebeat.com. 20 January 2013. Retrieved 3 August 2018.
  3. Hester, Blake (January 14, 2020). "All the money in the world couldn't make Kinect happen". Polygon . Retrieved January 16, 2020.
  4. "Apple Confirms Acquisition of 3-D Sensor Startup PrimeSense". Allthingsd.com. Retrieved 3 August 2018.
  5. "PrimeSense: Motion Control Beyond the Kinect". Forwardthinking.pcmag.com. Retrieved 3 August 2018.
  6. "Technology - PrimeSense". Archived from the original on 2013-11-02. Retrieved 2013-02-26.
  7. "How Microsoft's PrimeSense-based Kinect Really Works". Electronicdesign.com. 16 March 2011. Retrieved 3 August 2018.
  8. "PrimeSense shows off tiny Capri sensor, yearns for 3D-sensing future (hands-on)". Engadget.com. Retrieved 3 August 2018.
  9. "PrimeSenseNite - Debian Wiki". Wiki.debian.org. Retrieved 3 August 2018.
  10. "NiTE Middleware | LinkedIn". Archived from the original on 2013-04-08.
  11. Perlroth, Nicole. "For PrimeSense, Microsoft's Kinect Is Just the Beginning". Forbes.com. Retrieved 3 August 2018.
  12. "Asus, PrimeSense Reveals Motion Sensing for PC". Tomshardware.com. 3 January 2011. Retrieved 3 August 2018.
  13. "Official Structure Sensor Store - Give Your iPad 3D Vision". structure.io. Retrieved 3 August 2018.
  14. "www.firecube-multimedia.com/lang/en". Archived 2015-01-08 at the Wayback Machine.
  15. "BEAM By EyeClick - Interactive Gaming Projector System". BEAM By EyeClick. Retrieved 3 August 2018.
  16. "ViiMotion - Interactive Software Device by Covii". Vimeo. Retrieved 3 August 2018.
  17. "New Ayotle AnyTouch 3D Tactile Touchscreen - 3D Visualization World". Archived from the original on 2016-03-07. Retrieved 2013-02-26.
  18. "Finger-precise hand gesture tracking - Remote Presence Magazine". Remotepresence.org. 18 October 2012. Retrieved 3 August 2018.
  19. "Bodymetrics pods scan customers' bodies to get their clothing measurements". Gizmag.com. 26 October 2011. Retrieved 3 August 2018.
  20. "Matterport". Angel.co. Retrieved 3 August 2018.
  21. "CRIIF | Visionary Excellence". Archived from the original on 2015-04-14. Retrieved 2013-02-26.
  22. "iRobot reorganizes, forms new unit focused on Ava and other emerging technologies". Engadget.com. Retrieved 3 August 2018.
  23. "Microsoft names Israel's PrimeSense as Project Natal partner - Globes". Globes. 31 March 2010. Retrieved 3 August 2018.
  24. Takahashi, Dean (13 January 2011). "PrimeSense raises round for motion-control chips". Reuters.com. Retrieved 3 August 2018.
  25. Gohring, Nancy (29 July 2010). "Mundie: Microsoft's Research Depth Enabled Kinect". PC World. Retrieved 2 August 2010.
  26. Corp., Microsoft (10 March 2010). "PrimeSense Supplies 3-D-Sensing Technology to "Project Natal" for Xbox 360". Microsoft . Retrieved 2 August 2010.
  27. "The 50 Smartest Companies of 2017 might not be what you think". Technologyreview.com. Retrieved 3 August 2018.
  28. "EE Times honors 2011 ACE Award winners". Archived from the original on 2012-10-14. Retrieved 2013-02-26.
  29. "Technology Pioneers 2013". Reports.weforum.org. Retrieved 3 August 2018.