Company type | Subsidiary of Apple Inc.
---|---
Industry | Fabless semiconductor
Founded | 2005
Defunct | 2013
Fate | Acquired by Apple Inc.
Headquarters | Israel
Parent | Apple Inc.
Website | primesense
PrimeSense was an Israeli 3D sensing company headquartered in Tel Aviv, with offices in Israel, North America, Japan, Singapore, Korea, China, and Taiwan. It was bought by Apple Inc. for $360 million on November 24, 2013.
PrimeSense was a fabless semiconductor company that supplied sensory-input products for consumer and commercial markets.
PrimeSense's technology was originally applied to gaming but later found use in other fields. [1] The company was best known for licensing the hardware design and chip used in Microsoft's Kinect motion-sensing system for the Xbox 360 in 2010. [2] Founded in 2005 to explore depth-sensing cameras, PrimeSense demonstrated the technology to developers at the 2006 Game Developers Conference. Microsoft, which had been investigating 3D camera technology for its Xbox line of consoles on its own, engaged PrimeSense after the conference to help establish the direction the technology needed to take to reach consumer-grade products, while Microsoft itself improved additional software aspects and incorporated machine learning to aid motion detection. [3]
On November 24, 2013, Apple Inc. confirmed the purchase of PrimeSense for $360 million. [4]
PrimeSense's depth acquisition was enabled by "light coding" technology. The process coded the scene with near-infrared light, a projected pattern that returns distorted depending on where objects are (structured light). A standard off-the-shelf CMOS image sensor then read the coded light back from the scene, and algorithms triangulated the distortions to extract 3D data. The product analyzed scenery in three dimensions in software, so that devices could interact with users. [5] [6]
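The triangulation step can be illustrated with a short, hypothetical sketch: in a structured-light system of this kind, depth is inversely proportional to the disparity (shift) of the projected pattern on the sensor. The parameters below are invented example values, not PrimeSense's actual calibration.

```cpp
#include <cstdio>

// Illustrative only: recovering depth from the shift (disparity) of a
// projected near-IR pattern, the core idea behind structured-light
// triangulation. All numbers are made-up example parameters.
double depth_from_disparity(double focal_px,      // focal length in pixels
                            double baseline_m,    // projector-sensor baseline
                            double disparity_px)  // pattern shift on the sensor
{
    // A point at infinity shows no shift; nearer surfaces shift the
    // pattern proportionally, so depth falls off as 1/disparity.
    return focal_px * baseline_m / disparity_px;
}

int main() {
    // e.g. a 29-pixel shift with a 580 px focal length and 7.5 cm baseline
    std::printf("depth = %.2f m\n", depth_from_disparity(580.0, 0.075, 29.0));
    return 0;
}
```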
The CMOS image sensor worked alongside the visible-light video sensor, enabling the depth map produced by PrimeSense's SoCs, Carmine (PS1080) and Capri (PS1200), to be merged with the color image. The SoCs performed a registration process so that the color (RGB) and depth (D) information were properly aligned. [7] The light-coding infrared patterns were deciphered to produce a VGA-size depth image of the scene. The SoC delivered visible video, depth, and audio information in a synchronized fashion over a USB 2.0 interface, and it had minimal host-CPU requirements because all depth-acquisition algorithms ran on the SoC itself.
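The registration process the SoC performed in hardware can be approximated in software: each depth pixel is back-projected to a 3D point and re-projected through the color camera's intrinsics. The following is a minimal sketch with invented camera parameters, not the PS1080/PS1200 implementation.

```cpp
#include <cstdio>

// Minimal RGB-D registration sketch: map a depth pixel into the color
// image so depth and color line up. Intrinsics and the baseline are
// invented example numbers.
struct Intrinsics { double fx, fy, cx, cy; };

void register_pixel(const Intrinsics& depth_cam, const Intrinsics& color_cam,
                    double u, double v, double z_m,  // depth pixel + depth (m)
                    double baseline_m,               // depth->color x-offset
                    double& uc, double& vc)          // output color pixel
{
    // Back-project the depth pixel to a 3D point in the depth camera frame.
    double x = (u - depth_cam.cx) * z_m / depth_cam.fx;
    double y = (v - depth_cam.cy) * z_m / depth_cam.fy;
    // Translate into the color camera frame (pure x-offset assumed here).
    x += baseline_m;
    // Project the 3D point into the color image.
    uc = color_cam.fx * x / z_m + color_cam.cx;
    vc = color_cam.fy * y / z_m + color_cam.cy;
}

int main() {
    Intrinsics d{580, 580, 320, 240}, c{525, 525, 320, 240};
    double uc, vc;
    register_pixel(d, c, 400, 300, 1.5, 0.025, uc, vc);
    std::printf("depth (400,300) -> color (%.1f, %.1f)\n", uc, vc);
    return 0;
}
```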
PrimeSense embedded its technology in its own sensors, the Carmine 1.08 and Carmine 1.09. The Capri 1.25, touted by the company as the world's smallest 3D sensor, debuted at International CES 2013. [8]
PrimeSense developed the NiTE middleware, modules for OpenNI that analyzed data from the hardware to provide gesture and skeleton tracking; they were released only as binaries. [9] According to the NiTE LinkedIn page: "Including computer vision algorithms, NiTE identifies users and tracks their movements, and provides the framework API for implementing natural-interaction UI controls based on gestures." [10] The system could then interpret specific gestures, making completely hands-free control of electronic devices a reality. [1]
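For illustration, a minimal skeleton-tracking loop against the NiTE2 C++ API might look like the sketch below. It follows the API names as documented in the NiTE2 SDK, but since the binaries are no longer distributed, treat it as a hedged reconstruction rather than verified working code.

```cpp
#include <NiTE.h>   // NiTE2 C++ API header (PrimeSense NiTE SDK)
#include <cstdio>

// Hedged reconstruction of a minimal NiTE2 skeleton-tracking loop,
// based on the documented NiTE2 SDK API.
int main() {
    if (nite::NiTE::initialize() != nite::STATUS_OK) return 1;

    nite::UserTracker tracker;
    if (tracker.create() != nite::STATUS_OK) return 1;  // default device

    nite::UserTrackerFrameRef frame;
    for (int f = 0; f < 300; ++f) {                     // ~10 s at 30 fps
        if (tracker.readFrame(&frame) != nite::STATUS_OK) continue;

        const nite::Array<nite::UserData>& users = frame.getUsers();
        for (int i = 0; i < users.getSize(); ++i) {
            const nite::UserData& user = users[i];
            if (user.isNew()) {
                // Ask the middleware to start tracking newly seen users.
                tracker.startSkeletonTracking(user.getId());
            } else if (user.getSkeleton().getState() == nite::SKELETON_TRACKED) {
                const nite::SkeletonJoint& head =
                    user.getSkeleton().getJoint(nite::JOINT_HEAD);
                std::printf("user %d head at (%.0f, %.0f, %.0f) mm\n",
                            (int)user.getId(), head.getPosition().x,
                            head.getPosition().y, head.getPosition().z);
            }
        }
    }
    nite::NiTE::shutdown();
    return 0;
}
```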
PrimeSense's original focus was on the gaming and living-room markets, [11] but the company later expanded into additional markets.
PrimeSense was a founding member of OpenNI, an industry-led, non-profit organization formed to certify and promote the compatibility and interoperability of Natural Interaction (NI) devices, applications, and middleware. The original OpenNI project was shut down after Apple acquired PrimeSense, but Occipital kept a forked version of OpenNI 2 active as open-source software serving as the SDK for its Structure product.[ citation needed ]
The company provided the 3D sensing technology for the first Kinect, previously known as Project Natal. [25] [26]
The company was selected by MIT Technology Review magazine as one of the world's 50 most innovative companies for 2011. [27]
PrimeSense won Design Team of the Year at the EE Times 2011 Annual Creativity in Electronics (ACE) Awards. [28]
PrimeSense was honored as a World Economic Forum Technology Pioneer in 2013. [29]
Gesture recognition is an area of research and development in computer science and language technology concerned with the recognition and interpretation of human gestures. A subdiscipline of computer vision, it employs mathematical algorithms to interpret gestures.
In computing, a motion controller is a type of input device that uses accelerometers, gyroscopes, cameras, or other sensors to track motion.
ZCam is a brand of time-of-flight camera products for video applications by Israeli developer 3DV Systems. The ZCam supplements full-color video camera imaging with real-time range imaging information, allowing for the capture of video in 3D.
Canesta was a fabless semiconductor company that was founded in April 1999 by Cyrus Bamji, Abbas Rafii, and Nazim Kareemi.
A time-of-flight camera, also known as time-of-flight sensor, is a range imaging camera system for measuring distances between the camera and the subject for each point of the image based on time-of-flight, the round trip time of an artificial light signal, as provided by a laser or an LED. Laser-based time-of-flight cameras are part of a broader class of scannerless LIDAR, in which the entire scene is captured with each laser pulse, as opposed to point-by-point with a laser beam such as in scanning LIDAR systems. Time-of-flight camera products for civil applications began to emerge around 2000, as the semiconductor processes allowed the production of components fast enough for such devices. The systems cover ranges of a few centimeters up to several kilometers.
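The underlying arithmetic is simple: the distance to a surface is half the measured round-trip time multiplied by the speed of light. A toy example:

```cpp
#include <cstdio>

// Toy illustration of the time-of-flight principle: distance is half the
// round-trip time of the light pulse times the speed of light.
int main() {
    const double c = 299792458.0;   // speed of light, m/s
    double round_trip_s = 20e-9;    // example: a 20 ns round trip
    double distance_m = c * round_trip_s / 2.0;
    std::printf("distance = %.2f m\n", distance_m);  // ~3 m
    return 0;
}
```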
Sony Depthsensing Solutions SA/NV, formerly known as SoftKinetic, is a Belgian company founded by Eric Krzeslo, Thibaud Remacle, Gilles Pinault and Xavier Baele which develops gesture recognition hardware and software for real-time range imaging (3D) cameras. It was founded in July 2007. SoftKinetic provides gesture recognition solutions based on its technology to the interactive digital entertainment, consumer electronics, health & fitness, and serious game industries. SoftKinetic technology has been applied to interactive digital signage and advergaming, interactive television, and physical therapy.
In computing, a natural user interface (NUI) or natural interface is a user interface that is effectively invisible, and remains invisible as the user continuously learns increasingly complex interactions. The word "natural" is used because most computer interfaces use artificial control devices whose operation has to be learned. Examples include voice assistants, such as Alexa and Siri, touch and multitouch interactions on today's mobile phones and tablets, and touch interfaces invisibly integrated into textiles and furniture.
Kinect is a line of motion sensing input devices produced by Microsoft and first released in 2010. The devices generally contain RGB cameras, and infrared projectors and detectors that map depth through either structured light or time of flight calculations, which can in turn be used to perform real-time gesture recognition and body skeletal detection, among other capabilities. They also contain microphones that can be used for speech recognition and voice control.
BigPark was a Canadian video game developer owned by Microsoft Studios.
A virtual touch screen (VTS) is a user interface system that augments virtual objects into reality either through a projector or optical display using sensors to track a person's interaction with the object. For instance, using a display and a rear projector system a person could create images that look three-dimensional and appear to float in midair. Some systems utilize an optical head-mounted display to augment the virtual objects onto the transparent display utilizing sensors to determine visual and physical interactions with the virtual objects projected.
OpenNI or Open Natural Interaction is an industry-led non-profit organization and open source software project focused on certifying and improving interoperability of natural user interfaces and organic user interfaces for Natural Interaction (NI) devices, applications that use those devices and middleware that facilitates access and use of such devices.
Leap Motion, Inc. was an American company that manufactured and marketed a computer hardware sensor device that supports hand and finger motions as input, analogous to a mouse, but requires no hand contact or touching. In 2016, the company released new software designed for hand tracking in virtual reality. The company was sold to the British company Ultrahaptics in 2019, which rebranded the two companies under the new name Ultraleap.
Omek Interactive was a venture-backed technology company developing advanced motion-sensing software for human-computer interaction. Omek was co-founded in 2007 by Janine Kutliroff and Gershom Kutliroff.
Project Digits is a Microsoft Research project under Microsoft's computer science laboratory at the University of Cambridge; researchers from Newcastle University and the University of Crete are also involved. The project is led by David Kim, a Microsoft Research PhD scholar and PhD student in computer science at Newcastle University. Digits is an input device mounted on the wrist that captures and displays a complete 3D graphical representation of the user's hand on screen without any external sensing device or hand covering such as a data glove. The project aims to make gesture-controlled interfaces completely hands-free, with greater mobility and accuracy: it allows the user to interact with hardware while moving from room to room or walking down the street, without any line-of-sight connection to that hardware.
IllumiRoom is a Microsoft Research project that augments a television screen with images projected onto the wall and surrounding objects. The current proof-of-concept uses a Kinect sensor and video projector. The Kinect sensor captures the geometry and colors of the area of the room that surrounds the television, and the projector displays video around the television that corresponds to a video source on the television, such as a video game or movie.
Tango was an augmented reality computing platform, developed and authored by the Advanced Technology and Projects (ATAP), a skunkworks division of Google. It used computer vision to enable mobile devices, such as smartphones and tablets, to detect their position relative to the world around them without using GPS or other external signals. This allowed application developers to create user experiences that include indoor navigation, 3D mapping, physical space measurement, environmental recognition, augmented reality, and windows into a virtual world.
Intel RealSense Technology, formerly known as Intel Perceptual Computing, is a product range of depth and tracking technologies designed to give machines and devices depth-perception capabilities. The technologies, owned by Intel, are used in autonomous drones, robots, AR/VR, and smart-home devices, among many other broad-market products.
The machine translation of sign languages has been possible, albeit in a limited fashion, since 1977, when a research project successfully matched English letters from a keyboard to ASL manual-alphabet letters simulated on a robotic hand. These technologies translate signed languages into written or spoken language, and written or spoken language into sign language, without the use of a human interpreter. Sign languages possess different phonological features than spoken languages, which has created obstacles for developers. Developers use computer vision and machine learning to recognize specific phonological parameters and epentheses unique to sign languages, and speech recognition and natural language processing allow interactive communication between hearing and deaf people.
The Augmented Reality Sandtable (ARES) is an interactive, digital sand table that uses augmented reality (AR) technology to create a 3D battlespace map. It was developed by the Human Research and Engineering Directorate (HRED) at the Army Research Laboratory (ARL) to combine the positive aspects of traditional military sand tables with the latest digital technologies to better support soldier training and offer new possibilities of learning. It uses a projector to display a topographical map on top of the sand in a regular sandbox as well as a motion sensor that keeps track of changes in the layout of the sand to appropriately adjust the computer-generated terrain display.
The Azure Kinect DK is a discontinued developer kit and PC peripheral that uses artificial-intelligence sensors for computer-vision and speech models and connects to the Microsoft Azure cloud. It is the successor to the Microsoft Kinect line of sensors.