Company type | Private
---|---
Industry | Computer software
Founded | 2006
Founder | Janine Kutliroff, Gershom Kutliroff, Shai Yagur
Headquarters | Israel
Products | Beckon Development Suite, Grasp Development Suite
Website | www.omekinteractive.com
Omek Interactive was a venture-backed technology company developing advanced motion-sensing software for human-computer interaction. Omek was co-founded in 2006 by Janine Kutliroff and Gershom Kutliroff. [1]
Omek Interactive was an Israeli company that developed gesture recognition and motion-tracking software for use in combination with 3D depth-sensor cameras. Omek's middleware was sensor-independent, supporting multiple cameras, including those based on structured-light and time-of-flight technologies. Omek's software worked with the following cameras: the PrimeSense-based Microsoft Kinect, the PMD Technologies CamCube, the SoftKinetic DepthSense, and the Panasonic D-Imager.
In July 2011, Intel Capital led the company's Series C financing round of $7 million. Among the investors was Eliyahu Haddad, who invested $2 million and was also given a seat on the board. [2]
Intel confirmed on July 16, 2013, that it had acquired Omek. [3]
Omek's flagship product was the Beckon Development Suite, [4] which converted raw depth data from 3D cameras into intelligence about the people in a scene through background subtraction, joint tracking, skeleton identification, and gesture recognition. The Beckon software solution included the Gesture Authoring Tool, [4] a machine-learning tool that enabled developers to create gestures without writing any code. Beckon is no longer available as a free, non-commercial download from the Omek website. [5]
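Beckon's API was never published, so its internals cannot be shown here; the first stage of the pipeline described above, background subtraction on a depth frame, can however be sketched in a few lines. The function name, the tolerance value, and the scene values below are illustrative assumptions, not Omek's implementation.

```python
import numpy as np

def subtract_background(depth_frame, background, tolerance=50):
    """Return a boolean foreground mask: pixels that read significantly
    closer to the camera than a recorded empty-scene background.
    Depth values are assumed to be in millimetres; 0 means "no reading"."""
    valid = depth_frame > 0
    closer = depth_frame < (background - tolerance)
    return valid & closer

# Hypothetical scene: an empty room at 3000 mm, with a person-sized
# patch at ~1500 mm in the centre of a tiny 4x4 depth frame.
background = np.full((4, 4), 3000)
frame = background.copy()
frame[1:3, 1:3] = 1500

mask = subtract_background(frame, background)
print(mask.sum())  # 4 foreground pixels
```

In a real pipeline the foreground mask would then feed the later stages (joint tracking, skeleton fitting, gesture classification), which require far more machinery than this sketch.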
In March 2012, at the Embedded Vision Alliance Summit, [6] Omek announced the upcoming availability of its Grasp Development Suite. [5] Grasp focused on close-range hand and finger tracking and gesture recognition at distances of 1 meter or less. At the same event, Omek also announced support for Texas Instruments' BeagleBoard-xM evaluation board, a low-cost, low-power, embedded computing platform. [5]
Machine vision is the technology and methods used to provide imaging-based automatic inspection and analysis for such applications as automatic inspection, process control, and robot guidance, usually in industry. Machine vision refers to many technologies, software and hardware products, integrated systems, actions, methods, and expertise. Machine vision as a systems engineering discipline can be considered distinct from computer vision, a form of computer science. It attempts to integrate existing technologies in new ways and apply them to solve real-world problems. The term is the prevalent one for these functions in industrial automation environments but is also used for these functions in other environments, such as vehicle guidance.
Motion capture is the process of recording the movement of objects or people. It is used in military, entertainment, sports, and medical applications, and for validation of computer vision and robotics. In films, television shows, and video games, motion capture refers to recording actions of human actors and using that information to animate digital character models in 2D or 3D computer animation. When it includes face and fingers or captures subtle expressions, it is often referred to as performance capture. In many fields, motion capture is sometimes called motion tracking, but in filmmaking and games, motion tracking usually refers more to match moving.
Gesture recognition is an area of research and development in computer science and language technology concerned with the recognition and interpretation of human gestures. A subdiscipline of computer vision, it employs mathematical algorithms to interpret gestures.
A stereo camera is a type of camera with two or more lenses, each with a separate image sensor or film frame. This allows the camera to simulate human binocular vision and therefore gives it the ability to capture three-dimensional images, a process known as stereo photography. Stereo cameras may be used for making stereoviews and 3D pictures for movies, or for range imaging. The distance between the lenses in a typical stereo camera (the baseline) is about the distance between a person's eyes, roughly 6.35 cm, though a longer baseline produces more extreme three-dimensionality.
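For range imaging, the baseline matters because depth is recovered from the disparity between the two views via the standard pinhole-stereo relation Z = f·B/d. A minimal sketch, with an assumed focal length and disparity chosen only for illustration:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    # Pinhole stereo relation: depth Z = focal length * baseline / disparity.
    # Focal length and disparity are in pixels, baseline in metres.
    return focal_px * baseline_m / disparity_px

# Assumed example: the 6.35 cm "interocular" baseline mentioned above,
# a 700 px focal length, and a measured disparity of 10 px.
z = depth_from_disparity(700, 0.0635, 10)
print(round(z, 4))  # 4.445 metres
```

The formula also shows why a longer baseline helps: for the same depth, disparity grows with B, making distant objects easier to resolve.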
Physical computing involves interactive systems that can sense and respond to the world around them. While this definition is broad enough to encompass systems such as smart automotive traffic control systems or factory automation processes, it is not commonly used to describe them. In a broader sense, physical computing is a creative framework for understanding human beings' relationship to the digital world. In practical use, the term most often describes handmade art, design or DIY hobby projects that use sensors and microcontrollers to translate analog input to a software system, and/or control electro-mechanical devices such as motors, servos, lighting or other hardware.
The stereo-camera approach is a method of distilling a noisy video signal into a coherent data set that a computer can process into actionable symbolic objects, or abstractions. It is one of many approaches used in the broader fields of computer vision and machine vision.
The PlayStation Eye is a digital camera device, similar to a webcam, for the PlayStation 3. The technology uses computer vision and gesture recognition to process images taken by the camera. This allows players to interact with games using motion and color detection as well as sound through its built-in microphone array. It is the successor to the EyeToy for the PlayStation 2, which was released in 2003.
ZCam is a brand of time-of-flight camera products for video applications by Israeli developer 3DV Systems. The ZCam supplements full-color video camera imaging with real-time range imaging information, allowing for the capture of video in 3D.
A time-of-flight camera, also known as time-of-flight sensor, is a range imaging camera system for measuring distances between the camera and the subject for each point of the image based on time-of-flight, the round trip time of an artificial light signal, as provided by a laser or an LED. Laser-based time-of-flight cameras are part of a broader class of scannerless LIDAR, in which the entire scene is captured with each laser pulse, as opposed to point-by-point with a laser beam such as in scanning LIDAR systems. Time-of-flight camera products for civil applications began to emerge around 2000, as the semiconductor processes allowed the production of components fast enough for such devices. The systems cover ranges of a few centimeters up to several kilometers.
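The underlying geometry of time-of-flight ranging is simple: the light travels to the subject and back, so the distance is half the round-trip time multiplied by the speed of light. A minimal sketch (the 20 ns figure is an illustrative value, not the specification of any particular camera):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_distance(round_trip_seconds):
    # The pulse travels out to the subject and back,
    # so halve the total path length.
    return C * round_trip_seconds / 2

# A 20 ns round trip corresponds to roughly 3 m.
print(round(tof_distance(20e-9), 3))  # 2.998
```

The nanosecond scale of these round trips is why, as noted above, such cameras only became practical once semiconductor components were fast enough.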
Sony Depthsensing Solutions SA/NV, formerly known as SoftKinetic Systems, is a Belgian company formed by the merger of Optrima NV (founded by André Miodezky, Maarten Kuijk, Daniël Van Nieuwenhove, Ward Van der Tempel, Riemer Grootjans, and Tomas Van den Hauwe) and SoftKinetic SA (founded by Eric Krzeslo, Thibaud Remacle, Gilles Pinault, and Xavier Baele). Sony Depthsensing Solutions develops gesture recognition hardware and software for real-time range imaging (3D) cameras. SoftKinetic was founded in July 2007, providing gesture recognition solutions based on its technology to the interactive digital entertainment, consumer electronics, health and fitness, and serious game industries. SoftKinetic technology has been applied to interactive digital signage and advergaming, interactive television, and physical therapy.
In computing, 3D interaction is a form of human-machine interaction where users are able to move and perform interaction in 3D space. Both human and machine process information where the physical position of elements in the 3D space is relevant.
GestureTek is an American-based interactive technology company headquartered in Silicon Valley, California, with offices in Toronto and Ottawa, Ontario and Asia.
Kinect is a discontinued line of motion sensing input devices produced by Microsoft and first released in 2010. The devices generally contain RGB cameras, and infrared projectors and detectors that map depth through either structured light or time of flight calculations, which can in turn be used to perform real-time gesture recognition and body skeletal detection, among other capabilities. They also contain microphones that can be used for speech recognition and voice control.
In computing, an input device is a piece of equipment used to provide data and control signals to an information processing system, such as a computer or information appliance. Examples of input devices include keyboards, computer mice, scanners, cameras, joysticks, and microphones.
PrimeSense was an Israeli 3D sensing company based in Tel Aviv. PrimeSense had offices in Israel, North America, Japan, Singapore, Korea, China and Taiwan. PrimeSense was bought by Apple Inc. for $360 million on November 24, 2013.
OpenNI or Open Natural Interaction is an industry-led non-profit organization and open source software project focused on certifying and improving interoperability of natural user interfaces and organic user interfaces for Natural Interaction (NI) devices, applications that use those devices and middleware that facilitates access and use of such devices.
Project Digits is a Microsoft Research project run at Microsoft's computer science laboratory at the University of Cambridge; researchers from Newcastle University and the University of Crete are also involved. The project is led by David Kim, a Microsoft Research PhD and a PhD student in computer science at Newcastle University. Digits is an input device that can be mounted on the wrist and that captures and displays a complete 3D graphical representation of the user's hand on screen, without any external sensing device or hand covering such as a data glove. The project aims to make gesture-controlled interfaces completely hands-free, with greater mobility and accuracy, allowing the user to interact with hardware while moving from room to room or walking down the street, without any line-of-sight connection to the hardware.
Intel RealSense Technology, formerly known as Intel Perceptual Computing, is a product range of depth and tracking technologies designed to give machines and devices depth-perception capabilities. The technologies, owned by Intel, are used in autonomous drones, robots, AR/VR, and smart home devices, among many other broad-market products.
PMD Technologies is a developer of CMOS semiconductor 3D time-of-flight (ToF) components and a provider of engineering support in the field of digital 3D imaging. The company is named after the Photonic Mixer Device (PMD) technology used in its products to detect 3D data in real time. The corporate headquarters of the company is located in Siegen, Germany.
Gestigon is a software development company founded in September 2011 to develop software for gesture control and body tracking based on 3D depth data.