Omek Interactive

Type: Private
Industry: Computer software
Founded: 2006
Founders: Janine Kutliroff, Gershom Kutliroff, Shai Yagur
Headquarters: Israel
Products: Beckon Development Suite, Grasp Development Suite
Website: www.omekinteractive.com

Omek Interactive was a venture-backed technology company that developed advanced motion-sensing software for human-computer interaction. It was co-founded in 2007 by Janine Kutliroff and Gershom Kutliroff. [1]

Company overview

Omek Interactive was an Israeli company that developed gesture recognition and motion tracking software for use with 3D depth sensor cameras. Omek's middleware was sensor-independent, supporting multiple cameras, including those based on structured light and time-of-flight technology. Omek's software worked with the following cameras: the PrimeSense-based Microsoft Kinect, PMD Technologies CamCube, SoftKinetic DepthSense, and Panasonic D-Imager.

In July 2011, Intel Capital led Omek's Series C financing round of $7 million. Among the investors was Eliyahu Haddad, who invested $2 million and was also given a seat on the board. [2]

Intel confirmed on July 16, 2013 that it had acquired Omek. [3]

Technology

Omek's flagship product is the Beckon Development Suite, [4] which processes raw depth data from 3D cameras into intelligence about the humans in the scene through background subtraction, joint tracking, skeleton identification, and gesture recognition. The Beckon software solution includes the Gesture Authoring Tool, [4] a machine-learning tool that enables developers to create gestures without writing any code. Beckon is no longer available as a free, non-commercial download from the Omek website. [5]
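The pipeline stages described above (background subtraction, joint tracking, gesture recognition) can be illustrated with a minimal sketch. All names and thresholds here are hypothetical, invented for illustration; they are not Omek's actual Beckon API:

```python
import numpy as np

def subtract_background(depth_frame, background, threshold_mm=50):
    """Keep only pixels that differ from a static background frame by
    more than the threshold: a crude depth-based background subtraction."""
    mask = np.abs(depth_frame - background) > threshold_mm
    return np.where(mask, depth_frame, 0)

def recognize_gesture(wrist_positions):
    """Toy gesture classifier: a wrist travelling far enough to the
    right is labelled a swipe. Real middleware would use trained models."""
    xs = [x for x, _ in wrist_positions]
    if len(xs) >= 2 and xs[-1] - xs[0] > 300:  # millimetres travelled
        return "swipe_right"
    return None

# Simulated stream of tracked wrist positions (x, y) in millimetres
track = [(0, 900), (120, 905), (260, 910), (340, 900)]
print(recognize_gesture(track))  # -> swipe_right
```

In a real system, the tracked joint positions would come from skeleton identification running on the background-subtracted depth frames, not from a hand-written list.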

In March 2012, at the Embedded Vision Alliance Summit, [6] Omek announced the upcoming availability of its Grasp Development Suite. [5] Grasp focuses on close-range hand and finger tracking and gesture recognition at distances of 1 meter and less. At the same event, Omek also announced support for the Texas Instruments BeagleBoard-xM evaluation board, a low-cost, low-power embedded computing platform. [5]

Related Research Articles

Gesture recognition – topic in computer science and language technology

Gesture recognition is a topic in computer science and language technology with the goal of interpreting human gestures via mathematical algorithms. It is a subdiscipline of computer vision. Gestures can originate from any bodily motion or state, but commonly originate from the face or hand. Focuses in the field include emotion recognition from the face and hand gesture recognition, since both are forms of expression. Users can make simple gestures to control or interact with devices without physically touching them. Many approaches use cameras and computer vision algorithms to interpret sign language, but the identification and recognition of posture, gait, proxemics, and human behaviors are also subjects of gesture recognition techniques. Gesture recognition can be seen as a way for computers to begin to understand human body language, building a richer bridge between machines and humans than older text user interfaces or even GUIs, which still limit the majority of input to keyboard and mouse, and letting people interact naturally without any mechanical devices.

Tangible user interface

A tangible user interface (TUI) is a user interface in which a person interacts with digital information through the physical environment. The initial name was Graspable User Interface, which is no longer used. The purpose of TUI development is to empower collaboration, learning, and design by giving physical forms to digital information, thus taking advantage of the human ability to grasp and manipulate physical objects and materials.

Wired glove – input device for human–computer interaction

A wired glove is an input device for human–computer interaction worn like a glove.

Physical computing

Physical computing involves interactive systems that can sense and respond to the world around them. While this definition is broad enough to encompass systems such as smart automotive traffic control systems or factory automation processes, it is not commonly used to describe them. In a broader sense, physical computing is a creative framework for understanding human beings' relationship to the digital world. In practical use, the term most often describes handmade art, design or DIY hobby projects that use sensors and microcontrollers to translate analog input to a software system, and/or control electro-mechanical devices such as motors, servos, lighting or other hardware.

The stereo camera approach is a method of distilling a noisy video signal into a coherent data set that a computer can process into actionable symbolic objects, or abstractions. It is one of many approaches used in the broader fields of computer vision and machine vision.

Multi-touch

In computing, multi-touch is technology that enables a surface to recognize the presence of more than one point of contact with the surface at the same time. Multi-touch originated at CERN, MIT, the University of Toronto, Carnegie Mellon University and Bell Labs in the 1970s; CERN began using multi-touch screens as early as 1976 for the controls of the Super Proton Synchrotron. Capacitive multi-touch displays were popularized by Apple's iPhone in 2007. Plural-point awareness may be used to implement additional functionality, such as pinch to zoom or activating subroutines attached to predefined gestures.
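The pinch-to-zoom gesture mentioned above reduces to comparing the distance between two contact points across frames. A minimal sketch, assuming touch coordinates are reported as (x, y) pixel pairs (the function name and coordinates are illustrative, not any particular touch API):

```python
import math

def pinch_scale(p1_old, p2_old, p1_new, p2_new):
    """Zoom factor implied by two touch points moving: the ratio of
    the new finger separation to the old one (>1 zooms in, <1 out)."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    return dist(p1_new, p2_new) / dist(p1_old, p2_old)

# Fingers move apart: separation doubles, so content zooms in 2x
print(pinch_scale((100, 100), (200, 100), (50, 100), (250, 100)))  # -> 2.0
```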

PlayStation Eye – digital camera device for the PlayStation 3

The PlayStation Eye is a digital camera device, similar to a webcam, for the PlayStation 3. The technology uses computer vision and gesture recognition to process images taken by the camera. This allows players to interact with games using motion and color detection as well as sound through its built-in microphone array. It is the successor to the EyeToy for the PlayStation 2, which was released in 2003.

Microsoft PixelSense – interactive surface computing platform by Microsoft

Microsoft PixelSense was an interactive surface computing platform that allowed one or more people to use and touch real-world objects and share digital content at the same time. The PixelSense platform consisted of software and hardware products that combined vision-based multi-touch PC hardware, 360-degree multiuser application design, and Windows software to create a natural user interface (NUI).

ZCam is a brand of time-of-flight camera products for video applications by Israeli developer 3DV Systems. The ZCam supplements full-color video camera imaging with real-time range imaging information, allowing for the capture of video in 3D.

Time-of-flight camera – range imaging camera system

A time-of-flight camera, also known as a time-of-flight sensor, is a range imaging camera system that measures the distance between the camera and the subject for each point of the image based on time-of-flight, the round-trip time of an artificial light signal provided by a laser or an LED. Laser-based time-of-flight cameras are part of a broader class of scannerless LIDAR, in which the entire scene is captured with each laser pulse, as opposed to point-by-point with a laser beam as in scanning LIDAR systems. Time-of-flight camera products for civil applications began to emerge around 2000, as semiconductor processes allowed the production of components fast enough for such devices. The systems cover ranges from a few centimeters up to several kilometers.
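The round-trip principle above reduces to d = c·t/2: the light covers the camera-to-subject path twice, so the measured round-trip time is halved. A quick illustrative calculation:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_distance(round_trip_s):
    """Distance implied by a time-of-flight measurement: the signal
    travels to the subject and back, so divide the round trip by two."""
    return C * round_trip_s / 2

# A 20-nanosecond round trip corresponds to roughly 3 metres,
# which is why ToF sensors need picosecond-scale timing resolution
# to resolve centimetre-scale depth differences.
print(round(tof_distance(20e-9), 3))  # -> 2.998
```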

Sony Depthsensing Solutions SA/NV, formerly known as SoftKinetic, is a Belgian company founded by Eric Krzeslo and Thibaud Remacle which develops gesture recognition hardware and software for real-time range imaging (3D) cameras. It was founded in July 2007. SoftKinetic provides gesture recognition solutions based on its technology to the interactive digital entertainment, consumer electronics, health & fitness, and serious game industries. SoftKinetic technology has been applied to interactive digital signage and advergaming, interactive television, and physical therapy.

Kinect – motion-sensing input device for the Xbox 360 and Xbox One

Kinect is a line of motion sensing input devices produced by Microsoft and first released in 2010. The devices generally contain RGB cameras, and infrared projectors and detectors that map depth through either structured light or time of flight calculations, which can in turn be used to perform real-time gesture recognition and body skeletal detection, among other capabilities. They also contain microphones that can be used for speech recognition and voice control.

Input device – provides data and signals to a computer

In computing, an input device is a piece of equipment used to provide data and control signals to an information processing system, such as a computer or information appliance. Examples of input devices include keyboards, mice, scanners, cameras, joysticks, and microphones.

PrimeSense – former Israeli company

PrimeSense was an Israeli 3D sensing company based in Tel Aviv. PrimeSense had offices in Israel, North America, Japan, Singapore, Korea, China and Taiwan. PrimeSense was bought by Apple Inc. for $360 million on November 24, 2013.

OpenNI or Open Natural Interaction is an industry-led non-profit organization and open source software project focused on certifying and improving interoperability of natural user interfaces and organic user interfaces for Natural Interaction (NI) devices, applications that use those devices and middleware that facilitates access and use of such devices.

Leap Motion – former American company

Leap Motion, Inc. was an American company that manufactured and marketed a computer hardware sensor device that supports hand and finger motions as input, analogous to a mouse, but requires no hand contact or touching. In 2016, the company released new software designed for hand tracking in virtual reality. The company was sold to the British company Ultrahaptics in 2019, which rebranded the two companies under the new name Ultraleap.

Project Digits is a Microsoft Research project at Microsoft's computer science laboratory at the University of Cambridge; researchers from Newcastle University and the University of Crete are also involved. The project is led by David Kim, a Microsoft Research researcher and PhD student in computer science at Newcastle University. Digits is an input device mounted on the wrist that captures and displays a complete 3D graphical representation of the user's hand on screen, without any external sensing device or hand covering such as a data glove. The project aims to make gesture-controlled interfaces completely hands-free, with greater mobility and accuracy: it allows users to interact with hardware while moving from room to room or walking down the street, without any line-of-sight connection to the hardware.

Tango (platform) – mobile computer vision platform for Android developed by Google

Tango was an augmented reality computing platform, developed and authored by the Advanced Technology and Projects (ATAP), a skunkworks division of Google. It used computer vision to enable mobile devices, such as smartphones and tablets, to detect their position relative to the world around them without using GPS or other external signals. This allowed application developers to create user experiences that include indoor navigation, 3D mapping, physical space measurement, environmental recognition, augmented reality, and windows into a virtual world.

Intel RealSense – suite of depth and tracking technologies

Intel RealSense Technology is a product range of depth and tracking technologies designed to give machines and devices depth perception capabilities. The technologies, owned by Intel, are used in autonomous drones, robots, AR/VR, and smart home devices, among many other broad-market products.

Gestigon is a software development company founded in September 2011 by Sascha Klement, Erhardt Barth, and Thomas Martinetz. The company develops software for gesture control and body tracking based on 3D depth data.

References

  1. Speaker Profiles (2012-06-11). "Kishor – Professional Jewish Women - Speaker Profiles". Professionaljewishwomen.org. Retrieved 2012-09-15.
  2. "Gesture recognition co Omek Interactive raises $7m". Globes. 2011-07-13. Retrieved 2012-09-15.
  3. Reisinger, Don. "Intel buys Omek Interactive for $40M -- report". CNET.
  4. "Omek Beckon Development Suite" (PDF). Omekinteractive.com. Retrieved 2012-09-29.
  5. "Intel | Data Center Solutions, IoT, and PC Innovation". Intel.
  6. "Omek Interactive's Beckon: Gesture Interfaces Now On The Texas Instruments-Based BeagleBoard-xM". www.embedded-vision.com. 2012-06-21. Retrieved 2012-09-15.