| Founded | 1986 |
| --- | --- |
| Founder | Vincent John Vincent and Francis MacDougall |
| Headquarters | Silicon Valley, California |
| Website | gesturetek.com |
GestureTek is a US-based interactive technology company headquartered in Silicon Valley, California, with offices in Toronto and Ottawa, Ontario, and in Asia. [1]
Founded in 1986 by Canadians Vincent John Vincent [2] and Francis MacDougall, [3] the privately held company develops and licenses gesture recognition software based on computer vision techniques. The partners invented video gesture control in 1986 and received their base patent in 1996 for the GestPoint video gesture control system. GestPoint is a camera-enabled video tracking software system that translates hand and body movement into computer control. [4] The system enables users to navigate and control interactive multimedia and menu-based content, engage in virtual reality game play, experience immersion in an augmented reality environment, or interact with a consumer device (such as a television, mobile phone or set-top box) without touch-based peripherals. [5] [6] [7]
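As an illustration of the general principle behind camera-based gesture control (not GestureTek's proprietary GestPoint pipeline), a minimal frame-differencing tracker can be sketched in Python with OpenCV: the largest region of frame-to-frame change is treated as the moving hand, and its centroid becomes a pointer position. The threshold and window name below are illustrative assumptions.

```python
# Minimal sketch of camera-based pointer control via frame differencing.
# Illustrative only; GestureTek's GestPoint implementation is proprietary.
import cv2

cap = cv2.VideoCapture(0)                      # default webcam
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Differencing against the previous frame highlights moving pixels.
    diff = cv2.absdiff(gray, prev_gray)
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        # Treat the centroid of the largest moving region as the "pointer".
        c = max(contours, key=cv2.contourArea)
        m = cv2.moments(c)
        if m["m00"] > 0:
            x, y = int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])
            cv2.circle(frame, (x, y), 8, (0, 255, 0), -1)
    cv2.imshow("tracking", frame)
    prev_gray = gray
    if cv2.waitKey(1) & 0xFF == 27:            # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```

Production systems add calibration, background modeling and temporal smoothing, but mapping tracked motion to pointer control is the core idea.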
GestureTek's gesture interface applications include multi-touch and 3D camera tracking. Its multi-touch technology powers the multi-touch table in Melbourne's Eureka Tower, [8] and a GestureTek multi-touch table with object recognition is installed at the New York City Visitors Center. [9] Telefónica operates a multi-touch window built on GestureTek technology. [10] GestureTek's 3D tracking technology is used in a 3D television prototype from Hitachi and in various digital signage and display solutions based on 3D interaction. [11]
GestureTek has been awarded eight patents:

- 5,534,917 [12] (Video Gesture Control Motion Detection)
- 7,058,204 [13] (Multiple Camera Control System; point-to-control base patent)
- 7,421,093 [14] (Multiple Camera Tracking System for Interfacing With an Application)
- 7,227,526 [15] (Stereo Camera Control; 3D-Vision Image Control System)
- 7,379,563 [16] (Two-Handed Movement Tracker Tracking Bi-Manual Movements)
- 7,379,566 [17] (Optical Flow-Based Tilt Sensor for Phone Tilt Control)
- 7,389,591 [18] (Phone Tilt for Typing and Menus; Orientation-Sensitive Signal Output)
- 7,430,312 [19] (Five-Camera 3D Face Capture)
GestureTek's software and patents have been licensed by Microsoft for the Xbox 360, [20] Sony for the EyeToy, [21] NTT DoCoMo for its mobile phones [22] and Hasbro for the ION Educational Gaming System. [23] In addition to licensing software, GestureTek fabricates interactive gesture-controlled display systems with natural user interfaces for interactive advertising, games and presentations. [24]
GestureTek's natural user interface virtual reality system has also been the subject of research by universities and hospitals for applications in physical therapy [25] and physical rehabilitation. [26]
In 2008, GestureTek received the Mobile Innovation Global Award [27] from the GSMA for its software-based, gesture-controlled user interface for mobile games and applications. The technology is used by Java platform integration providers [28] and mobile developers. [29] Katamari Damacy Mobile is one example of a gesture-controlled mobile game powered by GestureTek software. [30]
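The camera-based approach to mobile gesture control can be illustrated with optical flow: tilting or moving the phone shifts the entire camera image, and the average flow vector yields a two-dimensional control signal. The sketch below uses OpenCV's Farneback optical flow as a stand-in; the parameter values are illustrative assumptions, not the method of GestureTek's patented tilt sensor (patent 7,379,566 above).

```python
# Sketch: deriving a tilt-like control signal from camera optical flow.
# Parameters are illustrative assumptions, not GestureTek's method.
import cv2

def control_signal(prev_gray, gray):
    """Return the mean optical-flow vector (dx, dy) between two frames.

    Moving or tilting the device shifts the whole image, so the global
    mean flow approximates the direction and speed of device motion.
    """
    flow = cv2.calcOpticalFlowFarneback(
        prev_gray, gray, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
    return float(flow[..., 0].mean()), float(flow[..., 1].mean())

cap = cv2.VideoCapture(0)
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    dx, dy = control_signal(prev_gray, gray)
    # A game would smooth (dx, dy) and map it to steering or menu input.
    prev_gray = gray
```

A game engine would typically low-pass filter this signal before mapping it to in-game steering, which is the style of control used in camera-driven mobile titles.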
Other companies in the interactive-projection industry for marketing and retail experiences include Po-motion Inc., [31] Touchmagix [32] and Tokyo-based LM3LABS. [33]