LM3LABS

LM3LABS Corporation
Industry: Technology, software
Founded: Tokyo, Japan (2004)
Headquarters: Tokyo, Japan
Area served: Worldwide
Key people: Yumiko Misaki (CEO), Nicolas Loeillot (COO)
Products: Computer vision, sensors, software technology, augmented reality solutions, cloud computing
Website: www.lm3labs.com

LM3LABS is a start-up company that develops hardware and software for motion-based control of computers.[1]

History

LM3Labs was founded in 2004 to commercialize products based on technologies developed in part at the French National Centre for Scientific Research (CNRS). These technologies use finger tracking, gesture interaction, body interaction, and eye and face recognition to let users interact with computer systems through gestures rather than through hardware such as keyboards and mice. A prototype installation was deployed in the executive showroom at the headquarters of mobile phone operator NTT DoCoMo.[2]

Computer interactivity

The technologies developed by LM3LABS combine active and passive gesture recognition with displays to present and control information.[2] One system, called Catchyoo, controls digital signage to track user interaction with advertisements.[3] The company has also introduced AirStrike, which allows touchless gesture control of computers, such as moving a window or turning a page.[1] It has also demonstrated combining AirStrike with a holographic display to create an interactive hologram.[4]
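LM3LABS has not published how AirStrike is implemented, so the following is only an illustrative sketch of how touchless window dragging of this general kind can work: a tracked hand position is mapped onto screen coordinates, and a "grab" pose toggles dragging. All names here (HandSample, drive_window, the printed command) are invented for the example.

```python
# Hypothetical sketch of touchless window dragging driven by a hand tracker.
# HandSample and the "move window" command are stand-ins for whatever a real
# tracker and window system provide; AirStrike's actual API is not public.

from dataclasses import dataclass

@dataclass
class HandSample:
    x: float        # normalized horizontal hand position, 0.0..1.0
    y: float        # normalized vertical hand position, 0.0..1.0
    gripping: bool  # True while the tracker reports a "grab" pose

def drive_window(samples, screen_w=1920, screen_h=1080):
    """Translate a stream of hand samples into window-drag commands."""
    last = None
    for s in samples:
        px, py = int(s.x * screen_w), int(s.y * screen_h)
        if s.gripping and last is None:
            last = (px, py)                        # grab: start a drag here
        elif s.gripping:
            dx, dy = px - last[0], py - last[1]
            print(f"move window by ({dx}, {dy})")  # stand-in for a window-system call
            last = (px, py)
        else:
            last = None                            # release: stop dragging

# Example: grab, drag slightly to the right, release.
drive_window([HandSample(0.50, 0.5, True),
              HandSample(0.52, 0.5, True),
              HandSample(0.52, 0.5, False)])
```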

Related Research Articles

Pointing device Human interface device for computers

A pointing device is a human interface device that allows a user to input spatial data to a computer. CAD systems and graphical user interfaces (GUI) allow the user to control and provide data to the computer using physical gestures by moving a hand-held mouse or similar device across the surface of the physical desktop and activating switches on the mouse. Movements of the pointing device are echoed on the screen by movements of the pointer and other visual changes. Common gestures are point and click and drag and drop.
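A minimal sketch of the relative pointing described above, assuming an invented (dx, dy) motion-event format: each report from the device is accumulated into the pointer position and clamped to the display bounds, so pointer movement echoes device movement.

```python
# Minimal sketch of relative pointing: accumulate device motion deltas into
# an on-screen pointer position, clamped so the pointer stays on screen.
# The (dx, dy) event format is invented for illustration.

def track_pointer(motion_events, width=1920, height=1080):
    x, y = width // 2, height // 2          # start the pointer mid-screen
    for dx, dy in motion_events:
        x = min(max(x + dx, 0), width - 1)  # clamp to the display bounds
        y = min(max(y + dy, 0), height - 1)
        yield x, y

# Example: three mouse-like motion reports; the last one hits the screen edge.
for pos in track_pointer([(10, 0), (-5, 3), (2000, 0)]):
    print(pos)   # (970, 540), (965, 543), (1919, 543)
```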

Augmented reality View of the real world with computer-generated supplementary features

Augmented reality (AR) is an interactive experience of a real-world environment where the objects that reside in the real world are enhanced by computer-generated perceptual information, sometimes across multiple sensory modalities, including visual, auditory, haptic, somatosensory and olfactory. AR can be defined as a system that incorporates three basic features: a combination of real and virtual worlds, real-time interaction, and accurate 3D registration of virtual and real objects. The overlaid sensory information can be constructive, or destructive. This experience is seamlessly interwoven with the physical world such that it is perceived as an immersive aspect of the real environment. In this way, augmented reality alters one's ongoing perception of a real-world environment, whereas virtual reality completely replaces the user's real-world environment with a simulated one. Augmented reality is related to two largely synonymous terms: mixed reality and computer-mediated reality.
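The 3D-registration requirement can be made concrete with the pinhole projection at its core: a virtual object anchored at a world point must be drawn at the pixel where the camera would see that point. The sketch below simplifies the camera pose to a pure translation (a real AR system also applies the camera's rotation) and uses assumed intrinsics (f, cx, cy).

```python
# Sketch of the projection step behind 3D registration: a virtual object
# anchored at a world point is drawn at the pixel where the camera would
# see that point. Camera pose is simplified to a translation here; real
# AR pipelines also apply the camera's rotation.

def project(point, cam_pos, f=800.0, cx=640.0, cy=360.0):
    """Pinhole projection of a 3D world point into pixel coordinates."""
    x = point[0] - cam_pos[0]   # transform into camera coordinates
    y = point[1] - cam_pos[1]
    z = point[2] - cam_pos[2]
    if z <= 0:
        return None             # behind the camera: nothing to draw
    u = cx + f * x / z          # perspective divide onto the image plane
    v = cy + f * y / z
    return u, v

# A virtual label anchored 2 m in front of the camera, 0.5 m to the right:
print(project((0.5, 0.0, 2.0), (0.0, 0.0, 0.0)))  # (840.0, 360.0)
```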

Touchscreen Input and output device

A touchscreen or touch screen is the assembly of both an input and output ('display') device. The touch panel is normally layered on the top of an electronic visual display of an information processing system. The display is often an LCD, AMOLED or OLED display while the system is usually a laptop, tablet, or smartphone. A user can give input or control the information processing system through simple or multi-touch gestures by touching the screen with a special stylus or one or more fingers. Some touchscreens use ordinary or specially coated gloves to work while others may only work using a special stylus or pen. The user can use the touchscreen to react to what is displayed and, if the software allows, to control how it is displayed; for example, zooming to increase the text size.

Mixed reality Merging of real and virtual worlds to produce new environments

Mixed reality (MR) is the merging of real and virtual worlds to produce new environments and visualizations, where physical and digital objects co-exist and interact in real time. Mixed reality does not take place exclusively in either the physical world or the virtual world, but is a hybrid of augmented reality and virtual reality. To mark the difference: augmented reality takes place in the physical world, with information or objects added virtually as an overlay; virtual reality immerses the user in a fully virtual world without intervention from the physical world.

Handheld projector Image projector in a handheld device

A handheld projector is an image projector in a handheld device. It was developed as a computer display device for compact portable devices such as mobile phones, personal digital assistants, and digital cameras, which have sufficient storage capacity to handle presentation materials but are too small to accommodate a display screen that an audience can see easily. Handheld projectors involve miniaturized hardware, and software that can project digital images onto a nearby viewing surface.

Gesture recognition Topic in computer science and language technology

Gesture recognition is a topic in computer science and language technology with the goal of interpreting human gestures via mathematical algorithms. It is a subdiscipline of computer vision. Gestures can originate from any bodily motion or state, but commonly originate from the face or hand. Current focuses in the field include emotion recognition from the face and hand gesture recognition. Users can employ simple gestures to control or interact with devices without physically touching them. Many approaches use cameras and computer vision algorithms to interpret sign language, and the identification and recognition of posture, gait, proxemics, and human behaviors is also the subject of gesture recognition techniques. Gesture recognition can be seen as a way for computers to begin to understand human body language, building a richer bridge between machines and humans than older text user interfaces or even GUIs, which still limit the majority of input to keyboard and mouse; it lets users interact naturally without any mechanical devices.
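As a toy instance of such an algorithm, a tracked hand trajectory can be classified as a left or right swipe by comparing its net displacement with its total path length; production systems use far richer features or learned models, and the thresholds below are arbitrary.

```python
import math

# Toy gesture classifier: decide whether a tracked hand trajectory is a
# left swipe, a right swipe, or neither. A swipe must cover enough distance
# and be reasonably straight (net displacement close to total path length).

def classify_swipe(points, min_dist=0.3, straightness=0.8):
    """points: list of (x, y) hand positions in normalized screen coords."""
    (x0, y0), (x1, y1) = points[0], points[-1]
    net = math.hypot(x1 - x0, y1 - y0)                 # straight-line displacement
    path = sum(math.hypot(bx - ax, by - ay)            # total distance travelled
               for (ax, ay), (bx, by) in zip(points, points[1:]))
    if net < min_dist or path == 0 or net / path < straightness:
        return None                                    # too short or too wiggly
    return "right" if x1 > x0 else "left"

# A nearly straight rightward motion across most of the screen:
print(classify_swipe([(0.1, 0.5), (0.3, 0.5), (0.6, 0.51), (0.8, 0.5)]))  # right
```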

Tangible user interface

A tangible user interface (TUI) is a user interface in which a person interacts with digital information through the physical environment. The initial name was Graspable User Interface, which is no longer used. The purpose of TUI development is to empower collaboration, learning, and design by giving physical forms to digital information, thus taking advantage of the human ability to grasp and manipulate physical objects and materials.

Multi-touch Technology

In computing, multi-touch is technology that enables a surface to recognize the presence of more than one point of contact with the surface at the same time. Multi-touch input originated at CERN, MIT, the University of Toronto, Carnegie Mellon University, and Bell Labs in the 1970s. CERN started using multi-touch screens as early as 1976 for the controls of the Super Proton Synchrotron. Capacitive multi-touch displays were popularized by Apple's iPhone in 2007. Plural-point awareness may be used to implement additional functionality, such as pinch to zoom or to activate certain subroutines attached to predefined gestures.
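The pinch-to-zoom functionality mentioned above reduces to simple arithmetic on two contact points: the zoom factor is the ratio of the current distance between the touches to their distance when the gesture began. A minimal sketch:

```python
import math

# Pinch-to-zoom reduced to its core arithmetic: the zoom factor is the
# ratio of the current finger separation to the separation at gesture start.

def pinch_zoom(start_touches, current_touches):
    """Each argument is a pair of (x, y) contact points."""
    def dist(a, b):
        return math.hypot(b[0] - a[0], b[1] - a[1])
    d0 = dist(*start_touches)
    d1 = dist(*current_touches)
    return d1 / d0 if d0 else 1.0   # >1 means zoom in, <1 means zoom out

# Fingers that started 100 px apart and spread to 150 px apart:
print(pinch_zoom(((100, 300), (200, 300)), ((75, 300), (225, 300))))  # 1.5
```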

Pen computing Uses a stylus and tablet/touchscreen

Pen computing refers to any computer user interface that uses a pen or stylus and tablet, rather than input devices such as a keyboard or mouse.

A holographic display is a type of display that utilizes light diffraction to create a virtual three-dimensional image. Holographic displays are distinguished from other forms of 3D displays in that they do not require the aid of any special glasses or external equipment for a viewer to see the image.
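The underlying diffraction can be illustrated with the grating equation d·sin(θ) = m·λ, which gives the angle θ at which a periodic structure of pitch d steers light of wavelength λ into diffraction order m; the pitch and wavelength below are example values only.

```python
import math

# The grating equation d*sin(theta) = m*lambda underlies diffractive
# displays: it gives the angle theta at which a structure of pitch d
# steers light of wavelength lam into diffraction order m.

def diffraction_angle(pitch_nm, wavelength_nm, order=1):
    s = order * wavelength_nm / pitch_nm
    if abs(s) > 1:
        return None                      # this order does not propagate
    return math.degrees(math.asin(s))

# Green light (532 nm) on a 1500 nm pitch grating, first order:
print(round(diffraction_angle(1500, 532), 1))  # about 20.8 degrees
```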

In computing, 3D interaction is a form of human-machine interaction where users are able to move and perform interaction in 3D space. Both human and machine process information where the physical position of elements in the 3D space is relevant.

SixthSense Gesture-based wearable computer system

SixthSense is a gesture-based wearable computer system developed at MIT Media Lab by Steve Mann in 1994, 1997, and 1998, and further developed by Pranav Mistry in 2009; both developed hardware and software for headworn and neckworn versions of it. It comprises a headworn or neck-worn pendant that contains both a data projector and a camera. Headworn versions built at MIT Media Lab in 1997 combined cameras and illumination systems for interactive photographic art and also included gesture recognition.

GestureTek is an American interactive technology company headquartered in Silicon Valley, California, with offices in Toronto and Ottawa, Ontario, and in Asia.

Leap Motion Former American company

Leap Motion, Inc. was an American company that manufactured and marketed a computer hardware sensor device supporting hand and finger motions as input, analogous to a mouse, but requiring no hand contact or touching. In 2016, the company released new software designed for hand tracking in virtual reality. The company was sold to the British company Ultrahaptics in 2019, which sells the product under the brand name Ultraleap.

Optical head-mounted display Type of wearable device

An optical head-mounted display (OHMD) is a wearable device that has the capability of reflecting projected images as well as allowing the user to see through it, similar to augmented reality technology. OHMD technology has existed since 1997 in various forms, but despite a number of attempts from industry, has yet to be commercialised.

Crunchfish is a Swedish technology company in Malmö that develops gesture recognition software for the mobile phone and tablet market. Crunchfish was founded in 2010 with an initial focus on creating innovations for the iOS and Android app markets. Gesture recognition using a standard webcam as the main gesture sensor was one of its core innovations, and the company is now focusing on touchless interaction based on camera-based gestures. In April 2013, the company was selected as a '2013 Red Herring Top 100' company by Red Herring magazine. Crunchfish provides gesture-sensing software, a set of customized mid-air gesture recognition solutions named A3D™, to global mobile device manufacturers and app developers. Crunchfish cooperates with smartphone manufacturers to enable its gesture-sensing technology in their partners' mobile devices. Crunchfish developed the touchless functions in Chinese manufacturer Gionee's smartphone Elife E6, launched in China in July 2013 and in India and Africa in August 2013.

The Human Media Lab (HML) is a research laboratory in human-computer interaction at Queen's University's School of Computing in Kingston, Ontario. Its goals are to advance user interface design by creating and empirically evaluating disruptive new user interface technologies, and to educate graduate students in this process. The Human Media Lab was founded in 2000 by Prof. Roel Vertegaal and employs an average of 12 graduate students.

Smartglasses Wearable computer glasses

Smartglasses or smart glasses are wearable computer glasses that add information alongside or to what the wearer sees. Alternatively, smartglasses are sometimes defined as wearable computer glasses that are able to change their optical properties at runtime. Smart sunglasses which are programmed to change tint by electronic means are an example of the latter type of smartglasses.

Microsoft HoloLens Mixed reality smartglasses

Microsoft HoloLens, known under development as Project Baraboo, is a pair of mixed reality smartglasses developed and manufactured by Microsoft. HoloLens was the first head-mounted display running the Windows Mixed Reality platform under the Windows 10 operating system. The tracking technology used in HoloLens can trace its lineage to Kinect, an add-on for Microsoft's Xbox game console introduced in 2010.

Gestigon is a software development company founded in September 2011 by Sascha Klement, Erhardt Barth, and Thomas Martinetz. The company develops software for gesture control and body tracking based on 3D depth data.

References

  1. Mollman, Steve, "Does touchless tech point the way ahead?", CNN, 11 September 2008, accessed 12 November 2013.
  2. Chambre de Commerce et d'Industrie Française du Japon, "LM3LABS inscrit l'interactivité dans notre quotidien" ("LM3LABS brings interactivity into our daily lives"), archived 2013-11-12 at the Wayback Machine, CCIFJ News, 21 May 2012, accessed 12 November 2013.
  3. Sheng, Ellen, "Billboards, Store Displays Get Digital", Wall Street Journal, Eastern Edition, 26 October 2005.
  4. Ricker, Thomas, "Video: LM3Labs' AirStrike interactive holograms, because they can", Engadget, 22 April 2008, accessed 12 November 2013.