TouchLight

TouchLight is an imaging touch screen and 3D display for gesture-based interaction. [1] It was developed by Andrew D. Wilson, a Microsoft Research employee, and first shown publicly in 2004. [2] The technology was licensed to EON Reality in July 2006. [3]

Abilities

TouchLight can record and project simultaneously, and its 3D capabilities allow it to serve almost as a mirror. The same principle could be used to link two TouchLights together, allowing two people anywhere in the world to communicate as if they were sitting on opposite sides of the same desk. The screen can capture a high-definition image of anything placed against it; the image is then displayed in 2D, and the user can manipulate its size, position, and orientation by performing the corresponding actions with their hands. A microphone built into the screen detects vibration, allowing the user to change settings simply by tapping the screen. [4]
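
How hand movements map to image transforms is not spelled out in the sources; a common approach in touch interfaces is to derive translation, rotation, and scale from two tracked contact points. The Python sketch below is a minimal illustration of that idea under this assumption, not TouchLight's actual algorithm, and every name in it is hypothetical.

    import math

    def two_point_transform(p1_old, p2_old, p1_new, p2_new):
        # Vector between the two contact points, before and after the move.
        old_dx, old_dy = p2_old[0] - p1_old[0], p2_old[1] - p1_old[1]
        new_dx, new_dy = p2_new[0] - p1_new[0], p2_new[1] - p1_new[1]
        # Scale: ratio of the distances between the two contacts.
        scale = math.hypot(new_dx, new_dy) / math.hypot(old_dx, old_dy)
        # Rotation: change in angle of the vector joining the contacts.
        rotation = math.atan2(new_dy, new_dx) - math.atan2(old_dy, old_dx)
        # Translation: movement of the midpoint between the contacts.
        tx = (p1_new[0] + p2_new[0]) / 2 - (p1_old[0] + p2_old[0]) / 2
        ty = (p1_new[1] + p2_new[1]) / 2 - (p1_old[1] + p2_old[1]) / 2
        return tx, ty, rotation, scale

    # Hands move apart and turn: the image grows 2x and rotates 90 degrees.
    print(two_point_transform((0, 0), (100, 0), (0, 0), (0, 200)))
    # -> (-50.0, 100.0, 1.5707963267948966, 2.0)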

Cost

In 2006, the high-end product cost upwards of $60,000. [5]

Related Research Articles

The graphical user interface is a form of user interface that allows users to interact with electronic devices through graphical icons and audio indicators such as primary notation, instead of text-based UIs, typed command labels, or text navigation. GUIs were introduced in reaction to the perceived steep learning curve of command-line interfaces (CLIs), which require commands to be typed on a computer keyboard.

Augmented reality: View of the real world with computer-generated supplementary features

Augmented reality (AR) is an interactive experience of a real-world environment where the objects that reside in the real world are enhanced by computer-generated perceptual information, sometimes across multiple sensory modalities, including visual, auditory, haptic, somatosensory and olfactory. AR can be defined as a system that incorporates three basic features: a combination of real and virtual worlds, real-time interaction, and accurate 3D registration of virtual and real objects. The overlaid sensory information can be constructive (adding to the natural environment) or destructive (masking parts of it). The experience is seamlessly interwoven with the physical world such that it is perceived as an immersive aspect of the real environment. In this way, augmented reality alters one's ongoing perception of a real-world environment, whereas virtual reality completely replaces the user's real-world environment with a simulated one. Augmented reality is related to two largely synonymous terms: mixed reality and computer-mediated reality.

Touchscreen: Input and output device

A touchscreen or touch screen is the assembly of both an input and output ('display') device. The touch panel is normally layered on top of an electronic visual display of an information processing system. The display is often an LCD, AMOLED or OLED display, while the system is usually a laptop, tablet, or smartphone. A user can give input or control the information processing system through simple or multi-touch gestures by touching the screen with a special stylus or one or more fingers. Some touchscreens work with ordinary or specially coated gloves, while others may only work with a special stylus or pen. The user can use the touchscreen to react to what is displayed and, if the software allows, to control how it is displayed; for example, zooming to increase the text size.

Handheld projector: Image projector in a handheld device

A handheld projector is an image projector in a handheld device. It was developed as a computer display device for compact portable devices such as mobile phones, personal digital assistants, and digital cameras, which have sufficient storage capacity to handle presentation materials but are too small to accommodate a display screen that an audience can see easily. Handheld projectors involve miniaturized hardware and software that can project digital images onto a nearby viewing surface.

Gesture recognition: Topic in computer science and language technology

Gesture recognition is a topic in computer science and language technology with the goal of interpreting human gestures via mathematical algorithms. It is a subdiscipline of computer vision. Gestures can originate from any bodily motion or state but commonly originate from the face or hand. Current focuses in the field include emotion recognition from the face and hand gesture recognition. Users can use simple gestures to control or interact with devices without physically touching them. Many approaches have been made using cameras and computer vision algorithms to interpret sign language, but the identification and recognition of posture, gait, proxemics, and human behaviors is also the subject of gesture recognition techniques. Gesture recognition can be seen as a way for computers to begin to understand human body language, building a richer bridge between machines and humans than primitive text user interfaces or even GUIs, which still limit the majority of input to keyboard and mouse, and allowing humans to interact naturally without any mechanical devices.
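
As a toy illustration of interpreting a gesture "via mathematical algorithms", the Python sketch below reduces a sampled touch trajectory to one of four swipe gestures. It is nothing like the statistical models production systems use, and all names and thresholds are invented for the example.

    def classify_swipe(points, min_distance=50.0):
        # `points` is a chronological list of (x, y) samples in screen
        # coordinates, with y increasing downward.
        if len(points) < 2:
            return None
        dx = points[-1][0] - points[0][0]
        dy = points[-1][1] - points[0][1]
        if max(abs(dx), abs(dy)) < min_distance:
            return None  # too short to count as a deliberate gesture
        if abs(dx) >= abs(dy):
            return "right" if dx > 0 else "left"
        return "down" if dy > 0 else "up"

    print(classify_swipe([(10, 300), (80, 295), (200, 290)]))  # -> "right"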

In computer user interfaces, a cursor is an indicator used to show the current position for user interaction on a computer monitor or other display device that will respond to input from a text input or pointing device. The mouse cursor is also called a pointer, owing to its resemblance in usage to a pointing stick.

Virtual keyboard: Software component

A virtual keyboard is a software component that allows the input of characters without the need for physical keys. The interaction with the virtual keyboard happens mostly via a touchscreen interface, but can also take place in a different form in virtual or augmented reality.

Multi-touch: Technology

In computing, multi-touch is technology that enables a surface to recognize the presence of more than one point of contact with the surface at the same time. The origins of multi-touch began at CERN, MIT, the University of Toronto, Carnegie Mellon University and Bell Labs in the 1970s. CERN started using multi-touch screens as early as 1976 for the controls of the Super Proton Synchrotron. Capacitive multi-touch displays were popularized by Apple's iPhone in 2007. Plural-point awareness may be used to implement additional functionality, such as pinch to zoom, or to activate certain subroutines attached to predefined gestures.
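
Neither pinch to zoom nor the gesture-to-subroutine dispatch is specified further here, so the following Python sketch is only a minimal illustration of both ideas, with invented function and gesture names: a pinch's zoom factor is the ratio of the distances between the two contact points, and a small registry attaches callbacks to predefined gesture names.

    import math

    def pinch_zoom_factor(old_points, new_points):
        # Zoom factor implied by two contacts moving apart or together.
        old_d = math.dist(old_points[0], old_points[1])
        new_d = math.dist(new_points[0], new_points[1])
        return new_d / old_d if old_d else 1.0

    gesture_handlers = {}

    def on_gesture(name):
        # Decorator that attaches a callback to a predefined gesture name.
        def register(func):
            gesture_handlers[name] = func
            return func
        return register

    @on_gesture("pinch")
    def handle_pinch(factor):
        print(f"zoom by {factor:.2f}")

    # Two contacts move from 100 px apart to 150 px apart: zoom 1.5x.
    gesture_handlers["pinch"](pinch_zoom_factor([(0, 0), (100, 0)],
                                                [(0, 0), (150, 0)]))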

Pen computing: Uses a stylus and tablet/touchscreen

Pen computing refers to any computer user interface using a pen or stylus and tablet, rather than input devices such as a keyboard or a mouse.

Microsoft PixelSense: Interactive surface computing platform by Microsoft

Microsoft PixelSense is an interactive surface computing platform that allows one or more people to use and touch real-world objects and share digital content at the same time. The PixelSense platform consists of software and hardware products that combine vision-based multi-touch PC hardware, 360-degree multiuser application design, and Windows software to create a natural user interface (NUI).

Surface computing is the use of a specialized computer GUI in which traditional GUI elements are replaced by intuitive, everyday objects. Instead of a keyboard and mouse, the user interacts with a surface. Typically the surface is a touch-sensitive screen, though other surface types like non-flat three-dimensional objects have been implemented as well. It has been said that this more closely replicates the familiar hands-on experience of everyday object manipulation.

In computing, 3D interaction is a form of human-machine interaction in which users are able to move and perform interaction in 3D space. Both the human and the machine process information in which the physical position of elements in 3D space is relevant.

In computing, a natural user interface (NUI), or natural interface, is a user interface that is effectively invisible, and remains invisible as the user continuously learns increasingly complex interactions. The word "natural" is used because most computer interfaces use artificial control devices whose operation has to be learned. Examples include voice assistants such as Alexa and Siri, touch and multi-touch interactions on today's mobile phones and tablets, and touch interfaces invisibly integrated into textiles and furniture.

GestureTek is an American interactive technology company headquartered in Silicon Valley, California, with offices in Toronto and Ottawa, Ontario, and in Asia.

DiamondTouch: Multiple person interface device

The DiamondTouch table is a multi-touch, interactive PC interface product from Circle Twelve Inc. It is a human interface device that has the capability of allowing multiple people to interact simultaneously while identifying which person is touching where. The technology was originally developed at Mitsubishi Electric Research Laboratories (MERL) in 2001 and later licensed to Circle Twelve Inc in 2008. The DiamondTouch table is used to facilitate face-to-face collaboration, brainstorming, and decision-making, and users include construction management company Parsons Brinckerhoff, the Methodist Hospital, and the US National Geospatial-Intelligence Agency (NGA).

A virtual touch screen (VTS) is a user interface system that augments virtual objects into reality, either through a projector or an optical display, using sensors to track a person's interaction with the object. For instance, using a display and a rear projector system, a person could create images that look three-dimensional and appear to float in midair. Some systems use an optical head-mounted display to augment the virtual objects onto the transparent display, using sensors to determine visual and physical interactions with the virtual objects projected.

Microsoft Tablet PC: Microsoft term for some tablet computers

Microsoft Tablet PC is a term coined by Microsoft for pen-enabled tablet computers conforming to hardware specifications devised by Microsoft and announced in 2001, running a licensed copy of the Windows XP Tablet PC Edition operating system or a derivative thereof.

Smudge attack: Discerning a password via screen smudges

A smudge attack is an information extraction attack that discerns the password input of a touchscreen device, such as a cell phone or tablet computer, from fingerprint smudges. A team of researchers at the University of Pennsylvania was the first to investigate this type of attack, in 2010. An attack occurs when an unauthorized user is in possession of, or near, the device of interest. The attacker relies on detecting the oily smudges produced and left behind by the user's fingers to find the pattern or code needed to access the device and its contents. Simple cameras, lights, fingerprint powder, and image processing software can be used to capture the fingerprint deposits created when the user unlocks their device. Under proper lighting and camera settings, the finger smudges can be easily detected, and the heaviest smudges can be used to infer the most frequent input swipes or taps.

Project Digits is a Microsoft Research project under Microsoft's computer science laboratory at the University of Cambridge; researchers from Newcastle University and the University of Crete are also involved. The project is led by David Kim, a PhD student in computer science at Newcastle University working with Microsoft Research. Digits is an input device that can be mounted on the wrist and that captures and displays a complete 3D graphical representation of the user's hand on screen without any external sensing device or hand covering such as a data glove. The project aims to make gesture-controlled interfaces completely hands-free, with greater mobility and accuracy, allowing the user to interact with hardware while moving from room to room or walking down the street, without any line-of-sight connection to the hardware.

Force Touch: Force-sensing touch technology developed by Apple Inc.

Force Touch is a haptic technology developed by Apple Inc. that enables trackpads and touchscreens to distinguish between various levels of force being applied to their surfaces. It uses pressure sensors to add another method of input to Apple's devices. The technology was first unveiled on September 9, 2014, during the introduction of the Apple Watch. Starting with the Apple Watch, Force Touch has been incorporated into many products in Apple's lineup, notably MacBooks and the Magic Trackpad 2. On iPhone models, the technology is known as 3D Touch. It brings usability enhancements to the software by offering a third dimension for input: shortcuts, previews, pressure-sensitive drawing, and system-wide features let users interact further with displayed content by applying force to the input surface.

References

  1. Andrew D. Wilson. "TouchLight: an imaging touch screen and display for gesture-based interaction" (PDF). Microsoft Research. Archived from the original (PDF) on 2006-08-29. Retrieved 2006-12-08.
  2. "First look at Microsoft Research Touch-Light (Video)". TheChannel9Team. Channel 9. 2004-08-24. Retrieved 2007-05-19.
  3. "EON Reality Will Be First Company to Bring Next-Generation 3-D Technology to Market". 2006-07-18. Archived from the original on 2006-11-16. Retrieved 2006-12-08.
  4. "Touchlight". CAVI Digital Experience. 2005-12-15. Archived from the original on 2012-03-15. Retrieved 2006-12-10.
  5. Laurie Sullivan (2006-07-19). "Microsoft Licenses 3D TouchLight IP". TechnoWeb. Archived from the original on 2008-07-23.