Surface computer

A surface computer is a computer that interacts with the user through the surface of an ordinary object, rather than through a monitor, keyboard, mouse, or other physical hardware.

The term "surface computer" was first adopted by Microsoft for its PixelSense (codenamed Milan) interactive platform, publicly announced on 30 May 2007. The machine features a horizontally mounted 30-inch display in a coffee-table-like enclosure; users interact with its graphical user interface by touching or dragging their fingertips and physical objects such as paintbrushes across the screen, or by setting real-world items tagged with special bar-code labels on top of it. For example, uploading digital files only requires placing an object (e.g. a Bluetooth-enabled digital camera) on the unit's display. The resulting pictures can then be moved across the screen, and their size and orientation can be adjusted.

PixelSense's internal hardware includes a 2.0 GHz Core 2 Duo processor, 2 GB of memory, an off-the-shelf graphics card, a scratch-proof and spill-proof surface, a DLP projector, and five infrared cameras that detect touch optically, unlike the iPhone, which uses a capacitive display. These expensive components resulted in a price tag of between $12,500 and $15,000 for the hardware.

The first PixelSense units were used as information kiosks in the Harrah's family of casinos. Other customers were T-Mobile, for comparing several cell phones side-by-side, and Sheraton Hotels and Resorts, to serve lobby customers in numerous ways. [1] [2] These products were originally branded "Microsoft Surface" but were renamed "Microsoft PixelSense" on June 18, 2012, after the manufacturer adopted the "Surface" name for its new series of tablet PCs.

Related Research Articles

Graphical user interface User interface allowing interaction through graphical icons and visual indicators

The graphical user interface is a form of user interface that allows users to interact with electronic devices through graphical icons and audio indicators such as primary notation, instead of text-based user interfaces, typed command labels, or text navigation. GUIs were introduced in reaction to the perceived steep learning curve of command-line interfaces (CLIs), which require commands to be typed on a computer keyboard.

Pointing device Human input interface

A pointing device is an input interface that allows a user to input spatial data to a computer. CAD systems and graphical user interfaces (GUI) allow the user to control and provide data to the computer using physical gestures by moving a hand-held mouse or similar device across the surface of the physical desktop and activating switches on the mouse. Movements of the pointing device are echoed on the screen by movements of the pointer and other visual changes. Common gestures are point and click and drag and drop.

Hidden-surface determination Visibility in 3D computer graphics

In 3D computer graphics, hidden-surface determination is the process of identifying which surfaces and parts of surfaces can be seen from a particular viewing angle. A hidden-surface determination algorithm is a solution to the visibility problem, which was one of the first major problems in the field of 3D computer graphics. The process of hidden-surface determination is sometimes called hiding, and such an algorithm is sometimes called a hider. When referring to line rendering it is known as hidden-line removal. Hidden-surface determination is necessary to render a scene correctly: features hidden behind the model itself must not be drawn, so that only the naturally visible portion of the graphic appears.
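One common solution to the visibility problem is the z-buffer (depth-buffer) approach: for each pixel, the renderer keeps only the surface fragment nearest the viewer. The following is a minimal illustrative sketch; the tiny resolution, fragment list, and colors are invented for demonstration.

```python
# Minimal z-buffer sketch: for each pixel, keep only the fragment
# with the smallest depth (closest to the viewer).

WIDTH, HEIGHT = 4, 3

# Each fragment: (x, y, depth, color); smaller depth = closer to viewer.
fragments = [
    (1, 1, 0.8, "blue"),   # far surface at pixel (1, 1)
    (1, 1, 0.3, "red"),    # nearer surface at the same pixel
    (2, 0, 0.5, "green"),
]

depth_buffer = [[float("inf")] * WIDTH for _ in range(HEIGHT)]
frame_buffer = [[None] * WIDTH for _ in range(HEIGHT)]

for x, y, depth, color in fragments:
    if depth < depth_buffer[y][x]:      # this fragment is closer: it wins
        depth_buffer[y][x] = depth
        frame_buffer[y][x] = color

# At pixel (1, 1) the red fragment hides the blue one.
```

Because each pixel is resolved independently, the z-buffer works regardless of the order in which surfaces are drawn, which is one reason it became the dominant hidden-surface technique in hardware rasterizers.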

Smart device

A smart device is an electronic device, generally connected to other devices or networks via different wireless protocols such as Bluetooth, Zigbee, NFC, Wi-Fi, LiFi, 5G, etc., that can operate to some extent interactively and autonomously. Several notable types of smart devices are smartphones, smart vehicles, smart thermostats, smart doorbells, smart locks, smart refrigerators, phablets and tablets, smartwatches, smart bands, smart key chains, and others. The term can also refer to a device that exhibits some properties of ubiquitous computing, including—although not necessarily—artificial intelligence.

Mobile device Small, hand-held computing device

A mobile device is a computer small enough to hold and operate in the hand. Typically, any handheld computer device has an LCD or OLED flatscreen providing a touchscreen interface with digital buttons and a keyboard, or physical buttons along with a physical keyboard. Many such devices can connect to the Internet and interconnect with other devices such as car entertainment systems or headsets via Wi-Fi, Bluetooth, cellular networks or near field communication (NFC). Integrated cameras, the ability to place and receive voice and video telephone calls, video games, and Global Positioning System (GPS) capabilities are common. Power is typically provided by a lithium-ion battery. Mobile devices may run mobile operating systems that allow third-party apps specialized for said capabilities to be installed and run.

Touchscreen Input and output device

A touchscreen or touch screen is the assembly of both an input and output ('display') device. The touch panel is normally layered on the top of an electronic visual display of an information processing system. The display is often an LCD or OLED display while the system is usually a laptop, tablet, or smartphone. A user can give input or control the information processing system through simple or multi-touch gestures by touching the screen with a special stylus or one or more fingers. Some touchscreens use ordinary or specially coated gloves to work while others may only work using a special stylus or pen. The user can use the touchscreen to react to what is displayed and, if the software allows, to control how it is displayed; for example, zooming to increase the text size.

Gesture recognition Topic in computer science and language technology

Gesture recognition is a topic in computer science and language technology with the goal of interpreting human gestures via mathematical algorithms. It is a subdiscipline of computer vision. Gestures can originate from any bodily motion or state but commonly originate from the face or hand. Current focuses in the field include emotion recognition from the face and hand gesture recognition. Users can use simple gestures to control or interact with devices without physically touching them. Many approaches have been made using cameras and computer vision algorithms to interpret sign language. However, the identification and recognition of posture, gait, proxemics, and human behaviors is also the subject of gesture recognition techniques. Gesture recognition can be seen as a way for computers to begin to understand human body language, thus building a richer bridge between machines and humans than primitive text user interfaces or even GUIs, which still limit the majority of input to keyboard and mouse, and making it possible for humans to interact naturally without any mechanical devices. Using the concept of gesture recognition, it is possible to point a finger at the computer screen so that the cursor will move accordingly. This could make conventional input devices such as mice, keyboards, and even touchscreens redundant.
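At its simplest, gesture recognition reduces a stream of tracked positions to a symbolic gesture. The sketch below classifies a touch or pointer trail as a swipe in one of four directions; the function name and the distance threshold are illustrative, not from any particular system.

```python
def classify_swipe(trail, min_dist=50):
    """Classify a trail of (x, y) samples as a swipe direction, or None.

    trail: list of (x, y) position samples over time.
    min_dist: minimum displacement (in pixels) to count as a swipe;
    the threshold is an arbitrary illustrative choice.
    """
    (x0, y0), (x1, y1) = trail[0], trail[-1]
    dx, dy = x1 - x0, y1 - y0
    if max(abs(dx), abs(dy)) < min_dist:
        return None                      # movement too small: not a swipe
    # Dominant axis decides the direction; y grows downward on screens.
    if abs(dx) >= abs(dy):
        return "right" if dx > 0 else "left"
    return "down" if dy > 0 else "up"
```

Real systems add temporal smoothing, velocity thresholds, and statistical models (e.g. hidden Markov models) to handle noisy camera or sensor input, but the core idea of mapping motion trajectories to discrete gestures is the same.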

Tangible user interface

A tangible user interface (TUI) is a user interface in which a person interacts with digital information through the physical environment. The initial name was Graspable User Interface, which is no longer used. The purpose of TUI development is to empower collaboration, learning, and design by giving physical forms to digital information, thus taking advantage of the human ability to grasp and manipulate physical objects and materials.

Tablet computer Mobile computer with integrated display, circuitry and battery, typically shares similarities with smartphones

A tablet computer, commonly shortened to tablet, is a mobile device, typically with a mobile operating system, touchscreen display, processing circuitry, and a rechargeable battery in a single, thin and flat package. Tablets, being computers, do what other personal computers do, but lack some input/output (I/O) abilities that others have. Modern tablets largely resemble modern smartphones, the only differences being that tablets are relatively larger than smartphones, with screens 7 inches (18 cm) or larger, measured diagonally, and may not support access to a cellular network.

Jefferson Han

Jefferson Y. "Jeff" Han is a computer scientist who worked for New York University's (NYU) Courant Institute of Mathematical Sciences until 2006. He is one of the main developers of "multi-touch sensing", which, unlike older touch-screen interfaces, is able to recognize multiple points of contact.

Multi-touch Technology

In computing, multi-touch is technology that enables a surface to recognize the presence of more than one point of contact with the surface at the same time. The origins of multitouch began at CERN, MIT, University of Toronto, Carnegie Mellon University and Bell Labs in the 1970s. CERN started using multi-touch screens as early as 1976 for the controls of the Super Proton Synchrotron. Capacitive multi-touch displays were popularized by Apple's iPhone in 2007. Plural-point awareness may be used to implement additional functionality, such as pinch to zoom or to activate certain subroutines attached to predefined gestures.
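The pinch-to-zoom gesture mentioned above falls out of plural-point awareness almost directly: if two contact points are tracked between frames, the ratio of their separations gives the zoom factor. A minimal sketch, with invented function names and coordinates:

```python
import math

def pinch_scale(prev_points, cur_points):
    """Return the zoom factor implied by two tracked touch points.

    prev_points, cur_points: two (x, y) tuples each, giving the same
    two fingers' positions on consecutive frames.
    """
    def separation(points):
        (x1, y1), (x2, y2) = points
        return math.hypot(x2 - x1, y2 - y1)

    return separation(cur_points) / separation(prev_points)

# Fingers move apart: separation doubles, so content zooms in 2x.
scale = pinch_scale([(100, 100), (200, 100)], [(50, 100), (250, 100)])
```

A factor above 1.0 zooms in, below 1.0 zooms out; production touch frameworks additionally smooth the signal and anchor the zoom at the gesture's midpoint.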

Pen computing Uses a stylus and tablet/touchscreen

Pen computing refers to any computer user interface that uses a pen or stylus and tablet, rather than input devices such as a keyboard or mouse.

Microsoft PixelSense

Microsoft PixelSense is an interactive surface computing platform that allows one or more people to use and touch real-world objects, and share digital content at the same time. The PixelSense platform consists of software and hardware products that combine vision-based multitouch PC hardware, 360-degree multiuser application design, and Windows software to create a natural user interface (NUI).

Surface computing is the use of a specialized computer GUI in which traditional GUI elements are replaced by intuitive, everyday objects. Instead of a keyboard and mouse, the user interacts with a surface. Typically the surface is a touch-sensitive screen, though other surface types like non-flat three-dimensional objects have been implemented as well. It has been said that this more closely replicates the familiar hands-on experience of everyday object manipulation.

In computing, a natural user interface, or NUI, or natural interface is a user interface that is effectively invisible, and remains invisible as the user continuously learns increasingly complex interactions. The word natural is used because most computer interfaces use artificial control devices whose operation has to be learned.

DiamondTouch Multiple person interface device

The DiamondTouch table is a multi-touch, interactive PC interface product from Circle Twelve Inc. It is a human interface device that has the capability of allowing multiple people to interact simultaneously while identifying which person is touching where. The technology was originally developed at Mitsubishi Electric Research Laboratories (MERL) in 2001 and later licensed to Circle Twelve Inc in 2008. The DiamondTouch table is used to facilitate face-to-face collaboration, brainstorming, and decision-making, and users include construction management company Parsons Brinckerhoff, the Methodist Hospital, and the US National Geospatial-Intelligence Agency (NGA).

Microsoft Tablet PC Type of tablet computer

Microsoft Tablet PC is a term coined by Microsoft for tablet computers conforming to a set of specifications announced in 2001 by Microsoft, for a pen-enabled personal computer, conforming to hardware specifications devised by Microsoft and running a licensed copy of Windows XP Tablet PC Edition operating system or a derivative thereof.

Screen–smart device interaction

Screen-Smart Device Interaction (SSI) is a relatively new technology developed as a sub-branch of digital signage.

Surface Hub Brand of interactive whiteboard

The Surface Hub is a brand of interactive whiteboard developed and marketed by Microsoft, as part of the Microsoft Surface family. The Surface Hub is a wall-mounted or roller-stand-mounted device with either a 55-inch (140 cm) 1080p or an 84-inch (210 cm) 4K 120 Hz touchscreen with multi-touch and multi-pen capabilities, running the Windows 10 operating system. The devices are targeted for businesses to use while collaborating and videoconferencing.

Surface Book 3 2-in-1 PC

The Surface Book 3 is the third generation of Microsoft's Surface Book series, and a successor to the Surface Book 2. Like its previous generation, the Surface Book 3 is part of the Microsoft Surface lineup of personal computers. It is a 2-in-1 PC that can be used like a conventional laptop, or detached from its base for use as a separate tablet, with touch and stylus input support in both scenarios. It was announced by Microsoft online alongside the Surface Go 2 on May 6, 2020, and later released for purchase on May 12, 2020.

References

  1. Grossman, Lev (14 June 2007). "Feeling out the Newest Touch Screens". TIME. Archived from the original on 18 June 2007. Retrieved 16 June 2007.
  2. "Microsoft Launches New Product Category: Surface Computing Comes to Life in Restaurants, Hotels, Retail Locations and Casino Resorts". Microsoft. 29 May 2007. Retrieved 16 June 2007.