Developer(s) | Microsoft, Samsung |
---|---|
Initial release | Microsoft Surface 1.0 (April 17, 2008) [1] |
Stable release | Hardware: Samsung SUR40 with Microsoft PixelSense (2012); Software: Microsoft Surface 2.0 (2011) |
Operating system | Microsoft Surface 1.0: Windows Vista (32-bit); Samsung SUR40 with Microsoft PixelSense: Windows 7 Professional for Embedded Systems (64-bit) |
Platform | Microsoft Surface 1.0: Microsoft Surface 1.0; Samsung SUR40 with Microsoft PixelSense: Microsoft Surface 2.0 |
Available in | English, Danish, German, Spanish, French, Italian, Korean, Norwegian, Dutch, Swedish |
Website | www.pixelsense.com |
Microsoft PixelSense (formerly called Microsoft Surface) was an interactive surface computing platform that allowed one or more people to use and touch real-world objects and share digital content at the same time. The PixelSense platform consisted of software and hardware products that combined vision-based multi-touch PC hardware, 360-degree multi-user application design, and Windows software to create a natural user interface (NUI).
Microsoft Surface 1.0, the first version of PixelSense, was announced on May 29, 2007, at the D5 Conference. [2] It shipped to customers in 2008 as an end-to-end solution, with Microsoft producing and selling the combined hardware/software platform. It is a 30-inch (76 cm) 4:3 rear-projection display (1024×768) with an integrated PC and five near-infrared (IR) cameras that can see fingers and objects placed on the display. The display is placed in a horizontal orientation, giving it a table-like appearance. The product and its applications are designed so that several people can approach the display from all sides to simultaneously share and interact with digital content. The cameras enable the product to see a near-IR image of what is placed on the screen, captured approximately 60 times per second. The platform's vision processing identifies three types of objects touching the screen: fingers, tags, and blobs. Raw vision data is also available and can be used in applications. The device is optimized to recognize 52 simultaneous multi-touch points of contact. Microsoft produced both the hardware and software for the Surface 1.0 product. Sales of Microsoft Surface 1.0 were discontinued in 2011 in anticipation of the release of the Samsung SUR40 for Microsoft Surface and the Microsoft Surface 2.0 software platform.
Microsoft and Samsung partnered to announce the second version of PixelSense, the Samsung SUR40 for Microsoft Surface ("SUR40"), at the Consumer Electronics Show (CES) in 2011. [3] Samsung began shipping the new SUR40 hardware with the Microsoft Surface 2.0 software platform to customers in early 2012.
The Samsung SUR40 is a 40-inch (102 cm) 16:9 LED-backlit LCD (1920×1080) with an integrated PC and PixelSense technology, which replaces the cameras used in the previous product. PixelSense technology enabled Samsung and Microsoft to reduce the thickness of the product from 22 in (56 cm) to 4 in (10 cm). The thinner design allows the product to be placed horizontally or mounted vertically, while retaining the ability to recognize fingers, tags, and blobs and to utilize raw vision data.
PixelSense is designed primarily for commercial customers to use in public settings. People interact with the product using direct touch interactions and by placing objects on the screen. Objects of a specific size and shape, or with tag patterns, can be uniquely identified to initiate a preprogrammed response by the computer. The device does not require the use of a traditional PC mouse or keyboard, and generally does not require training or foreknowledge to operate. Additionally, the system is designed to interact with several people at the same time so that content can be shared without the limitations of a single-user device. These combined characteristics place the Microsoft Surface platform in the category of so-called natural user interface (NUI), the apparent successor to the graphical user interface (GUI) systems popularized in the 1980s and 1990s.
Microsoft states that sales of PixelSense are targeted toward the following industry verticals: retail, media and entertainment, healthcare, financial services, education, and government. PixelSense is available for sale in over 40 countries, including the United States, Canada, Austria, Belgium, Denmark, France, Germany, Ireland, Italy, Norway, the Netherlands, Qatar, Saudi Arabia, Spain, Sweden, Switzerland, the United Arab Emirates (UAE), the United Kingdom (UK), Australia, Korea, India, Singapore, and Hong Kong.
The idea for the product was initially conceptualized in 2001 by Steven Bathiche of Microsoft Hardware and Andy Wilson of Microsoft Research. [4]
In October 2001, DJ Kurlander, Michael Kim, Joel Dehlin, Bathiche and Wilson formed a virtual team to bring the idea to the next stage of development.
In 2003, the team presented the idea to the Microsoft Chairman Bill Gates, in a group review. Later, the virtual team was expanded and a prototype nicknamed T1 was produced within a month. The prototype was based on an IKEA table with a hole cut in the top and a sheet of architect vellum used as a diffuser. The team also developed some applications, including pinball, a photo browser, and a video puzzle. Over the next year, Microsoft built more than 85 prototypes. The final hardware design was completed in 2005.
A similar concept was used in the 2002 science-fiction movie Minority Report. As noted in the DVD commentary, director Steven Spielberg stated that the concept of the device came from consultation with Microsoft during the making of the movie. An associate of one of the film's technology consultants from MIT later joined Microsoft to work on the project. [5]
The technology was unveiled under the "Microsoft Surface" name by Microsoft CEO Steve Ballmer on May 29, 2007, at The Wall Street Journal's 'D: All Things Digital' conference in Carlsbad, California. [6] Surface Computing is part of Microsoft's Productivity and Extended Consumer Experiences Group, which is within the Entertainment & Devices division. The first few companies slated to deploy it were Harrah's Entertainment, Starwood, T-Mobile and a distributor, International Game Technology. [7]
On April 17, 2008, AT&T became the first retailer to sell the product. [8] In June 2008 Harrah’s Entertainment launched Microsoft Surface at Rio iBar [9] and Disneyland launched it in Tomorrowland, Innoventions Dream Home. [10] On August 13, 2008, Sheraton Hotels introduced it in their hotel lobbies at 5 locations. [11] On September 8, 2008, MSNBC began using it to work with election maps for the 2008 U.S. Presidential Election on air.
On June 18, 2012, the product was re-branded under the name "Microsoft PixelSense" as a result of the company adopting the Surface brand for its newly unveiled series of tablet PCs. [12] The Samsung SUR40 was discontinued in 2013. [13]
Microsoft highlights four main components of the PixelSense interface: direct interaction, multi-touch contact, a multi-user experience, and object recognition.
Direct interaction refers to the user's ability to simply reach out and touch the interface of an application in order to interact with it, without the need for a mouse or keyboard. Multi-touch contact refers to the ability to have multiple contact points with an interface, unlike with a mouse, where there is only one cursor. Multi-user experience is a benefit of multi-touch: several people can orient themselves on different sides of the surface to interact with an application simultaneously. Object recognition refers to the device's ability to recognize the presence and orientation of tagged objects placed on top of it.
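The finger/tag/blob distinction can be sketched as a simple classification of each detected contact region. Note that the function below is purely illustrative: the size threshold and the decision order are invented for this sketch and are not Microsoft's actual recognition logic.

```python
def classify_contact(area, tag_id=None):
    """Crude, hypothetical classifier for one detected contact region.

    The real platform used vision processing to distinguish fingertips,
    printed tags, and arbitrary "blobs"; the pixel-area threshold here
    is made up for illustration only.
    """
    if tag_id is not None:
        return "tag"      # a recognized printed tag pattern on an object
    if area <= 150:       # roughly a fingertip-sized bright region
        return "finger"
    return "blob"         # any larger, unrecognized object on the screen


# Example: a small contact is treated as a finger, a large untagged one
# as a blob, and anything carrying a decoded tag value as a tag.
print(classify_contact(100))               # finger
print(classify_contact(400))               # blob
print(classify_contact(400, tag_id=0xC3))  # tag
```

An application would then route each class differently, e.g. fingers drive touch gestures while tags trigger a preprogrammed response keyed on the tag value.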
The technology allows non-digital objects to be used as input devices. In one example, a normal paint brush was used to create a digital painting in the software. [14] This is made possible by the fact that, in using cameras for input, the system does not rely on restrictive properties required of conventional touchscreen or touchpad devices such as the capacitance, electrical resistance, or temperature of the tool used (see Touchscreen).
In the old technology, the computer's "vision" was created by a near-infrared, 850-nanometer-wavelength LED light source aimed at the surface. When an object touched the tabletop, the light was reflected to multiple infrared cameras with a net resolution of 1024×768, allowing the system to sense and react to items touching the tabletop.
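The core of such a camera-based pipeline is to find bright regions in each IR frame, since objects touching the diffuser reflect the IR light back strongly. The sketch below, which is an illustrative simplification and not Microsoft's actual implementation, thresholds a frame and groups bright pixels into connected "blobs" with a flood fill:

```python
# Illustrative sketch: detect touch "blobs" in one near-IR camera frame
# by thresholding bright pixels and grouping them into connected
# components. The frame is a 2D list of brightness values (0-255),
# standing in for one 1024x768 IR image.

def find_blobs(frame, threshold=200):
    """Return a list of blobs, each with a pixel count and a centroid."""
    rows, cols = len(frame), len(frame[0])
    seen = [[False] * cols for _ in range(rows)]
    blobs = []
    for r in range(rows):
        for c in range(cols):
            if frame[r][c] >= threshold and not seen[r][c]:
                # Flood-fill one connected component of bright pixels.
                stack, pixels = [(r, c)], []
                seen[r][c] = True
                while stack:
                    y, x = stack.pop()
                    pixels.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and frame[ny][nx] >= threshold
                                and not seen[ny][nx]):
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                cy = sum(p[0] for p in pixels) / len(pixels)
                cx = sum(p[1] for p in pixels) / len(pixels)
                blobs.append({"area": len(pixels), "centroid": (cy, cx)})
    return blobs


# A tiny 6x8 "frame" with two bright contact regions.
frame = [
    [0,   0,   0, 0, 0,   0, 0, 0],
    [0, 250, 250, 0, 0,   0, 0, 0],
    [0, 250, 250, 0, 0, 230, 0, 0],
    [0,   0,   0, 0, 0, 230, 0, 0],
    [0,   0,   0, 0, 0,   0, 0, 0],
    [0,   0,   0, 0, 0,   0, 0, 0],
]
print(len(find_blobs(frame)))  # → 2
```

In the real product this per-frame detection ran roughly 60 times per second, with later stages tracking blobs between frames and classifying them as fingers, tags, or other objects.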
The system ships with basic applications, including photos, music, a virtual concierge, and games, that can be customized for customers. [15]
A feature that comes preinstalled is the "Attract" application, an image of water with leaves and rocks within it. By touching the screen, users can create ripples in the water, much like a real stream. Additionally, the pressure of touch alters the size of the ripple created, and objects placed into the water create a barrier that ripples bounce off, just as they would in a real pond.
The technology used in newer devices allows recognition of fingers, tags, blobs, raw data, and objects placed on the screen, enabling vision-based interaction without the use of cameras. Sensors in the individual pixels of the display register what is touching the screen.
Microsoft provides the free Microsoft Surface 2.0 Software Development Kit (SDK) for developers to create NUI touch applications for devices with PixelSense and Windows 7 touch PCs.
Applications for PixelSense can be written in Windows Presentation Foundation or XNA. The development process is much like normal Windows 7 development, though custom WPF controls were created to suit the system's unique interface. Developers already proficient in WPF can use the SDK to write PixelSense applications for deployment in hotels, casinos, and restaurants. [16]
Microsoft Research has published information about a related technology dubbed SecondLight. [17] Still in the research phase, [18] this project augments secondary images onto physical objects on or above the main display.
A computer mouse is a hand-held pointing device that detects two-dimensional motion relative to a surface. This motion is typically translated into the motion of the pointer on a display, which allows a smooth control of the graphical user interface of a computer.
A touchpad or trackpad is a type of pointing device. Its largest component is a tactile sensor: an electronic device with a flat surface that detects the motion and position of a user's fingers and translates them to 2D motion to control a pointer in a graphical user interface on a computer screen. Touchpads are common on laptop computers, in contrast with desktop computers, where mice are more prevalent. Trackpads are sometimes used on desktops where desk space is scarce. Because trackpads can be made small, they can be found on personal digital assistants (PDAs) and some portable media players. Wireless touchpads are also available as detached accessories.
Gesture recognition is an area of research and development in computer science and language technology concerned with the recognition and interpretation of human gestures. A subdiscipline of computer vision, it employs mathematical algorithms to interpret gestures.
A tangible user interface (TUI) is a user interface in which a person interacts with digital information through the physical environment. The initial name was Graspable User Interface, which is no longer used. The purpose of TUI development is to empower collaboration, learning, and design by giving physical forms to digital information, thus taking advantage of the human ability to grasp and manipulate physical objects and materials.
Desktop Window Manager is the compositing window manager in Microsoft Windows since Windows Vista that enables the use of hardware acceleration to render the graphical user interface of Windows.
A virtual keyboard is a software component that allows the input of characters without the need for physical keys. Interaction with a virtual keyboard happens mostly via a touchscreen interface, but can also take place in a different form when in virtual or augmented reality.
Mobile app development is the act or process by which a mobile app is developed for one or more mobile devices, which can include personal digital assistants (PDA), enterprise digital assistants (EDA), or mobile phones. Such software applications are specifically designed to run on mobile devices, taking numerous hardware constraints into consideration. Common constraints include CPU architecture and speeds, available memory (RAM), limited data storage capacities, and considerable variation in displays and input methods. These applications can be pre-installed on phones during manufacturing or delivered as web applications, using server-side or client-side processing to provide an "application-like" experience within a web browser.
An ultra-mobile PC, or ultra-mobile personal computer (UMPC), is a miniature version of a pen computer, a class of laptop whose specifications were launched by Microsoft and Intel in Spring 2006. Sony had already made a first attempt in this direction in 2004 with its Vaio U series, which was only sold in Asia. UMPCs are generally smaller than subnotebooks, have a TFT display measuring (diagonally) about 12.7 to 17.8 centimetres, are operated like tablet PCs using a touchscreen or a stylus, and can also have a physical keyboard. There is no clear boundary between subnotebooks and ultra-mobile PCs, but UMPCs commonly have major features not found in the common clamshell laptop design, such as small keys on either side of the screen, or a slide-out keyboard.
In computing, multi-touch is technology that enables a surface to recognize the presence of more than one point of contact with the surface at the same time. Multi-touch technology originated at CERN, MIT, the University of Toronto, Carnegie Mellon University, and Bell Labs in the 1970s. CERN started using multi-touch screens as early as 1976 for the controls of the Super Proton Synchrotron. Capacitive multi-touch displays were popularized by Apple's iPhone in 2007. Multi-touch may be used to implement additional functionality, such as pinch to zoom, or to activate certain subroutines attached to predefined gestures using gesture recognition.
Pen computing refers to any computer user-interface using a pen or stylus and tablet, over input devices such as a keyboard or a mouse.
A surface computer is a computer that interacts with the user through the surface of an ordinary object, rather than through a monitor, keyboard, mouse, or other physical hardware.
Surface computing is the use of a specialized computer GUI in which traditional GUI elements are replaced by intuitive, everyday objects. Instead of a keyboard and mouse, the user interacts with a surface. Typically the surface is a touch-sensitive screen, though other surface types like non-flat three-dimensional objects have been implemented as well. It has been said that this more closely replicates the familiar hands-on experience of everyday object manipulation.
In computing, 3D interaction is a form of human-machine interaction where users are able to move and perform interaction in 3D space. Both human and machine process information where the physical position of elements in the 3D space is relevant.
In computing, a natural user interface (NUI) or natural interface is a user interface that is effectively invisible, and remains invisible as the user continuously learns increasingly complex interactions. The word "natural" is used because most computer interfaces use artificial control devices whose operation has to be learned. Examples include voice assistants such as Alexa and Siri, touch and multi-touch interactions on today's mobile phones and tablets, and touch interfaces invisibly integrated into textiles and furniture.
Kinect is a discontinued line of motion sensing input devices produced by Microsoft and first released in 2010. The devices generally contain RGB cameras, and infrared projectors and detectors that map depth through either structured light or time of flight calculations, which can in turn be used to perform real-time gesture recognition and body skeletal detection, among other capabilities. They also contain microphones that can be used for speech recognition and voice control.
The DiamondTouch table is a multi-touch, interactive PC interface product from Circle Twelve Inc. It is a human interface device that has the capability of allowing multiple people to interact simultaneously while identifying which person is touching where. The technology was originally developed at Mitsubishi Electric Research Laboratories (MERL) in 2001 and later licensed to Circle Twelve Inc in 2008. The DiamondTouch table is used to facilitate face-to-face collaboration, brainstorming, and decision-making, and users include construction management company Parsons Brinckerhoff, the Methodist Hospital, and the US National Geospatial-Intelligence Agency (NGA).
Tablet computers and their associated special operating software are an example of pen computing technology, and the development of tablets has deep historical roots. The first patent for a system that recognized handwritten characters by analyzing the handwriting motion was granted in 1914. The first publicly demonstrated system using a tablet and handwriting recognition, instead of a keyboard, for working with a modern digital computer dates to 1956.
Microsoft Tablet PC is a term coined by Microsoft for tablet computers conforming to a set of hardware specifications for a pen-enabled personal computer, devised by Microsoft and announced in 2001, and running a licensed copy of the Windows XP Tablet PC Edition operating system or a derivative thereof.
OpenNI or Open Natural Interaction is an industry-led non-profit organization and open source software project focused on certifying and improving interoperability of natural user interfaces and organic user interfaces for Natural Interaction (NI) devices, applications that use those devices and middleware that facilitates access and use of such devices.
The Aphelion Imaging Software Suite is a software suite that includes three base products (Aphelion Lab, Aphelion Dev, and Aphelion SDK) for addressing image processing and image analysis applications. The suite also includes a set of extension programs to implement specific vertical applications that benefit from imaging techniques.