In computing, multi-touch is technology that enables a surface (a touchpad or touchscreen) to recognize the presence of more than one point of contact with the surface at the same time. Multi-touch technology originated at CERN, [1] MIT, the University of Toronto, Carnegie Mellon University, and Bell Labs in the 1970s. [2] CERN started using multi-touch screens as early as 1976 for the controls of the Super Proton Synchrotron. [3] [4] Capacitive multi-touch displays were popularized by Apple's iPhone in 2007. [5] [6] Multi-touch may be used to implement additional functionality, such as pinch to zoom, or to activate certain subroutines attached to predefined gestures using gesture recognition.
Several uses of the term multi-touch resulted from the rapid developments in this field, with many companies using the term to market older technology that other companies and researchers call gesture-enhanced single-touch, among other terms. Several other similar or related terms attempt to distinguish whether a device can exactly determine, or only approximate, the location of different points of contact, so as to differentiate between the various technological capabilities, but they are often used as synonyms in marketing.
Multi-touch is commonly implemented using capacitive sensing technology in mobile devices and smart devices. A capacitive touchscreen typically consists of a capacitive touch sensor, application-specific integrated circuit (ASIC) controller and digital signal processor (DSP) fabricated from CMOS (complementary metal–oxide–semiconductor) technology. A more recent alternative approach is optical touch technology, based on image sensor technology.
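The signal chain just described can be illustrated with a short sketch. The following is a minimal illustration, not the firmware of any particular controller: it assumes a hypothetical grid of raw capacitance readings, subtracts a stored baseline, and reports every cell whose change exceeds a threshold as a touch point.

```python
# Minimal sketch of multi-touch detection on a capacitance grid.
# The grid values, baseline, and threshold are illustrative assumptions,
# not the behaviour of any specific ASIC or DSP.

THRESHOLD = 40  # minimum capacitance delta (arbitrary units) counting as a touch

def detect_touches(frame, baseline):
    """Return (row, col) cells where capacitance rose above the baseline."""
    touches = []
    for r, (frow, brow) in enumerate(zip(frame, baseline)):
        for c, (f, b) in enumerate(zip(frow, brow)):
            if f - b > THRESHOLD:
                touches.append((r, c))
    return touches

# Example: a 4x4 frame with two fingers down.
baseline = [[100] * 4 for _ in range(4)]
frame = [row[:] for row in baseline]
frame[0][1] += 55   # first finger
frame[3][2] += 60   # second finger
print(detect_touches(frame, baseline))  # [(0, 1), (3, 2)]
```

A real controller additionally filters noise and groups adjacent cells into single touch points; this sketch keeps only the thresholding step.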
In computing, multi-touch is technology that enables a touchpad or touchscreen to recognize more than one [7] [8] or more than two [9] points of contact with the surface. Apple popularized the term "multi-touch" in 2007, using it to describe functionality such as pinch to zoom and the activation of certain subroutines attached to predefined gestures.
The two different uses of the term resulted from the rapid developments in this field, with many companies using the term to market older technology that other companies and researchers call gesture-enhanced single-touch, among other terms. [10] [11] Several other similar or related terms attempt to distinguish whether a device can exactly determine, or only approximate, the location of different points of contact, so as to differentiate between the various technological capabilities, [11] but they are often used as synonyms in marketing.
The use of touchscreen technology predates both multi-touch technology and the personal computer. Early synthesizer and electronic instrument builders like Hugh Le Caine and Robert Moog experimented with using touch-sensitive capacitance sensors to control the sounds made by their instruments. [12] IBM began building the first touch screens in the late 1960s. In 1972, Control Data released the PLATO IV computer, an infrared terminal used for educational purposes, which employed single-touch points in a 16×16 array user interface. These early touchscreens only registered one point of touch at a time. On-screen keyboards (a well-known feature today) were thus awkward to use, because key-rollover and holding down a shift key while typing another were not possible. [13]
Exceptions to these were a "cross-wire" multi-touch reconfigurable touchscreen keyboard/display developed at the Massachusetts Institute of Technology in the early 1970s [14] and the 16-button capacitive multi-touch screen developed at CERN in 1972 for the controls of the Super Proton Synchrotron, which was then under construction. [15]
In 1976, a new x-y capacitive screen, based on the capacitance touch screens developed in 1972 by Danish electronics engineer Bent Stumpe, was developed at CERN. [1] [17] This technology, which allowed the exact location of the different touch points to be determined, was used to develop a new type of human–machine interface (HMI) for the control room of the Super Proton Synchrotron particle accelerator. [18] [19] [20] In a handwritten note dated 11 March 1972, [21] Stumpe presented his proposed solution: a capacitive touch screen with a fixed number of programmable buttons presented on a display. The screen was to consist of a set of capacitors etched into a film of copper on a sheet of glass, each capacitor constructed so that a nearby flat conductor, such as the surface of a finger, would increase its capacitance by a significant amount. The capacitors were to consist of fine lines etched in copper on a sheet of glass – fine enough (80 μm) and sufficiently far apart (80 μm) to be invisible. [22] In the final device, a simple lacquer coating prevented the fingers from actually touching the capacitors. In the same year, MIT described a keyboard with variable graphics capable of multi-touch detection. [14]
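Stumpe's fixed-button design lends itself to a very simple read-out model. The sketch below is a hedged illustration under assumed values: each on-screen button is one discrete capacitor, and a press is flagged when its measured capacitance rises well above its untouched value. The capacitance figures and the 20% margin are invented for the example, not CERN's actual hardware parameters.

```python
# Sketch of fixed-button capacitive sensing: a nearby finger raises a
# button capacitor's capacitance by a significant amount. All values
# are illustrative assumptions.

untouched = {"START": 10.0, "STOP": 10.2, "RESET": 9.8}  # picofarads (assumed)

def pressed_buttons(measured, margin=1.2):
    """Return buttons whose capacitance exceeds the untouched value by `margin`x."""
    return [name for name, c in measured.items()
            if c > untouched[name] * margin]

print(pressed_buttons({"START": 14.1, "STOP": 10.1, "RESET": 9.9}))  # ['START']
```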
In the early 1980s, the University of Toronto's Input Research Group was among the earliest to explore the software side of multi-touch input systems. [23] A 1982 system at the University of Toronto used a frosted-glass panel with a camera placed behind the glass. When one or more fingers pressed on the glass, the camera would detect the action as one or more black spots on an otherwise white background, allowing it to be registered as an input. Since the size of a spot depended on pressure (how hard the person was pressing on the glass), the system was somewhat pressure-sensitive as well. [12] Notably, this system was input-only and not able to display graphics.
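The Toronto approach reduces touch detection to image processing. As a rough sketch (with an invented 8-bit grayscale image and threshold), the following finds dark connected regions and reports each blob's centroid and area, with area standing in for pressure as described above.

```python
# Illustrative sketch of camera-behind-glass sensing: fingers appear as
# dark connected regions on a bright background; blob area approximates
# pressure. The image format and threshold are assumptions.

def find_blobs(image, dark_threshold=60):
    """Flood-fill dark pixels into blobs; return ((x, y) centroid, area) per blob."""
    h, w = len(image), len(image[0])
    seen = [[False] * w for _ in range(h)]
    blobs = []
    for y in range(h):
        for x in range(w):
            if image[y][x] < dark_threshold and not seen[y][x]:
                stack, pixels = [(y, x)], []
                seen[y][x] = True
                while stack:
                    cy, cx = stack.pop()
                    pixels.append((cy, cx))
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w and not seen[ny][nx] \
                                and image[ny][nx] < dark_threshold:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                area = len(pixels)
                mean_x = sum(p[1] for p in pixels) / area
                mean_y = sum(p[0] for p in pixels) / area
                blobs.append(((mean_x, mean_y), area))  # larger area ~ firmer press
    return blobs

img = [[255] * 6 for _ in range(5)]
for y, x in [(1, 1), (1, 2), (2, 1), (2, 2), (3, 4)]:
    img[y][x] = 20
print(find_blobs(img))  # a 4-pixel (firmer) press and a 1-pixel (lighter) press
```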
In 1983, Bell Labs at Murray Hill published a comprehensive discussion of touch-screen based interfaces, though it made no mention of multiple fingers. [24] In the same year, the video-based Video Place/Video Desk system of Myron Krueger was influential in the development of multi-touch gestures such as pinch-to-zoom, though this system had no touch interaction itself. [25] [26]
By 1984, both Bell Labs and Carnegie Mellon University had working multi-touch-screen prototypes – handling both input and graphics – that responded interactively to multiple finger inputs. [27] [28] The Bell Labs system was based on capacitive coupling of fingers, whereas the CMU system was optical. In 1985, the canonical multi-touch pinch-to-zoom gesture was demonstrated, with coordinated graphics, on CMU's system. [29] [30] In October 1985, Steve Jobs signed a non-disclosure agreement to tour CMU's Sensor Frame multi-touch lab. [31] In 1990, Sears et al. published a review of academic research on single- and multi-touch touchscreen human–computer interaction of the time. It described single-touch gestures such as rotating knobs, swiping the screen to activate a switch (or a U-shaped gesture for a toggle switch), and touchscreen keyboards, including a study showing that users could type at 25 words per minute on a touchscreen keyboard, compared with 58 words per minute on a standard keyboard, with multi-touch hypothesized to improve data entry rates. Multi-touch gestures such as selecting a range of a line, connecting objects, and a "tap-click" gesture to select while maintaining location with another finger were also described. [32]
In 1991, Pierre Wellner advanced the topic by publishing work on his multi-touch "Digital Desk", which supported multi-finger and pinching motions. [33] [34] Various companies expanded upon these inventions at the beginning of the twenty-first century.
Between 1999 and 2005, the company FingerWorks developed various multi-touch technologies, including TouchStream keyboards and the iGesture Pad. In the early 2000s, Alan Hedge, professor of human factors and ergonomics at Cornell University, published several studies about this technology. [35] [36] [37] In 2005, Apple acquired FingerWorks and its multi-touch technology. [38]
In 2004, the French start-up JazzMutant developed the Lemur Input Device, a music controller that in 2005 became the first commercial product to feature a proprietary transparent multi-touch screen, allowing direct, ten-finger manipulation on the display. [39] [40]
In January 2007, multi-touch technology became mainstream with the iPhone; in its iPhone announcement, Apple even stated that it had "invented multi touch". [41] However, both the function and the term predate the announcement and Apple's patent requests, except in the area of capacitive mobile screens, which did not exist before FingerWorks/Apple's technology (FingerWorks filed patents from 2001 to 2005; [42] subsequent multi-touch refinements were patented by Apple [43] ).
However, the U.S. Patent and Trademark Office declared that the "pinch-to-zoom" functionality was anticipated by U.S. Patent No. 7,844,915, [44] [45] relating to gestures on touch screens, filed by Bran Ferren and Daniel Hillis in 2005, as was inertial scrolling, [46] thus invalidating key claims of Apple's patent.
In 2001, Microsoft began developing its table-top touch platform, Microsoft PixelSense (formerly Surface), which interacts with both the user's touch and their electronic devices; it became commercially available on May 29, 2007. Similarly, in 2001, Mitsubishi Electric Research Laboratories (MERL) began development of a multi-touch, multi-user system called DiamondTouch.
In 2008, DiamondTouch became a commercial product. It is also based on capacitance, but is able to differentiate between multiple simultaneous users – or rather, between the chairs in which the users are seated or the floor pads on which they are standing. In 2007, NORTD Labs offered CUBIT, an open-source multi-touch system.
Small-scale touch devices rapidly became commonplace in 2008. The number of touch screen telephones was expected to increase from 200,000 shipped in 2006 to 21 million in 2012. [47]
In May 2015, Apple was granted a patent for a "fusion keyboard", which turns individual physical keys into multi-touch buttons. [48]
Apple has retailed and distributed numerous products using multi-touch technology, most prominently its iPhone smartphone and iPad tablet. Apple also holds several patents related to the implementation of multi-touch in user interfaces, [49] although the legitimacy of some of these patents has been disputed. [50] Apple additionally attempted to register "Multi-touch" as a trademark in the United States, but its request was denied by the United States Patent and Trademark Office because it considered the term generic. [51]
Multi-touch sensing and processing occur via an ASIC sensor that is attached to the touch surface. Usually, separate companies make the ASIC and the screen that combine into a touch screen; by contrast, a touchpad's surface and ASIC are usually manufactured by the same company. In recent years, large companies have expanded into the growing multi-touch industry, with systems designed for everything from the casual user to multinational organizations.
It is now common for laptop manufacturers to include multi-touch touchpads on their laptops, and for tablet computers to respond to touch input rather than traditional stylus input; multi-touch is supported by many recent operating systems.
A few companies are focusing on large-scale surface computing rather than personal electronics, either large multi-touch tables or wall surfaces. These systems are generally used by government organizations, museums, and companies as a means of information or exhibit display.[ citation needed ]
Multi-touch has been implemented in several different ways, depending on the size and type of interface. The most popular forms are mobile devices, tablets, touchtables, and walls. Both touchtables and touch walls project an image through acrylic or glass, and then back-light the image with LEDs.
Touch surfaces can also be made pressure-sensitive by the addition of a pressure-sensitive coating that flexes differently depending on how firmly it is pressed, altering the reflection. [52]
Handheld technologies use a panel that carries an electrical charge. When a finger touches the screen, the touch disrupts the panel's electrical field. The disruption is registered as a computer event (gesture) and may be sent to the software, which may then initiate a response to the gesture event. [53]
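The event flow described above can be sketched in a few lines. The event type and handler registration below are generic illustrations, not any platform's actual touch API.

```python
# Sketch of the touch-event flow: the controller reports a field disruption
# as an event, and registered software handlers respond to it.

from dataclasses import dataclass

@dataclass
class TouchEvent:
    kind: str       # "down", "move", or "up" (assumed event names)
    finger_id: int  # which finger, for multi-touch
    x: float
    y: float

handlers = []

def on_touch(handler):
    """Register a callback to be invoked for every touch event."""
    handlers.append(handler)
    return handler

def dispatch(event):
    for handler in handlers:
        handler(event)

@on_touch
def log_gesture(event):
    print(f"finger {event.finger_id}: {event.kind} at ({event.x}, {event.y})")

dispatch(TouchEvent("down", finger_id=0, x=10.0, y=20.0))
```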
In the past few years, several companies have released products that use multi-touch. In an attempt to make the expensive technology more accessible, hobbyists have also published methods of constructing DIY touchscreens. [54]
Capacitive technologies include: [55]
Resistive technologies include: [55]
Optical touch technology is based on image sensor technology. When a finger or an object touches the surface, it causes light to scatter; the reflection is caught by sensors or cameras that send the data to software, which determines the response to the touch depending on the type of reflection measured.
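As a rough illustration of that pipeline, the sketch below locates a touch as the intensity-weighted centroid of the bright scattered-light region and crudely classifies the touch by peak brightness; the thresholds and the firm/light split are assumptions, not a description of any shipping product.

```python
# Hedged sketch of optical touch sensing: a touch scatters light toward the
# camera, appearing as a bright region on a dark background.

def locate_touch(image, bright_threshold=200):
    """Return (x, y, kind) for the scattered-light region, or None if untouched."""
    bright = [(x, y, v)
              for y, row in enumerate(image)
              for x, v in enumerate(row)
              if v >= bright_threshold]
    if not bright:
        return None
    total = sum(v for _, _, v in bright)
    x = sum(px * v for px, _, v in bright) / total  # intensity-weighted centroid
    y = sum(py * v for _, py, v in bright) / total
    peak = max(v for _, _, v in bright)
    kind = "firm" if peak > 240 else "light"        # crude reflection "type"
    return x, y, kind

frame = [[0] * 5 for _ in range(5)]
frame[2][3] = 250
print(locate_touch(frame))  # (3.0, 2.0, 'firm')
```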
Optical technologies include: [55]
Acoustic and radio-frequency wave-based technologies include: [55]
Multi-touch touchscreen gestures enable predefined motions to interact with the device and software. An increasing number of devices like smartphones, tablet computers, laptops or desktop computers have functions that are triggered by multi-touch gestures.
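Pinch to zoom, the most familiar of these gestures, reduces to tracking the distance between two contact points. The sketch below is a generic illustration, not any vendor's implementation: the zoom factor is the ratio of the fingers' current separation to their separation when the gesture began.

```python
# Minimal pinch-to-zoom sketch: scale = current finger separation
# divided by the separation at gesture start.

import math

def distance(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

class PinchRecognizer:
    def __init__(self, p1, p2):
        self.start = distance(p1, p2)  # finger separation at gesture start

    def scale(self, p1, p2):
        """Scale factor > 1 means zoom in; < 1 means zoom out."""
        return distance(p1, p2) / self.start

pinch = PinchRecognizer((100, 100), (200, 100))  # fingers start 100 px apart
print(pinch.scale((80, 100), (220, 100)))        # 1.4 -> zoom in
```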
Years before it was a viable consumer product, popular culture portrayed potential uses of multi-touch technology in the future, including in several installments of the Star Trek franchise.
In the 1982 Disney sci-fi film Tron, a device similar to the Microsoft Surface was shown. It took up an executive's entire desk and was used to communicate with the Master Control Program.
In the 2002 film Minority Report , Tom Cruise uses a set of gloves that resemble a multi-touch interface to browse through information. [57]
In the 2005 film The Island, another form of multi-touch computer was seen: the professor, played by Sean Bean, has a multi-touch desktop to organize files, based on an early version of Microsoft Surface (not to be confused with the tablet computers that now bear that name).
In 2007, the television series CSI: Miami introduced both surface and wall multi-touch displays in its sixth season.
Multi-touch technology can be seen in the 2008 James Bond film Quantum of Solace , where MI6 uses a touch interface to browse information about the criminal Dominic Greene. [58]
In the 2008 film The Day the Earth Stood Still , Microsoft's Surface was used. [59]
The television series NCIS: Los Angeles, which premiered in 2009, makes use of multi-touch surfaces and wall panels as part of an initiative to go digital.
In a 2008 episode of the television series The Simpsons, Lisa Simpson travels to the underwater headquarters of Mapple to visit Steve Mobbs, who is shown performing multiple multi-touch hand gestures on a large touch wall.
In the 2009 film District 9, the interface used to control the alien ship features similar technology. [60]
10/GUI is a proposed new user interface paradigm. Created in 2009 by R. Clayton Miller, it combines multi-touch input with a new windowing manager.
It splits the touch surface away from the screen, so that user fatigue is reduced and the user's hands do not obstruct the display. [61] Instead of placing windows all over the screen, the windowing manager, Con10uum, uses a linear paradigm, with multi-touch used to navigate between and arrange the windows. [62] An area at the right side of the touch screen brings up a global context menu, and a similar strip at the left side brings up application-specific menus.
An open source community preview of the Con10uum window manager was made available in November 2009. [63]
In computing, a pointing device gesture or mouse gesture is a way of combining pointing device or finger movements and clicks that the software recognizes as a specific computer event and responds to accordingly. They can be useful for people who have difficulties typing on a keyboard. For example, in a web browser, a user can navigate to the previously viewed page by pressing the right pointing device button, moving the pointing device briefly to the left, then releasing the button.
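The back-navigation gesture just described can be recognized with nothing more than the pointer position at button-down and button-up. In the hedged sketch below, the 30-pixel travel threshold and the callback are invented for illustration.

```python
# Sketch of the "back" mouse gesture: right button down, a net leftward
# drag, then release. Threshold and callback are illustrative assumptions.

BACK_THRESHOLD = 30  # minimum leftward travel, in pixels

class MouseGestureRecognizer:
    def __init__(self, on_back):
        self.on_back = on_back
        self.start_x = None

    def right_button_down(self, x, y):
        self.start_x = x

    def right_button_up(self, x, y):
        if self.start_x is not None and self.start_x - x >= BACK_THRESHOLD:
            self.on_back()  # net leftward drag -> navigate back
        self.start_x = None

recognizer = MouseGestureRecognizer(on_back=lambda: print("navigate back"))
recognizer.right_button_down(200, 150)
recognizer.right_button_up(150, 152)  # 50 px left -> prints "navigate back"
```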
A pointing device is a human interface device that allows a user to input spatial data to a computer. Graphical user interfaces (GUI) and CAD systems allow the user to control and provide data to the computer using physical gestures by moving a hand-held mouse or similar device across the surface of the physical desktop and activating switches on the mouse. Movements of the pointing device are echoed on the screen by movements of the pointer and other visual changes. Common gestures are point and click and drag and drop.
A touchpad or trackpad is a type of pointing device. Its largest component is a tactile sensor: an electronic device with a flat surface that detects the motion and position of a user's fingers and translates them to 2D motion, to control a pointer in a graphical user interface on a computer screen. Touchpads are common on laptop computers, in contrast to desktop computers, where mice are more prevalent. Trackpads are sometimes used on desktops where desk space is scarce. Because trackpads can be made small, they can be found on personal digital assistants (PDAs) and some portable media players. Wireless touchpads are also available as detached accessories.
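The translation from finger movement to pointer movement is, at its simplest, a scaled relative mapping. The sensitivity factor and screen clamping in this sketch are illustrative assumptions; real touchpads add acceleration curves and palm rejection.

```python
# Sketch of a touchpad's relative-motion mapping: finger deltas become
# scaled pointer deltas, clamped to the screen.

SENSITIVITY = 2.5             # assumed scaling factor
SCREEN_W, SCREEN_H = 1920, 1080

def move_pointer(pointer, prev_finger, finger):
    """Return the new pointer position for one finger-movement sample."""
    dx = (finger[0] - prev_finger[0]) * SENSITIVITY
    dy = (finger[1] - prev_finger[1]) * SENSITIVITY
    x = min(max(pointer[0] + dx, 0), SCREEN_W - 1)
    y = min(max(pointer[1] + dy, 0), SCREEN_H - 1)
    return (x, y)

print(move_pointer((960, 540), (40.0, 30.0), (44.0, 28.0)))  # (970.0, 535.0)
```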
Cirque Corporation is an American company which developed and commercialized the first successful capacitive touchpad, now widely used in notebook computers. Cirque develops and sells a variety of touch input products, both in original equipment manufacturer and end-user retail form. Cirque was founded in 1991 by George E. Gerpheide, PhD, and James L. O'Callaghan, to commercialize the GlidePoint technology invented in the 1980s by Gerpheide.
A touchscreen is a type of display that can detect touch input from a user. It consists of both an input device and an output device. The touch panel is typically layered on the top of the electronic visual display of a device. Touchscreens are commonly found in smartphones, tablets, laptops, and other electronic devices.
Synaptics Incorporated is a publicly traded San Jose, California-based developer of human interface (HMI) hardware and software, including touchpads for computer laptops; touch, display driver, and fingerprint biometrics technology for smartphones; and touch, video and far-field voice technology for smart home devices and automobiles. Synaptics sells its products to original equipment manufacturers (OEMs) and display manufacturers.
In human–computer interaction, a cursor is an indicator used to show the current position on a computer monitor or other display device that will respond to input.
A virtual keyboard is a software component that allows the input of characters without the need for physical keys. Interaction with a virtual keyboard happens mostly via a touchscreen interface, but can also take place in a different form when in virtual or augmented reality.
Pen computing refers to any computer user-interface using a pen or stylus and tablet, over input devices such as a keyboard or a mouse.
Microsoft PixelSense was an interactive surface computing platform that allowed one or more people to use and touch real-world objects, and share digital content at the same time. The PixelSense platform consisted of software and hardware products that combined vision-based multi-touch PC hardware, 360-degree multi-user application design, and Windows software to create a natural user interface (NUI).
A text entry interface or text entry device is an interface that is used to enter text information in an electronic device. A commonly used device is a mechanical computer keyboard. Most laptop computers have an integrated mechanical keyboard, and desktop computers are usually operated primarily using a keyboard and mouse. Devices such as smartphones and tablets mean that interfaces such as virtual keyboards and voice recognition are becoming more popular as text entry systems.
A resistive touchscreen is a type of touch-sensitive display that works by detecting pressure applied to the screen. It is composed of two flexible sheets coated with a resistive material and separated by an air gap or microdots.
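A common way to read such a screen is the four-wire scheme: drive a voltage gradient across one sheet and use the other sheet to tap the resulting voltage divider at the contact point, once per axis. The sketch below is a hedged model in which the drive_plate and read_adc functions are hypothetical stand-ins for real hardware access.

```python
# Hedged sketch of a four-wire resistive read-out. The hardware hooks and
# the 12-bit ADC are assumptions for illustration.

ADC_MAX = 4095  # 12-bit ADC (assumed)

def drive_plate(axis):
    """Stub: would energize one resistive sheet (set a voltage gradient)."""
    pass

def read_adc(axis):
    """Stub: would sample the opposite sheet; fixed demo values here."""
    return {"Y": 2048, "X": 1024}[axis]

def read_position(width, height):
    """Return (x, y) in pixels; pressing the sheets together closes the circuit."""
    drive_plate("X")        # gradient across the horizontal axis
    raw_x = read_adc("Y")   # the other sheet taps the divider at the touch
    drive_plate("Y")
    raw_y = read_adc("X")
    return (raw_x / ADC_MAX * width, raw_y / ADC_MAX * height)

print(read_position(320, 240))  # approximately (160.0, 60.0)
```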
FingerWorks was a gesture recognition company based in the United States, known mainly for its TouchStream multi-touch keyboard. Founded by John Elias and Wayne Westerman of the University of Delaware in 1998, it produced a line of multi-touch products including the iGesture Pad and the TouchStream keyboard, which were particularly helpful for people suffering from RSI and other medical conditions. The keyboards became the basis for the iPhone's touchscreen when the company's assets were acquired by Apple Inc. in early 2005.
In electrical engineering, capacitive sensing is a technology, based on capacitive coupling, that can detect and measure anything that is conductive or has a dielectric constant different from air. Many types of sensors use capacitive sensing, including sensors to detect and measure proximity, pressure, position and displacement, force, humidity, fluid level, and acceleration. Human interface devices based on capacitive sensing, such as touchpads, can replace the computer mouse. Digital audio players, mobile phones, and tablet computers will sometimes use capacitive sensing touchscreens as input devices. Capacitive sensors can also replace mechanical buttons.
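One widespread measurement technique charges the sensor pad through a resistor and times how long it takes to reach a reference voltage; a finger adds capacitance and measurably slows the charge. The component values and detection margin below are assumptions for illustration, not a specific sensor's datasheet figures.

```python
# Illustrative RC model of capacitive touch sensing: charge time grows
# with capacitance, and a finger adds capacitance to the pad.

import math

R = 1_000_000      # 1 megaohm charging resistor (assumed)
C_PAD = 10e-12     # 10 pF untouched pad (assumed)
C_FINGER = 5e-12   # extra capacitance from a finger (assumed)

def charge_time(c, level=0.63):
    """Seconds for the pad to reach `level` of the supply voltage (RC model)."""
    return -R * c * math.log(1 - level)

untouched = charge_time(C_PAD)
touched = charge_time(C_PAD + C_FINGER)
print(touched > untouched * 1.3)  # True: the finger measurably slows charging
```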
In computing, a natural user interface (NUI) or natural interface is a user interface that is effectively invisible, and remains invisible as the user continuously learns increasingly complex interactions. The word "natural" is used because most computer interfaces use artificial control devices whose operation has to be learned. Examples include voice assistants, such as Alexa and Siri, touch and multitouch interactions on today's mobile phones and tablets, but also touch interfaces invisibly integrated into the textiles of furniture.
In computing, an input device is a piece of equipment used to provide data and control signals to an information processing system, such as a computer or information appliance. Examples of input devices include keyboards, computer mice, scanners, cameras, joysticks, and microphones.
In computing, a stylus is a small pen-shaped instrument whose tip position on a computer monitor can be detected. It is used to draw, or make selections by tapping. While devices with touchscreens such as laptops, smartphones, game consoles, and graphics tablets can usually be operated with a fingertip, a stylus can provide more accurate and controllable input.
Microsoft Tablet PC is a term coined by Microsoft for tablet computers conforming to hardware specifications devised by Microsoft and announced in 2001: a pen-enabled personal computer running a licensed copy of the Windows XP Tablet PC Edition operating system or a derivative thereof.
Force Touch is a haptic pressure-sensing technology developed by Apple Inc. that enables trackpads and touchscreens to sense the amount of force being applied to their surfaces. Software that uses Force Touch can distinguish between various levels of force for user interaction purposes. Force Touch was first unveiled on September 9, 2014, during the introduction of Apple Watch. Starting with the Apple Watch, Force Touch has been incorporated into many Apple products, including MacBooks and the Magic Trackpad 2.
Daniel Wigdor is a Canadian computer scientist, entrepreneur, investor, expert witness and author. He is the associate chair of Industrial Relations as well as a professor in the Department of Computer Science at the University of Toronto.