Pointing device gesture

The mouse gesture for "back" in Opera – the user holds down the right mouse button, moves the mouse left, and releases the right mouse button.

In computing, a pointing device gesture or mouse gesture (or simply gesture) is a way of combining pointing device or finger movements and clicks that software recognizes as a specific command and responds to accordingly. Gestures can be useful for people who have difficulty typing on a keyboard. For example, in a web browser, a user can navigate to the previously viewed page by pressing the right pointing device button, moving the pointing device briefly to the left, then releasing the button.
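The "back" gesture described above can be recognized with a simple threshold on horizontal movement while the right button is held. A minimal sketch, assuming hypothetical event hooks and an illustrative distance threshold (real software would wire these to platform mouse events):

```python
MIN_DISTANCE = 30  # pixels the pointer must travel to count as a gesture (assumed value)

class BackGestureRecognizer:
    """Recognizes 'right button down, move left, release' as a back command."""

    def __init__(self):
        self.start_x = None

    def on_right_button_down(self, x, y):
        self.start_x = x  # begin tracking while the button is held

    def on_right_button_up(self, x, y):
        if self.start_x is None:
            return None
        dx = x - self.start_x
        self.start_x = None
        # a leftward movement past the threshold is read as "back";
        # anything shorter falls through to an ordinary right click
        if dx <= -MIN_DISTANCE:
            return "back"
        return None
```

A press at x=100 followed by a release at x=40 yields "back", while a release at x=95 is treated as a plain right click.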

History

The first pointing device gesture, the "drag", [1] was introduced by Apple to replace a dedicated "move" button on mice shipped with its Macintosh and Lisa computers. Dragging involves holding down a pointing device button while moving the pointing device; the software interprets this as an action distinct from separate clicking and moving behaviors. Unlike most pointing device gestures, it does not involve the tracing of any particular shape. Although the "drag" behavior has been adopted in a wide variety of software packages, few other gestures have been as successful.
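Drag recognition typically distinguishes a drag from a plain click by whether the pointer travels beyond a small threshold while the button is held. A minimal sketch (the threshold value is an assumption; real systems often expose it as a setting):

```python
DRAG_THRESHOLD = 4  # pixels of travel before a press becomes a drag (assumed value)

class ClickOrDrag:
    """Classifies a button press/release pair as a click or a drag."""

    def __init__(self):
        self.pressed_at = None
        self.dragging = False

    def press(self, x, y):
        self.pressed_at = (x, y)
        self.dragging = False

    def move(self, x, y):
        if self.pressed_at is not None and not self.dragging:
            px, py = self.pressed_at
            # button held + movement past the threshold = drag, not click
            if abs(x - px) > DRAG_THRESHOLD or abs(y - py) > DRAG_THRESHOLD:
                self.dragging = True

    def release(self, x, y):
        result = "drag" if self.dragging else "click"
        self.pressed_at, self.dragging = None, False
        return result
```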

Current use

As of 2005, most programs do not support gestures other than the drag operation. Each program that recognizes pointing device gestures does so in its own way, sometimes allowing very short mouse movements to be recognized as gestures, and sometimes requiring precise emulation of a particular movement pattern (e.g. a circle). Some implementations allow users to customize these factors.
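One common recognition approach is to quantize the pointer path into a string of stroke directions and look that string up in a gesture table. A minimal sketch (the jitter threshold and the gesture bindings are illustrative assumptions):

```python
import math

MIN_STEP = 20  # ignore movements shorter than this many pixels (assumed value)

def quantize(points):
    """Reduce a pointer path to a string of stroke directions, e.g. 'L' or 'DR'.
    Consecutive identical directions are collapsed; short jitter is ignored."""
    dirs = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        dx, dy = x1 - x0, y1 - y0
        if math.hypot(dx, dy) < MIN_STEP:
            continue
        if abs(dx) > abs(dy):
            d = "R" if dx > 0 else "L"
        else:
            d = "D" if dy > 0 else "U"  # screen y grows downward
        if not dirs or dirs[-1] != d:
            dirs.append(d)
    return "".join(dirs)

# example bindings, in the style of Opera's defaults (hypothetical table)
GESTURES = {"L": "back", "R": "forward", "DR": "close-tab"}
```

A leftward stroke quantizes to "L" and maps to "back"; a down-then-right stroke quantizes to "DR".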

Some video games have used gestures. For example, in the Myth real-time tactics series, originally created by Bungie, players use them to order battlefield units to face in a desired direction. Another game using gestures is Lionhead's Black & White. The game Arx Fatalis uses mouse gestures for drawing runes in the air to cast spells. Several Nintendo Wii games take advantage of such a system. Ōkami uses a system similar to mouse gestures; the player can enter a drawing mode in which the shape they create (circle, lightning bolt, line, etc.) performs a function in the game, such as creating a bomb or changing the time from night to day. Other examples of computer games that use mouse gestures are Die by the Sword and Silver, where basic mouse gestures map directly to attack moves in real-time combat, along with MX vs. ATV: Reflex, which implements its titular rider "reflex" system with mouse gestures. [2]

The Opera web browser has recognized gestures since version 5.10 (April 2001), although the feature was disabled by default. Opera also supports mouse chording, which serves a similar function but does not require mouse movement. The first browser to use advanced mouse gestures (in 2002) was Maxthon, whose highly customizable interface allowed almost every action to be assigned to one of 52 mouse gestures and a few mouse chords. Several mouse gesture extensions are also available for the Mozilla Firefox browser; these extensions use gestures almost identical to Opera's.

Some tools provide mouse gesture support in any application for Microsoft Windows. K Desktop Environment 3 has included universal mouse gesture support since version 3.2.

Windows Aero provides three mouse gestures called Aero Peek, Aero Shake and Aero Snap. See the corresponding article for a description.

Touchpad and touchscreen gestures

Touchscreens of tablet-type devices, such as the iPad, use multi-touch technology, with gestures acting as the main form of user interface. Many touchpads, which in laptops replace the traditional mouse, have similar gesture support. For example, a common gesture is to move two fingers upwards or downwards to scroll the currently active page. The rising popularity of touchscreen interfaces has made gestures a more standard feature in computing. Windows 7 introduced touchscreen support and touchpad gestures. [3] Its successor, Windows 8, is designed to run on both traditional desktops and mobile devices, and hence gestures are enabled by default where the hardware allows it.[citation needed]

Related to gestures are touchpad hotspots, where a particular region of the touchpad has additional functionality. For example, a common hotspot feature is the far right side of the touchpad, which will scroll the active page if a finger is dragged down or up it.
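Such a hotspot can be implemented by checking where a drag begins. A minimal sketch (the pad dimensions and strip width are hypothetical, device-specific values):

```python
PAD_WIDTH = 1000      # touchpad coordinate range (assumed, device-specific)
HOTSPOT_WIDTH = 100   # width of the scroll strip on the right edge (assumed)

def interpret_drag(start_x, dy):
    """Return a scroll event if the drag began inside the right-edge hotspot,
    otherwise treat the vertical motion as ordinary pointer movement."""
    if start_x >= PAD_WIDTH - HOTSPOT_WIDTH:
        return ("scroll", dy)  # vertical motion becomes a scroll amount
    return ("move", dy)
```

A drag starting at x=950 scrolls the active page; the same drag starting at x=500 just moves the pointer.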

Multi-touch touchscreen gestures are predefined motions used to interact with multi-touch devices. An increasing number of products like smartphones, tablets, laptops or desktop computers have functions that are triggered by multi-touch gestures. Common touchscreen gestures include:

Tap
Double Tap
Long Press
Scroll
Pan
Flick
Two Finger Tap
Two Finger Scroll
Pinch
Zoom
Rotate
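Pinch, zoom, and rotate fall out of the geometry of two touch points: the zoom factor is the ratio of the distances between the fingers before and after the movement, and the rotation is the change in the angle of the line joining them. A minimal sketch, with hypothetical coordinates:

```python
import math

def pinch_and_rotate(p0, p1, q0, q1):
    """Given the previous positions (p0, p1) and current positions (q0, q1)
    of two touch points, return the zoom scale factor and the rotation angle
    in degrees implied by the movement."""
    def dist(a, b):
        return math.hypot(b[0] - a[0], b[1] - a[1])

    def angle(a, b):
        return math.atan2(b[1] - a[1], b[0] - a[0])

    scale = dist(q0, q1) / dist(p0, p1)  # pinch in: < 1, spread out: > 1
    rotation = math.degrees(angle(q0, q1) - angle(p0, p1))
    return scale, rotation
```

Spreading two fingers from 10 units apart to 20 gives a scale of 2.0 with no rotation; turning the finger pair a quarter turn gives a scale of 1.0 and a rotation of 90 degrees.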

Other gestures involving more than two fingers on the screen have also been developed, such as Sticky Tools. [4] These techniques are often developed for 3D applications and are not considered standard.

Drawbacks

A major drawback of current gesture interaction solutions is the lack of support for two necessary user interface design principles: feedback and visibility (or affordance). Feedback is required to indicate whether a gesture has been entered correctly, by showing which gesture was recognized and which command was activated; Sensiva approaches this to some extent by providing voice notification. The other principle is visibility of gestures: giving the user some means of learning the available gestures and the contexts in which they can be used. Both Mouse Gestures for Internet Explorer and ALToolbar Mouse Gestures display colored tracers that follow the user's current motion, providing a visual cue. Pie menus and marking menus have been proposed as solutions to both problems, since they support learning of the available options but can also be used with quick gestures. Recent versions of Opera (11 and above) use an on-screen pie menu to display which mouse gestures are available and how to activate them, providing both feedback and visibility. [5]

One limitation of gesture interaction is the limited context in which gestures can be used: each gesture can map to only one command per application window.

Holding down buttons while moving the mouse can be awkward and requires some practice, since the downward pressure increases friction for the horizontal motion. An optical mouse is less susceptible to this than a ball mouse, because its sensor does not rely on mechanical contact to sense movement; a touchpad adds no friction at all when its buttons are held down with a thumb. However, it has also been argued that the muscular tension of holding down a button can be exploited in user interface design, as it gives constant feedback that the user is in a temporary state, or mode (Buxton, 1995).


References

  1. "A Quick History of Drag and Drop – A GoPhore Article". 365Trucking.com. Archived from the original on 2019-07-02. Retrieved 2019-07-02.
  2. "MX vs. ATV: Reflex PC UK Manual" (PDF). p. 3. Retrieved 13 February 2022.
  3. "Windows 7 Hardware: Touch Finally Arrives". 2009-09-28.
  4. Hancock, Mark; ten Cate, Thomas; Carpendale, Sheelagh (2009). "Sticky tools". Proceedings of the ACM International Conference on Interactive Tabletops and Surfaces - ITS '09. New York, New York, USA: ACM Press. p. 133. doi:10.1145/1731903.1731930. ISBN 978-1-60558-733-2.
  5. "Opera Tutorials - Gestures". Retrieved 3 August 2012.