Point and click

Point and click are the actions of a computer user moving a pointer to a certain location on a screen (pointing) and then pressing a button on a mouse or other pointing device, usually the left mouse button (click). An example of point and click is in hypermedia, where users click on hyperlinks to navigate from document to document.

User (computing): person who uses a computer or network service

A user is a person who utilizes a computer or network service. Users of computer systems and software products generally lack the technical expertise required to fully understand how they work. Power users use advanced features of programs, though they are not necessarily capable of computer programming and system administration.

Computer mouse: pointing device

A computer mouse is a hand-held pointing device that detects two-dimensional motion relative to a surface. This motion is typically translated into the motion of a pointer on a display, which allows a smooth control of the graphical user interface. The first public demonstration of a mouse controlling a computer system was in 1968. Originally wired to a computer, many modern mice are cordless, relying on short-range radio communication with the connected system. Mice originally used a ball rolling on a surface to detect motion, but modern mice often have optical sensors that have no moving parts. In addition to moving a cursor, computer mice have one or more buttons to allow operations such as selection of a menu item on a display. Mice often also feature other elements, such as touch surfaces and "wheels", which enable additional control and dimensional input.

Pointing device: input device

A pointing device is an input interface that allows a user to input spatial data to a computer. CAD systems and graphical user interfaces (GUI) allow the user to control and provide data to the computer using physical gestures by moving a hand-held mouse or similar device across the surface of the physical desktop and activating switches on the mouse. Movements of the pointing device are echoed on the screen by movements of the pointer and other visual changes. Common gestures are point and click and drag and drop.

Point and click can be used with a wide range of input devices, including mice, touch pads, pointing sticks, joysticks, scroll buttons, and trackballs.

Input device: peripheral to provide data and signals to an information processing system

In computing, an input device is a piece of computer hardware equipment used to provide data and control signals to an information processing system such as a computer or information appliance. Examples of input devices include keyboards, mice, scanners, digital cameras and joysticks. Audio input devices may be used for purposes including speech recognition. Many companies use speech recognition to help users operate their devices.

Joystick: input device consisting of a stick that pivots on a base

A joystick is an input device consisting of a stick that pivots on a base and reports its angle or direction to the device it is controlling. A joystick, also known as the control column, is the principal control device in the cockpit of many civilian and military aircraft, either as a center stick or side-stick. It often has supplementary switches to control various aspects of the aircraft's flight.

Trackball: pointing device

A trackball is a pointing device consisting of a ball held by a socket containing sensors to detect a rotation of the ball about two axes—like an upside-down mouse with an exposed protruding ball. The user rolls the ball to position the on-screen pointer, using their thumb, fingers, or commonly the palm of the hand while using the fingertips to press the mouse buttons.

User interfaces, for example graphical user interfaces, are sometimes described as "point-and-click interfaces", often to suggest that they are very easy to use, requiring that the user simply point to indicate their wishes. These interfaces are sometimes referred to condescendingly (e.g., by Unix users) as "click-and-drool" or "point-and-drool" interfaces. [1] [2]

User interface: means by which a user interacts with and controls a machine

The user interface (UI), in the industrial design field of human–computer interaction, is the space where interactions between humans and machines occur. The goal of this interaction is to allow effective operation and control of the machine from the human end, whilst the machine simultaneously feeds back information that aids the operators' decision-making process. Examples of this broad concept of user interfaces include the interactive aspects of computer operating systems, hand tools, heavy machinery operator controls, and process controls. The design considerations applicable when creating user interfaces are related to or involve such disciplines as ergonomics and psychology.

Graphical user interface: user interface allowing interaction through graphical icons and visual indicators

The graphical user interface is a form of user interface that allows users to interact with electronic devices through graphical icons and visual indicators such as secondary notation, instead of text-based user interfaces, typed command labels or text navigation. GUIs were introduced in reaction to the perceived steep learning curve of command-line interfaces (CLIs), which require commands to be typed on a computer keyboard.

Unix: family of computer operating systems that derive from the original AT&T Unix

Unix is a family of multitasking, multiuser computer operating systems that derive from the original AT&T Unix, whose development started in the 1970s at the Bell Labs research center with Ken Thompson, Dennis Ritchie, and others.

The use of this phrase to describe software implies that the interface can be controlled solely through the mouse (or some other means such as a stylus), with little or no input from the keyboard, as with many graphical user interfaces.

Software: non-tangible executable component of a computer

Computer software, or simply software, is a collection of data or computer instructions that tell the computer how to work. This is in contrast to physical hardware, from which the system is built and actually performs the work. In computer science and software engineering, computer software is all information processed by computer systems, programs and data. Computer software includes computer programs, libraries and related non-executable data, such as online documentation or digital media. Computer hardware and software require each other and neither can be realistically used on its own.

Stylus (computing): pen used for computers

In computing, a stylus is a small pen-shaped instrument that is used to input commands to a computer screen, mobile device or graphics tablet. With touchscreen devices, a user places a stylus on the surface of the screen to draw or make selections by tapping the stylus on the screen. In this manner, the stylus can be used instead of a mouse or trackpad as a pointing device, a technique commonly called pen computing.

Computer keyboard: device comprising an arrangement of buttons or keys used to input text in computers

In computing, a computer keyboard is a typewriter-style device which uses an arrangement of buttons or keys to act as mechanical levers or electronic switches. Following the decline of punch cards and paper tape, interaction via teleprinter-style keyboards became the main input method for computers.

Hovering and tooltips

A web browser tooltip displayed for a hyperlink.

In some systems, such as Internet Explorer, moving the pointer over a link (or other GUI control) and waiting for a split second (which can range from 0.004 to 0.7 s) can cause a tooltip to be displayed. [3]

Internet Explorer: web browser developed by Microsoft

Internet Explorer was a series of graphical web browsers developed by Microsoft and included in the Microsoft Windows line of operating systems, starting in 1995. It was first released as part of the add-on package Plus! for Windows 95 that year. Later versions were available as free downloads, or in service packs, and included in the original equipment manufacturer (OEM) service releases of Windows 95 and later versions of Windows. The browser is discontinued, but still maintained.

The tooltip or infotip or a hint is a common graphical user interface element. It is used in conjunction with a cursor, usually a pointer. The user hovers the pointer over an item, without clicking it, and a tooltip may appear—a small "hover box" with information about the item being hovered over. Tooltips do not usually appear on mobile operating systems, because there is no cursor.
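
As an illustration of the hover-and-delay behavior described above, the following sketch builds a minimal tooltip with Python's Tkinter toolkit. It is only a sketch under stated assumptions: the 500 ms delay, the widget names, and the Tooltip class are illustrative and not taken from any particular system's implementation.

    # Minimal hover tooltip sketch (Tkinter). The delay and widget names are
    # illustrative assumptions, not taken from any specific system.
    import tkinter as tk

    class Tooltip:
        def __init__(self, widget, text, delay_ms=500):
            self.widget = widget
            self.text = text
            self.delay_ms = delay_ms
            self.after_id = None
            self.tip_window = None
            widget.bind("<Enter>", self.schedule)   # pointer begins hovering
            widget.bind("<Leave>", self.hide)       # pointer leaves the widget

        def schedule(self, event=None):
            # Wait for the hover delay before showing the tooltip.
            self.after_id = self.widget.after(self.delay_ms, self.show)

        def show(self):
            x = self.widget.winfo_rootx() + 20
            y = self.widget.winfo_rooty() + 20
            self.tip_window = tw = tk.Toplevel(self.widget)
            tw.wm_overrideredirect(True)             # undecorated "hover box"
            tw.wm_geometry(f"+{x}+{y}")
            tk.Label(tw, text=self.text, background="lightyellow",
                     relief="solid", borderwidth=1).pack()

        def hide(self, event=None):
            if self.after_id:
                self.widget.after_cancel(self.after_id)
                self.after_id = None
            if self.tip_window:
                self.tip_window.destroy()
                self.tip_window = None

    root = tk.Tk()
    button = tk.Button(root, text="Hover over me")
    button.pack(padx=40, pady=40)
    Tooltip(button, "This is a tooltip")
    root.mainloop()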

Single click

A single click or click is the act of pressing a computer mouse button once without moving the mouse. Single clicking is usually the primary action of the mouse. By default in many operating systems, a single click selects (or highlights) an object, while a double-click executes or opens it. The single click has the advantage over the double click of taking less time to complete. The phrase "single click" or "one click" has also been adopted commercially as a selling point, used to signal to customers how easy a service is to use.

A double-click is the act of pressing a computer mouse button twice quickly without moving the mouse. Double-clicking allows two different actions to be associated with the same mouse button. It was developed by Bill Atkinson of Apple Computer for their Lisa project. Often, single-clicking selects an object, while a double-click executes the function associated with that object. Following a link in a web browser is accomplished with only a single click, so access to actions other than following the link requires a second mouse button, a "click and hold" delay, or a modifier key. On touchscreens, the double-click is called a "double-tap"; it is used less often than the double-click, but typically functions as a zoom feature.
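
To illustrate how single and double clicks can be bound to different actions on the same button, the sketch below uses Python's Tkinter event names <Button-1> and <Double-Button-1>. The label widget and the "select"/"open" actions are illustrative assumptions; note that a double click also fires the single-click handler first, mirroring the common select-then-open behavior.

    # Sketch: distinct actions for single and double clicks (Tkinter).
    import tkinter as tk

    root = tk.Tk()
    label = tk.Label(root, text="Click or double-click me", padx=40, pady=40)
    label.pack()

    def on_single_click(event):
        # Single click: "select" (highlight) the item.
        label.config(background="lightblue")

    def on_double_click(event):
        # Double click: "open" or execute the item's default action.
        label.config(text="Opened!")

    label.bind("<Button-1>", on_single_click)          # one press of the left button
    label.bind("<Double-Button-1>", on_double_click)   # two quick presses
    root.mainloop()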

On icons

By default on most computer systems, selecting a software function requires a click of the left button, for example clicking on an icon. Clicking the right button instead presents the user with a menu of further actions, such as Open, Explore, or Properties. In entertainment software, point-and-click interfaces are a common input method, usually offering a 'menu' or 'icon bar' interface that functions in the expected manner. In some games, the character explores different areas within the game world; to move to another area, the player moves the cursor to a point on the screen where it turns into an arrow, and clicking then moves the player to that area.

On text

In many text processing programs, such as web browsers or word processors, clicking on text moves the cursor to that location. Clicking, holding the left button, and dragging highlights (selects) the text, giving the user further options for editing or otherwise using it.
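
The following sketch, assuming Python's Tkinter Text widget, shows the behavior just described: a single click repositions the insertion cursor, while click-and-drag creates a selection that a program can read back. The sample text and the report function are illustrative.

    # Sketch: click to place the cursor, click-and-drag to select text (Tkinter).
    import tkinter as tk

    root = tk.Tk()
    text = tk.Text(root, width=46, height=4)
    text.insert("1.0", "Click to move the cursor; click and drag to select.")
    text.pack(padx=20, pady=20)

    def report(event):
        # The insertion cursor follows single clicks; a drag creates a "sel" range.
        print("cursor at", text.index(tk.INSERT))
        try:
            print("selected:", text.get(tk.SEL_FIRST, tk.SEL_LAST))
        except tk.TclError:
            print("no selection")

    text.bind("<ButtonRelease-1>", report)
    root.mainloop()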

Double click

A double click is most commonly performed with a computer mouse by placing the pointer over an icon or object and quickly pressing the button twice without moving the mouse.

Context clicks

Right click

Clicking the secondary (usually right) mouse button while pointing at an item typically opens a context menu that lists actions relevant to that item.
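
A minimal sketch of this right-click behavior, assuming Python's Tkinter, is shown below. The menu entries (Open, Explore, Properties) echo the examples given earlier in this article; <Button-3> is the secondary mouse button on most platforms, though some macOS builds of Tk report it as <Button-2>.

    # Sketch: a right-click context menu (Tkinter).
    import tkinter as tk

    root = tk.Tk()
    target = tk.Label(root, text="Right-click me", padx=40, pady=40)
    target.pack()

    menu = tk.Menu(root, tearoff=0)
    menu.add_command(label="Open", command=lambda: print("Open"))
    menu.add_command(label="Explore", command=lambda: print("Explore"))
    menu.add_separator()
    menu.add_command(label="Properties", command=lambda: print("Properties"))

    def show_context_menu(event):
        # Pop the menu up at the pointer's current screen position.
        try:
            menu.tk_popup(event.x_root, event.y_root)
        finally:
            menu.grab_release()

    target.bind("<Button-3>", show_context_menu)
    root.mainloop()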

Fitts's Law

Fitts's law can be used to quantify the time required to perform a point-and-click action:

    T = a + b log₂(1 + D/W)

where T is the average time taken to complete the movement, a and b are empirical constants that depend on the pointing device and the user, D is the distance from the starting point to the center of the target, and W is the width of the target measured along the axis of motion.
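
As a worked example of the formula above, the sketch below (Python) computes the predicted movement time for an illustrative target; the constants a and b are made-up values, since in practice they are fitted to measured data for a particular device and user.

    # Sketch: estimating point-and-click time with Fitts's law.
    # The constants a and b are illustrative, not measured values.
    import math

    def fitts_time(a, b, distance, width):
        """Average movement time T = a + b * log2(1 + D / W)."""
        return a + b * math.log2(1.0 + distance / width)

    # Example: a 32-pixel-wide icon whose center is 480 pixels away.
    print(fitts_time(a=0.1, b=0.15, distance=480, width=32))  # about 0.7 s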

Related Research Articles

Context menu: user interface element

A context menu is a menu in a graphical user interface (GUI) that appears upon user interaction, such as a right-click mouse operation. A context menu offers a limited set of choices that are available in the current state, or context, of the operating system or application to which the menu belongs. Usually the available choices are actions related to the selected object. From a technical point of view, such a context menu is a graphical control element.

Scrollbar: user interface element

A scrollbar is an interaction technique or widget in which continuous text, pictures, or any other content can be scrolled in a predetermined direction on a computer display, window, or viewport so that all of the content can be viewed, even if only a fraction of the content can be seen on a device's screen at one time. It offers a solution to the problem of navigation to a known or unknown location within a two-dimensional information space. It was also known as a handle in the very first GUIs. Scrollbars are present in a wide range of electronic devices including computers, graphing calculators, mobile phones, and portable media players. The user interacts with the scrollbar elements using some method of direct action, the scrollbar translates that action into scrolling commands, and the user receives feedback through a visual updating of both the scrollbar elements and the scrolled content.

Pointing stick: isometric joystick typically mounted in a keyboard

A pointing stick is a small joystick used as a pointing device typically mounted centrally in a computer keyboard. Like other pointing devices such as mice, touchpads or trackballs, operating system software translates manipulation of the device into movements of the pointer or cursor on the monitor. Unlike other pointing devices, it reacts to force or strain rather than to gross movement, so it is called an "isometric" pointing device. IBM introduced it commercially in 1992 on its laptops, under the name "TrackPoint".

Menu (computing): overview of options within a computer program

In computing and telecommunications, a menu is a list of options or commands presented to the user of a computer or communications system. A menu may either be a system's entire user interface, or only part of a more complex one.

Screen magnifier

A screen magnifier is software that interfaces with a computer's graphical output to present enlarged screen content. By enlarging part of a screen, people with visual impairments can better see words and images. This type of assistive technology is useful for people with some functional vision; people with visual impairments and little or no functional vision usually use a screen reader.

Drag and drop: action in computer graphical user interfaces

In computer graphical user interfaces, drag and drop is a pointing device gesture in which the user selects a virtual object by "grabbing" it and dragging it to a different location or onto another virtual object. In general, it can be used to invoke many kinds of actions, or create various types of associations between two abstract objects.

In computing, the focus indicates the component of the graphical user interface which is selected to receive input. Text entered at the keyboard or pasted from a clipboard is sent to the component which has the focus. Moving the focus away from a specific user interface element is known as a blur event in relation to this element. Typically, the focus is withdrawn from an element by giving another element the focus. This means that focus and blur events typically both occur virtually simultaneously, but in relation to different user interface elements, one that gets the focus and one that gets blurred.

WIMP (computing): style of human–computer interaction

In human–computer interaction, WIMP stands for "windows, icons, menus, pointer", denoting a style of interaction using these elements of the user interface. It was coined by Merzouga Wilberts in 1980. Other expansions are sometimes used, such as substituting "mouse" and "mice" for menus, or "pull-down menu" and "pointing" for pointer.

Widget (GUI): element of a graphical user interface

A control element in a graphical user interface is an element of interaction, such as a button or a scroll bar. Controls are software components that a computer user interacts with through direct manipulation to read or edit information about an application. User interface libraries such as Windows Presentation Foundation, GTK+, and Cocoa contain a collection of controls and the logic to render these.

In computer user interfaces, a cursor is an indicator used to show the current position for user interaction on a computer monitor or other display device that will respond to input from a text input or pointing device. The mouse cursor is also called a pointer, owing to its resemblance in usage to a pointing stick.

Crossing-based interfaces are graphical user interfaces that use crossing gestures instead of, or in complement to, pointing.

A triple-click is the action of clicking a computer mouse button three times quickly without moving the mouse. Along with clicking and double-clicking, triple-clicking allows three different actions to be associated with the same mouse button. Criticism of the double-click mechanism is even more valid for triple-clicks. However, few applications assign critical actions to a triple click.

Mouse button: microswitch on a computer mouse

A mouse button is a microswitch on a computer mouse which can be pressed (“clicked”) to select or interact with an element of a graphical user interface.

A context-sensitive user interface is one which can automatically choose from a multiplicity of options based on the current or previous state(s) of the program operation. Context sensitivity is almost ubiquitous in current graphical user interfaces, usually in the form of context menus. Context sensitivity, when operating correctly, should be practically transparent to the user. This can be experienced in computer operating systems which call a compatible program to run files based upon their filename extension, e.g. opening text files with a word processor, video files with a video player, image files with a photo viewer or running program files themselves, and their shortcuts, when selected.

Pointer (user interface): graphical image on a computer monitor that echoes movements of a pointing device

In computing, a pointer or mouse cursor is a symbol or graphical image on the computer monitor or other display device that echoes movements of the pointing device, commonly a mouse, touchpad, or stylus pen. It signals the point where actions of the user take place. It can be used in text-based or graphical user interfaces to select and move other elements. It is distinct from the cursor, which responds to keyboard input. The cursor may also be repositioned using the pointer.

References

  1. "Jargon File entry: point-and-drool interface".
  2. Josh Marinacci. "Point, Click, and Drool!". weblogs.java.net.
  3. Guy Hart-Davis (2007), Mastering Microsoft Windows Vista home: premium and basic, John Wiley and Sons, p. 180, ISBN   978-0-470-04614-2 , retrieved 2010-08-08