List of graphical user interface elements

Graphical user interface elements are those elements used by graphical user interfaces (GUIs) to offer a consistent visual language to represent information stored in computers. These make it easier for people with few computer skills to work with and use computer software.

This article explains the most common elements of the visual language found in the WIMP ("window, icon, menu, pointer") paradigm, although many are also used in other graphical, post-WIMP interfaces. These elements are usually embodied in an interface using a widget toolkit or desktop environment.

Structural elements

Graphical user interfaces use visual conventions to represent the generic information shown. Some conventions are used to build the structure of the static elements with which the user can interact, and they define the appearance of the interface.

Window

A window is an area on the screen that displays information, with its contents being displayed independently from the rest of the screen. An example of a window is what appears on the screen when the "My Documents" icon is clicked in the Windows operating system. It is easy for a user to manipulate a window: it can be shown and hidden by clicking on an icon or application, and it can be moved to any area by dragging it (that is, by clicking in a certain area of the window – usually the title bar along the top – and keeping the pointing device's button pressed, then moving the pointing device). A window can be placed in front of or behind another window, its size can be adjusted, and scrollbars can be used to navigate the sections within it. Multiple windows can also be open at one time, in which case each window can display a different application or file – this is very useful when working in a multitasking environment. The system memory is the only limitation to the number of windows that can be open at once. There are also many types of specialized windows. [1]
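
As an illustration only, the following minimal sketch creates and manipulates windows with Python's standard tkinter toolkit; the toolkit choice and all names in it are assumptions for this example, not something specified by the article.

    # Minimal sketch: creating top-level windows with Python's tkinter.
    import tkinter as tk

    root = tk.Tk()                      # create a top-level window
    root.title("Example Window")        # text shown in the title bar
    root.geometry("400x300+100+100")    # width x height + x offset + y offset
    root.resizable(True, True)          # allow the user to adjust the size

    # A second, independent window; each window can display different content.
    second = tk.Toplevel(root)
    second.title("Second Window")

    root.mainloop()                     # hand control to the windowing system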

Menus

Menus allow the user to execute commands by selecting from a list of choices. Options are selected with a mouse or other pointing device within a GUI; a keyboard may also be used. Menus are convenient because they show what commands are available within the software, which limits the amount of documentation the user must read in order to understand it. [2]
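
As a rough illustration (assuming Python's standard tkinter toolkit; the command names are placeholders), a menu bar can present a list of choices that execute commands when selected:

    # Minimal sketch: a "File" menu whose entries run commands when selected.
    import tkinter as tk

    root = tk.Tk()

    def open_file():
        print("Open selected")          # placeholder command

    menubar = tk.Menu(root)
    file_menu = tk.Menu(menubar, tearoff=0)
    file_menu.add_command(label="Open", command=open_file)
    file_menu.add_separator()
    file_menu.add_command(label="Quit", command=root.destroy)
    menubar.add_cascade(label="File", menu=file_menu)

    root.config(menu=menubar)           # attach the menu bar to the window
    root.mainloop()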

Icons

An icon is a small picture that represents objects such as a file, program, web page, or command. Icons are a quick way to execute commands, open documents, and run programs. They are also very useful when searching for an object in a browser list, because in many operating systems all documents using the same extension will have the same icon.
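
As a small, hedged sketch (again using Python's tkinter; the built-in "info" bitmap and the callback stand in for a real icon and command), an icon-like control can open a document or run a program when activated:

    # Minimal sketch: a clickable icon-style button.
    import tkinter as tk

    root = tk.Tk()

    def open_document():
        print("Document opened")        # placeholder action

    icon_button = tk.Button(root, bitmap="info", command=open_document)
    icon_button.pack(padx=20, pady=20)

    root.mainloop()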

Controls (or widgets)

Interface elements known as graphical control elements, controls or widgets are software components that a computer user interacts with through direct manipulation to read or edit information about an application. Each widget facilitates a specific user–computer interaction. Structuring a user interface with widget toolkits allows developers to reuse code for similar tasks and provides users with a common language for interaction, maintaining consistency throughout the whole information system.

Common uses for widgets involve the display of collections of related items (such as with various list and canvas controls), initiation of actions and processes within the interface (buttons and menus), navigation within the space of the information system (links, tabs and scrollbars), and representing and manipulating data values (such as labels, check boxes, radio buttons, sliders, and spinners).
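
The sketch below shows several of the widgets named above, built with the themed (ttk) widgets of Python's standard tkinter toolkit; the labels and value ranges are illustrative assumptions.

    # Minimal sketch: a label, check box, radio buttons, slider and spinner.
    import tkinter as tk
    from tkinter import ttk

    root = tk.Tk()

    ttk.Label(root, text="Volume").pack()                         # label
    muted = tk.BooleanVar()
    ttk.Checkbutton(root, text="Mute", variable=muted).pack()     # check box
    quality = tk.StringVar(value="low")
    ttk.Radiobutton(root, text="Low", value="low", variable=quality).pack()
    ttk.Radiobutton(root, text="High", value="high", variable=quality).pack()
    ttk.Scale(root, from_=0, to=100, orient="horizontal").pack()  # slider
    ttk.Spinbox(root, from_=0, to=10).pack()                      # spinner

    root.mainloop()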

Tabs

A tab is typically a small rectangular box which usually contains a text label or graphical icon associated with a view pane. When activated, the view pane, or window, displays widgets associated with that tab; groups of tabs allow the user to switch quickly between different widgets. Tabs are used in all modern web browsers. [6] [7] [8] [9] [10] These browsers allow multiple web pages to be open at once in a single window, and the user can quickly navigate between them by clicking the tabs associated with each page. Tabs are usually placed in groups at the top of a window, but may also be grouped on the side or bottom of a window. Tabs are also present in the settings panes of many applications; Microsoft Windows, for example, uses tabs in most of its control panel dialogues.
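
As an illustrative sketch (using ttk.Notebook from Python's standard tkinter toolkit; the tab labels are assumptions), a group of tabs switches between view panes in a single window:

    # Minimal sketch: two tabs, each associated with its own view pane.
    import tkinter as tk
    from tkinter import ttk

    root = tk.Tk()
    notebook = ttk.Notebook(root)

    page1 = ttk.Frame(notebook)
    ttk.Label(page1, text="Contents of the first tab").pack(padx=10, pady=10)
    page2 = ttk.Frame(notebook)
    ttk.Label(page2, text="Contents of the second tab").pack(padx=10, pady=10)

    notebook.add(page1, text="First")   # the text label shown on the tab
    notebook.add(page2, text="Second")
    notebook.pack(fill="both", expand=True)

    root.mainloop()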

Interaction elements

Some common idioms for interaction have evolved in the visual language used in GUIs. Interaction elements are interface objects that represent the state of an ongoing operation or transformation, either as visual reminders of the user's intent (such as the pointer), or as affordances showing places where the user may interact.

Cursor

A cursor is an indicator used to show the position on a computer monitor or other display device that will respond to input from a text input or pointing device.

Pointer

The pointer echoes movements of the pointing device, commonly a mouse or touchpad. The pointer is where actions initiated through direct manipulation gestures, such as click, touch and drag, take place.
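
A small sketch of this behaviour, assuming Python's tkinter toolkit: the toolkit reports pointer motion and clicks as events carrying the pointer's position.

    # Minimal sketch: following the pointer and reacting to clicks.
    import tkinter as tk

    root = tk.Tk()
    canvas = tk.Canvas(root, width=300, height=200, background="white")
    canvas.pack()

    def on_motion(event):
        root.title(f"Pointer at ({event.x}, {event.y})")    # echo the position

    def on_click(event):
        # Mark the spot where the click (a direct manipulation gesture) landed.
        canvas.create_oval(event.x - 3, event.y - 3, event.x + 3, event.y + 3,
                           fill="black")

    canvas.bind("<Motion>", on_motion)
    canvas.bind("<Button-1>", on_click)
    root.mainloop()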

Insertion point

The caret, text cursor or insertion point represents the point of the user interface where the focus is located. It represents the object that will be used as the default subject of user-initiated commands such as writing text, starting a selection or a copy-paste operation through the keyboard.
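
A brief sketch with Python's tkinter Text widget, whose "insert" mark is the caret: text typed at the keyboard, or inserted programmatically, appears at that mark. The sample text and positions are arbitrary.

    # Minimal sketch: reading and moving the insertion point of a Text widget.
    import tkinter as tk

    root = tk.Tk()
    text = tk.Text(root, width=40, height=5)
    text.pack()

    text.insert("insert", "Hello, world")   # insert at the caret position
    text.mark_set("insert", "1.5")          # move the caret after "Hello"
    text.focus_set()                        # give the widget keyboard focus
    print(text.index("insert"))             # prints the caret position: 1.5

    root.mainloop()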

Selection

A selection is a list of items on which user operations will take place. The user typically adds items to the list manually, although the computer may create a selection automatically.
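
For example (a sketch assuming Python's tkinter; the item names are made up), a list box in "extended" mode lets the user build a selection of several items, which the program can then read back:

    # Minimal sketch: reading the user's selection from a Listbox.
    import tkinter as tk

    root = tk.Tk()
    listbox = tk.Listbox(root, selectmode="extended")
    for item in ("Report.txt", "Photo.png", "Notes.md"):
        listbox.insert("end", item)
    listbox.pack()

    def show_selection():
        indices = listbox.curselection()            # indices of selected items
        print([listbox.get(i) for i in indices])    # the selected values

    tk.Button(root, text="Show selection", command=show_selection).pack()
    root.mainloop()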

Adjustment handle

A handle is an indicator of a starting point for a drag and drop operation. Usually the pointer shape changes when placed on the handle, showing an icon that represents the supported drag operation.
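
A hedged sketch of the idea, using a tkinter canvas: a small handle changes the pointer shape on hover and resizes a rectangle when dragged (the cursor name and geometry are assumptions for this example).

    # Minimal sketch: an adjustment handle that resizes a rectangle.
    import tkinter as tk

    root = tk.Tk()
    canvas = tk.Canvas(root, width=300, height=200, background="white")
    canvas.pack()

    rect = canvas.create_rectangle(50, 50, 150, 120, outline="black")
    handle = canvas.create_rectangle(145, 115, 155, 125, fill="gray")

    # Show a resize cursor while the pointer is over the handle.
    canvas.tag_bind(handle, "<Enter>",
                    lambda e: canvas.config(cursor="bottom_right_corner"))
    canvas.tag_bind(handle, "<Leave>", lambda e: canvas.config(cursor=""))

    def drag_handle(event):
        x0, y0, _, _ = canvas.coords(rect)
        canvas.coords(rect, x0, y0, event.x, event.y)   # resize the rectangle
        canvas.coords(handle, event.x - 5, event.y - 5,
                      event.x + 5, event.y + 5)         # keep handle at corner

    canvas.tag_bind(handle, "<B1-Motion>", drag_handle)
    root.mainloop()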

Related Research Articles

Context menu – User interface element

A context menu is a menu in a graphical user interface (GUI) that appears upon user interaction, such as a right-click mouse operation. A context menu offers a limited set of choices that are available in the current state, or context, of the operating system or application to which the menu belongs. Usually the available choices are actions related to the selected object. From a technical point of view, such a context menu is a graphical control element.
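
A minimal sketch of this behaviour with Python's tkinter (the menu entries and the right-button binding are illustrative; some platforms use a different button for context menus):

    # Minimal sketch: a popup menu shown at the pointer on right-click.
    import tkinter as tk

    root = tk.Tk()
    label = tk.Label(root, text="Right-click me", padx=40, pady=20)
    label.pack()

    menu = tk.Menu(root, tearoff=0)
    menu.add_command(label="Copy", command=lambda: print("Copy"))
    menu.add_command(label="Paste", command=lambda: print("Paste"))

    def show_context_menu(event):
        menu.tk_popup(event.x_root, event.y_root)   # post at pointer position

    label.bind("<Button-3>", show_context_menu)
    root.mainloop()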

Graphical user interface – User interface allowing interaction through graphical icons and visual indicators

The graphical user interface is a form of user interface that allows users to interact with electronic devices through graphical icons and audio indicators such as primary notation, instead of text-based user interfaces, typed command labels or text navigation. GUIs were introduced in reaction to the perceived steep learning curve of command-line interfaces (CLIs), which require commands to be typed on a computer keyboard.

History of the graphical user interface

The history of the graphical user interface, understood as the use of graphic icons and a pointing device to control a computer, covers a five-decade span of incremental refinements, built on some constant core principles. Several vendors have created their own windowing systems based on independent code, but with basic elements in common that define the WIMP "window, icon, menu and pointing device" paradigm.

Pointing device

A pointing device is an input interface that allows a user to input spatial data to a computer. CAD systems and graphical user interfaces (GUI) allow the user to control and provide data to the computer using physical gestures by moving a hand-held mouse or similar device across the surface of the physical desktop and activating switches on the mouse. Movements of the pointing device are echoed on the screen by movements of the pointer and other visual changes. Common gestures are point and click and drag and drop.

Pie menu – Software menu where elements are arranged in a circle

In user interface design, a pie menu is a circular context menu where selection depends on direction. It is a graphical control element. A pie menu is made of several "pie slices" around an inactive center; it works best with stylus input, and also works well with a mouse. Pie slices are drawn with a hole in the middle for an easy way to exit the menu.

Window (computing)

In computing, a window is a graphical control element. It consists of a visual area containing some of the graphical user interface of the program it belongs to and is framed by a window decoration. It usually has a rectangular shape that can overlap with the area of other windows. It displays the output of and may allow input to one or more processes.

Icon (computing)

In computing, an icon is a pictogram or ideogram displayed on a computer screen in order to help the user navigate a computer system. The icon itself is a quickly comprehensible symbol of a software tool, function, or a data file, accessible on the system and is more like a traffic sign than a detailed illustration of the actual entity it represents. It can serve as an electronic hyperlink or file shortcut to access the program or data. The user can activate an icon using a mouse, pointer, finger, or, more recently, voice commands. Their placement on the screen, also in relation to other icons, may provide further information to the user about their usage. In activating an icon, the user can move directly into and out of the identified function without knowing anything further about the location or requirements of the file or code.

Menu (computing) – List of options or commands within a computer program

In computing and telecommunications, a menu is a list of options or commands presented to the user of a computer or communications system. A menu may either be a system's entire user interface, or only part of a more complex one.

Drag and drop

In computer graphical user interfaces, drag and drop is a pointing device gesture in which the user selects a virtual object by "grabbing" it and dragging it to a different location or onto another virtual object. In general, it can be used to invoke many kinds of actions, or create various types of associations between two abstract objects.
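
A sketch of the drag part of the gesture, assuming Python's tkinter canvas: pressing the mouse button "grabs" an item, and moving the pointer with the button held drags it to a new location.

    # Minimal sketch: dragging a canvas item with the pointer.
    import tkinter as tk

    root = tk.Tk()
    canvas = tk.Canvas(root, width=300, height=200, background="white")
    canvas.pack()
    item = canvas.create_rectangle(20, 20, 70, 70, fill="steelblue")
    drag_start = {"pos": None}

    def start_drag(event):
        drag_start["pos"] = (event.x, event.y)          # where the grab began

    def do_drag(event):
        x0, y0 = drag_start["pos"]
        canvas.move(item, event.x - x0, event.y - y0)   # follow the pointer
        drag_start["pos"] = (event.x, event.y)

    canvas.tag_bind(item, "<ButtonPress-1>", start_drag)
    canvas.tag_bind(item, "<B1-Motion>", do_drag)
    root.mainloop()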

Point and click

Point and click are the actions of a computer user moving a pointer to a certain location on a screen (pointing) and then pressing a button on a mouse, usually the left button (click), or other pointing device. An example of point and click is in hypermedia, where users click on hyperlinks to navigate from document to document.

Taskbar

A taskbar is an element of a graphical user interface which has various purposes. It typically shows which programs are currently running.

Text-based user interface – Type of interface based on outputting to or controlling a text display

In computing, a text-based user interface (TUI) is a retronym describing a type of user interface (UI) that was common as an early form of human–computer interaction, before the advent of graphical user interfaces (GUIs). Like GUIs, TUIs may use the entire screen area and accept mouse and other inputs. They may also use color and often structure the display using special graphical characters such as ┌ and ╣, referred to in Unicode as the "box drawing" set. The modern context of use is usually a terminal emulator.
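
As a tiny illustration of that kind of structuring (plain Python, with an arbitrary title), box-drawing characters can frame text on a character-cell display:

    # Minimal sketch: drawing a framed box with Unicode box-drawing characters.
    def draw_box(title, width=20):
        print("┌" + "─" * width + "┐")
        print("│" + title.center(width) + "│")
        print("└" + "─" * width + "┘")

    draw_box("Main menu")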

WIMP (computing)

In human–computer interaction, WIMP stands for "windows, icons, menus, pointer", denoting a style of interaction using these elements of the user interface. Other expansions are sometimes used, such as substituting "mouse" and "mice" for menus, or "pull-down menu" and "pointing" for pointer.

Graphical widget – Element of interaction in a graphical user interface

A graphical widget in a graphical user interface is an element of interaction, such as a button or a scroll bar. Controls are software components that a computer user interacts with through direct manipulation to read or edit information about an application. User interface libraries such as Windows Presentation Foundation, GTK, and Cocoa, contain a collection of controls and the logic to render these.

Window manager

A window manager is system software that controls the placement and appearance of windows within a windowing system in a graphical user interface. Most window managers are designed to help provide a desktop environment. They work in conjunction with the underlying graphical system, which provides required functionality such as support for graphics hardware, pointing devices, and a keyboard, and are often written and created using a widget toolkit.

Windows shell

The Windows shell is the graphical user interface for the Microsoft Windows operating system. Its readily identifiable elements consist of the desktop, the taskbar, the Start menu, the task switcher and the AutoPlay feature. On some versions of Windows, it also includes Flip 3D and the charms. In Windows 10, the Windows Shell Experience Host interface drives visuals like the Start Menu, Action Center, Taskbar, and Task View/Timeline. However, the Windows shell also implements a shell namespace that enables computer programs running on Windows to access the computer's resources via the hierarchy of shell objects. "Desktop" is the top object of the hierarchy; below it there are a number of files and folders stored on the disk, as well as a number of special folders whose contents are either virtual or dynamically created. Recycle Bin, Libraries, Control Panel, This PC and Network are examples of such shell objects.

Workbench (AmigaOS) – Graphical user interface for the Amiga computer

Workbench is the graphical file manager of AmigaOS developed by Commodore International for their Amiga line of computers. Workbench provides the user with a graphical interface to work with file systems and launch applications. It uses a workbench metaphor for representing file system organisation.

Mouse button

A mouse button is an electric switch on a computer mouse which can be pressed (“clicked”) to select or interact with an element of a graphical user interface.

Software widget

A software widget is a relatively simple and easy-to-use software application or component made for one or more different software platforms.

Pointer (user interface)

In computing, a pointer or mouse cursor is a symbol or graphical image on the computer monitor or other display device that echoes movements of the pointing device, commonly a mouse, touchpad, or stylus pen. It signals the point where actions of the user take place. It can be used in text-based or graphical user interfaces to select and move other elements. It is distinct from the cursor, which responds to keyboard input. The cursor may also be repositioned using the pointer.

References

  1. "Window Definition". Linfo.org. 9 August 2004. Retrieved 19 September 2006.
  2. "menu". Foldoc.org. Retrieved 19 September 2006.
  3. 1 2 "How to Use Menus". Java.sun.com. Retrieved 19 September 2006.
  4. "context-sensitive menu". Foldoc.org. Retrieved 19 September 2006.
  5. "pull-down menu". Foldoc.org. 19 September 2006.
  6. "Use tabs in Chrome". Google Chrome Help. Google. Archived from the original on 14 December 2020. Retrieved 14 December 2020.
  7. mozilla.org contributors. "Use tabs to organize lots of websites in a single window". Mozilla Support. Mozilla Corporation. Archived from the original on 14 December 2020. Retrieved 14 December 2020.
  8. "Use tabs for webpages in Safari on Mac". Safari User Guide. Apple Inc. Archived from the original on 14 December 2020. Retrieved 14 December 2020.
  9. Hoffman, Chris (28 October 2020). "How to Enable and Use Vertical Tabs in Microsoft Edge". How-To Geek. LifeSavvy Media. Archived from the original on 14 December 2020. Retrieved 14 December 2020.
  10. "7 Opera features that make it easy to manage many tabs". Opera Blogs. Opera Software. 18 November 2015. Archived from the original on 14 December 2020. Retrieved 14 December 2020.