Touch user interface


A touch user interface (TUI) is a computer-pointing technology based upon the sense of touch (haptics). Whereas a graphical user interface (GUI) relies upon the sense of sight, a TUI not only enables the sense of touch to trigger computer-based functions, it also gives the user, particularly users with visual impairments, an added level of interaction based upon tactile or Braille input.


Technology

Generally, the TUI requires pressure or presence with a switch located outside of the printed paper. Not to be confused with electronic paper endeavors, the TUI requires the printed pages to act as a template or overlay for a switch array. Interacting with a switch through touch or presence triggers an action. The switching sensor cross-references a database, which retains the correct pathway to retrieve the associated digital content or launch the appropriate application.

TUI icons may be used to indicate to the reader of the printed page what action will occur upon interacting with a particular position on the printed page.

Because turning pages presents new pages whose touch points may coincide with those of previous or subsequent pages, a z-axis may be used to indicate the plane of activity. The z-axis can be offset around the boundary of the page; once a unique z-axis value has been registered, the x,y-axes can carry touch points identical to those on other pages. For example, the coordinate 1,1,1 indicates a z-axis of 1 (page 1) with an x,y position of 1,1. Turning the page, pressing the new z-axis for page 2, and then touching the same x,y content position as on page 1 yields the coordinate structure 2,1,1.
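
The lookup this coordinate scheme implies can be illustrated with a short sketch. The table entries, file paths, and function below are hypothetical, chosen only to show how a (z, x, y) triple might map to content:

```python
# Hypothetical mapping from (z, x, y) coordinates to content, where z is
# the page number and (x, y) is the touch position on that page.
content_map = {
    (1, 1, 1): "audio/page1_intro.mp3",  # page 1, position (1, 1)
    (2, 1, 1): "audio/page2_intro.mp3",  # same (x, y) position, but page 2
}

def resolve(z, x, y):
    """Return the content path linked to a touch at page z, position (x, y)."""
    return content_map.get((z, x, y))

print(resolve(1, 1, 1))  # audio/page1_intro.mp3
print(resolve(2, 1, 1))  # audio/page2_intro.mp3 -- the z-axis disambiguates
```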

An integrated circuit (IC) is located either within the printed material or within an enclosure that cradles it. This IC receives a signal when a switch is triggered. Firmware on the IC communicates over Universal Serial Bus (USB), either through a cable or a wireless protocol adapter, with a reference database that can reside on media within a computer or appliance. Upon receiving the coordinate structure from the firmware, the database correlates the position with a predetermined link or pathway to digital content, or with an execution command for an application. Once the link is correlated with the pathway, a signal is sent to retrieve and render the content at the end of that path.
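
A minimal host-side sketch of this flow follows, assuming a hypothetical firmware that reports each touch as one ASCII line of the form "z,x,y" over a USB serial link. The port name, packet format, and content paths are illustrative assumptions, not a documented protocol:

```python
# Host-side sketch: receive coordinate reports from the IC's firmware and
# render the linked content. All names and formats here are assumptions.
import subprocess

import serial  # pyserial

# Lookup table as in the previous sketch; entries are hypothetical.
content_map = {(1, 1, 1): "audio/page1_intro.mp3",
               (2, 1, 1): "audio/page2_intro.mp3"}

with serial.Serial("/dev/ttyUSB0", 9600, timeout=1) as link:
    while True:
        line = link.readline().decode().strip()
        if not line:
            continue  # timeout with no touch reported
        coord = tuple(int(v) for v in line.split(","))  # e.g. (2, 1, 1)
        path = content_map.get(coord)
        if path:
            # Render the "terminal of the path": open it with the OS default app.
            subprocess.run(["xdg-open", path])
```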

Educational mandate

In the United States, legislation that took effect in December 2006 requires educational publishers in the K-12 education industry to comply with the National Instructional Materials Accessibility Standard (NIMAS). In essence, educational publishers must provide an inclusive experience to students who are blind; if they are unable to provide this experience, they are required to provide the digital content source files to a clearinghouse that will convert the materials into an accessible experience for the student. The TUI promises to enable publishers to maintain control of their content while providing an inclusive tactile or Braille experience to students who are visually impaired. Further, using a Braille approach may serve to help enhance Braille literacy while meeting the mandates of NIMAS.


Related Research Articles

Pointing device gesture

In computing, a pointing device gesture or mouse gesture is a way of combining pointing device or finger movements and clicks that the software recognizes as a specific computer event and responds to accordingly. They can be useful for people who have difficulties typing on a keyboard. For example, in a web browser, a user can navigate to the previously viewed page by pressing the right pointing device button, moving the pointing device briefly to the left, then releasing the button.
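
The example gesture above can be expressed as a small sketch. The event-stream format and distance threshold are assumptions for illustration, not any particular browser's implementation:

```python
# Sketch of recognizing the "hold right button, move left, release -> back"
# gesture described above. The event format is a hypothetical illustration.

def detect_back_gesture(events, threshold=50):
    """events: (kind, x) tuples, kind in {"right_down", "move", "right_up"}.
    True if the pointer ended at least `threshold` pixels left of where
    the right button went down."""
    start_x = None
    for kind, x in events:
        if kind == "right_down":
            start_x = x
        elif kind == "right_up" and start_x is not None:
            return start_x - x >= threshold  # net leftward motion
    return False

stream = [("right_down", 400), ("move", 370), ("move", 330), ("right_up", 320)]
print(detect_back_gesture(stream))  # True -> navigate to the previous page
```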

Pointing device

A pointing device is a human interface device that allows a user to input spatial data to a computer. CAD systems and graphical user interfaces (GUI) allow the user to control and provide data to the computer using physical gestures by moving a hand-held mouse or similar device across the surface of the physical desktop and activating switches on the mouse. Movements of the pointing device are echoed on the screen by movements of the pointer and other visual changes. Common gestures are point and click and drag and drop.

User interface

In the industrial design field of human–computer interaction, a user interface (UI) is the space where interactions between humans and machines occur. The goal of this interaction is to allow effective operation and control of the machine from the human end, while the machine simultaneously feeds back information that aids the operators' decision-making process. Examples of this broad concept of user interfaces include the interactive aspects of computer operating systems, hand tools, heavy machinery operator controls and process controls. The design considerations applicable when creating user interfaces are related to, or involve such disciplines as, ergonomics and psychology.

Touchpad

A touchpad or trackpad is a type of pointing device. Its largest component is a tactile sensor: an electronic device with a flat surface that detects the motion and position of a user's fingers and translates them to a position on a screen, to control a pointer in a graphical user interface. Touchpads are common on laptop computers, in contrast to desktop computers, where mice are more prevalent. Trackpads are sometimes used on desktops where desk space is scarce. Because trackpads can be made small, they can be found on personal digital assistants (PDAs) and some portable media players. Wireless touchpads are also available as detached accessories.

Haptic technology

Haptic technology is technology that can create an experience of touch by applying forces, vibrations, or motions to the user. These technologies can be used to create virtual objects in a computer simulation, to control virtual objects, and to enhance remote control of machines and devices (telerobotics). Haptic devices may incorporate tactile sensors that measure forces exerted by the user on the interface. The word haptic, from the Greek: ἁπτικός (haptikos), means "tactile, pertaining to the sense of touch". Simple haptic devices are common in the form of game controllers, joysticks, and steering wheels.

Touchscreen

A touchscreen or touch screen is the assembly of both an input and output ('display') device. The touch panel is normally layered on top of the electronic visual display of an electronic device.

Scrolling

In computer displays, filmmaking, television production, and other kinetic displays, scrolling is sliding text, images, or video across a monitor or display, vertically or horizontally. Scrolling, as such, does not change the layout of the text or pictures but moves the user's view across what is apparently a larger image that is not wholly seen. A common television and movie special effect is to scroll credits while leaving the background stationary. Scrolling may take place completely without user intervention or, on an interactive device, be triggered by a touchscreen gesture or a keypress and continue without further intervention until a further user action, or be entirely controlled by input devices.
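
Conceptually, scrolling amounts to moving a viewport offset across fixed content. A minimal sketch, with illustrative values:

```python
# Scrolling moves a viewport across a larger, unchanging document rather
# than re-laying-out its content. Sizes and content are illustrative.
document = ["line %d" % i for i in range(100)]  # the full, fixed content
viewport_height = 5
offset = 0  # index of the top line currently visible

def scroll(delta):
    global offset
    # Clamp so the view never runs past either end of the document.
    offset = max(0, min(offset + delta, len(document) - viewport_height))

scroll(+3)  # e.g. triggered by a keypress or touch drag
print(document[offset:offset + viewport_height])  # same text, new view
```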

An output device is any piece of computer hardware that converts information/data into a human-perceptible form or, historically, into a physical machine-readable form for use with other non-computerized equipment. It can be text, graphics, tactile, audio, or video. Examples include monitors, printers, speakers, headphones, projectors, GPS devices, optical mark readers, and braille readers.

Gesture recognition

Gesture recognition is an area of research and development in computer science and language technology concerned with the recognition and interpretation of human gestures. A subdiscipline of computer vision, it employs mathematical algorithms to interpret gestures. Gestures can originate from any bodily motion or state but commonly originate from the face or hand. One area of the field is emotion recognition derived from facial expressions and hand gestures. Users can make simple gestures to control or interact with devices without physically touching them. Many approaches have been made using cameras and computer vision algorithms to interpret sign language; however, the identification and recognition of posture, gait, proxemics, and human behaviors is also the subject of gesture recognition techniques. Gesture recognition is a path for computers to begin to better understand and interpret human body language, previously not possible through text or unenhanced graphical user interfaces (GUIs).

Tangible user interface

A tangible user interface (TUI) is a user interface in which a person interacts with digital information through the physical environment. The initial name was Graspable User Interface, which is no longer used. The purpose of TUI development is to empower collaboration, learning, and design by giving physical forms to digital information, thus taking advantage of the human ability to grasp and manipulate physical objects and materials.

Multi-touch

In computing, multi-touch is technology that enables a surface to recognize the presence of more than one point of contact with the surface at the same time. Multi-touch originated at CERN, MIT, the University of Toronto, Carnegie Mellon University, and Bell Labs in the 1970s; CERN was using multi-touch screens as early as 1976 for the controls of the Super Proton Synchrotron. A form of gesture recognition, capacitive multi-touch displays were popularized by Apple's iPhone in 2007. Plural-point awareness may be used to implement additional functionality, such as pinch to zoom, or to activate certain subroutines attached to predefined gestures.
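
Pinch to zoom, for instance, can be computed from the change in distance between two simultaneous contact points. A minimal sketch, with illustrative coordinates:

```python
# The zoom factor is the ratio of the current finger spacing to the
# spacing at first contact. Coordinates below are illustrative.
import math

def spacing(p1, p2):
    return math.dist(p1, p2)  # Euclidean distance between the two touches

initial = spacing((100, 200), (300, 200))  # finger positions at first contact
current = spacing((50, 200), (350, 200))   # positions after spreading apart

zoom = current / initial
print("zoom factor: %.2f" % zoom)  # 1.50 -> zoom in; below 1 would zoom out
```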

Pen computing

Pen computing refers to any computer user interface that uses a pen or stylus and tablet, rather than input devices such as a keyboard or a mouse.

In electrical engineering, capacitive sensing is a technology, based on capacitive coupling, that can detect and measure anything that is conductive or has a dielectric constant different from air. Many types of sensors use capacitive sensing, including sensors to detect and measure proximity, pressure, position and displacement, force, humidity, fluid level, and acceleration. Human interface devices based on capacitive sensing, such as touchpads, can replace the computer mouse. Digital audio players, mobile phones, and tablet computers will sometimes use capacitive sensing touchscreens as input devices. Capacitive sensors can also replace mechanical buttons.
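
A common firmware-level pattern for turning raw capacitance readings into touch events is baseline tracking with a threshold. The constants and readings in this sketch are illustrative assumptions, not values from any particular sensor:

```python
# Track a slowly adapting baseline and report a touch when the raw reading
# rises well above it. All values here are hypothetical illustrations.

BASELINE_ALPHA = 0.01  # how quickly the baseline drifts toward the signal
TOUCH_MARGIN = 40      # counts above baseline that register as a touch

def detect_touches(raw_readings, baseline=500.0):
    for raw in raw_readings:
        touched = raw - baseline > TOUCH_MARGIN
        if not touched:
            # Adapt the baseline only while idle, so a held touch
            # is not gradually absorbed into it.
            baseline += BASELINE_ALPHA * (raw - baseline)
        yield touched

samples = [502, 499, 501, 580, 585, 583, 503, 500]  # finger lands mid-stream
print(list(detect_touches(samples)))
# [False, False, False, True, True, True, False, False]
```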

Hands-on computing is a branch of human-computer interaction research which focuses on computer interfaces that respond to human touch or expression, allowing the machine and the user to interact physically. Hands-on computing can make complicated computer tasks more natural to users by attempting to respond to motions and interactions that are natural to human behavior. Thus hands-on computing is a component of user-centered design, focusing on how users physically respond to virtual environments.

In computing, 3D interaction is a form of human–machine interaction in which users are able to move and perform interaction in 3D space. Both the human and the machine process information in which the physical position of elements in 3D space is relevant.

In computing, a natural user interface (NUI) or natural interface is a user interface that is effectively invisible, and remains invisible as the user continuously learns increasingly complex interactions. The word "natural" is used because most computer interfaces use artificial control devices whose operation has to be learned. Examples include voice assistants such as Alexa and Siri, touch and multi-touch interactions on today's mobile phones and tablets, and touch interfaces invisibly integrated into textiles and furniture.

DiamondTouch

The DiamondTouch table is a multi-touch, interactive PC interface product from Circle Twelve Inc. It is a human interface device that has the capability of allowing multiple people to interact simultaneously while identifying which person is touching where. The technology was originally developed at Mitsubishi Electric Research Laboratories (MERL) in 2001 and later licensed to Circle Twelve Inc in 2008. The DiamondTouch table is used to facilitate face-to-face collaboration, brainstorming, and decision-making, and users include construction management company Parsons Brinckerhoff, the Methodist Hospital, and the US National Geospatial-Intelligence Agency (NGA).

Hardware interface design

Hardware interface design (HID) is a cross-disciplinary design field that shapes the physical connection between people and technology in order to create new hardware interfaces that transform purely digital processes into analog methods of interaction. It employs a combination of filmmaking tools, software prototyping, and electronics breadboarding.

Bird (technology)

Bird is an interactive input device designed by Israel-based startup, MUV Interactive, which develops technology for wearable interfaces. Bird connects to computers to make any surface an interactive 3D environment. The device features remote touch, touchpad swipe control, gesture control, touchscreen capabilities, voice command recognition, a laser pointer, and other advanced options.

Tactile technology is the integration of multi-sensory triggers within physical objects, allowing "real world" interactions with technology. It is similar to haptic technology, as both focus on touch interactions with technology, but whereas haptic is simulated touch, tactile is physical touch. Rather than using a digital interface to interact with the physical world, as augmented reality does, tactile technology involves a physical interaction that triggers a digital response.