A tangible user interface (TUI) is a user interface in which a person interacts with digital information through the physical environment. The initial name was Graspable User Interface, which is no longer used. The purpose of TUI development is to empower collaboration, learning, and design by giving physical forms to digital information, thus taking advantage of the human ability to grasp and manipulate physical objects and materials. [1]
The concept was first conceived by Radia Perlman as a new programming language that would teach programming to much younger children, similar to Logo, but using special "keyboards" and input devices. Another pioneer in tangible user interfaces is Hiroshi Ishii, a professor at MIT who heads the Tangible Media Group at the MIT Media Lab. His particular vision for tangible UIs, called Tangible Bits, is to give physical form to digital information, making bits directly manipulable and perceptible. Tangible Bits pursues a seamless coupling between physical objects and virtual data.
There are several frameworks describing the key characteristics of tangible user interfaces. Brygg Ullmer and Hiroshi Ishii describe six characteristics concerning representation and control: [2]
Eva Hornecker and Jacob Buur describe a structured framework with four themes: [3]
According to Mi Jeong Kim and Mary Lou Maher, the five basic defining properties of tangible user interfaces are as follows: [4]
A tangible user interface must be differentiated from a graphical user interface (GUI). A GUI exists only in the digital world, whereas a TUI connects the digital with the physical world. For example, a screen displays the digital information, whereas a mouse allows us to directly interact with this digital information. [5] A tangible user interface represents the input directly in the physical world, and makes the digital information directly graspable. [6]
A tangible user interface is usually built for one specific target group, because of the narrow range of possible application areas. The interface must therefore be designed together with the target group to ensure a good user experience. [7]
In contrast to a TUI, a GUI supports a wide range of uses within one interface and therefore targets a large group of possible users. [7]
One advantage of a TUI is the user experience, because a physical interaction occurs between the user and the interface itself (e.g., SandScape: building one's own landscape with sand). Another advantage is usability, because the user intuitively knows how to use the interface from knowing the function of the physical object, and so does not need to learn its functionality. This is why tangible user interfaces are often used to make technology more accessible to elderly people. [6]
Interface type/attributes | Tangible user interface | Graphical user interface |
---|---|---|
Range of possible application areas | Built for one specific application area | Built for many kinds of application areas |
How the system is driven | Physical objects, such as a mouse or a keyboard | Graphical bits, such as pixels on a screen |
Coupling between cognitive bits and the physical output | Unmediated connection | Indirect connection |
How user experience is driven | The user already knows the function of the interface from knowing how the physical objects work | The user has to explore the functionality of the interface |
User behavior when approaching the system | Intuition | Recognition |
A simple example of a tangible UI is the computer mouse: dragging the mouse over a flat surface moves a pointer on the screen accordingly. There is a clear mapping between the movements of the mouse and the behavior the system displays. Other examples include:
Several approaches have been made to establish generic middleware for TUIs. They aim for independence from specific application domains as well as flexibility in terms of the deployed sensor technology. For example, Siftables provides an application platform in which small gesture-sensitive displays act together to form a human-computer interface.
To support collaboration, TUIs must allow spatial distribution, asynchronous activities, and dynamic modification of the TUI infrastructure, to name the most prominent requirements. One approach meets these requirements with a framework based on the LINDA tuple space concept: the implemented TUIpist framework deploys arbitrary sensor technology for any type of application, and actuators, in distributed environments. [11]
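The LINDA model mentioned above decouples distributed components in space and time: producers write tuples into a shared space, and consumers retrieve them by pattern matching. The following is a minimal single-process sketch of that idea, not the actual TUIpist implementation; all names here are illustrative.

```python
# Minimal, illustrative tuple space in the spirit of LINDA:
# producers 'out' tuples, consumers 'take' (LINDA's 'in') or 'rd'
# (read) them by pattern matching, so sensor components and
# application components never reference each other directly.

class TupleSpace:
    def __init__(self):
        self._tuples = []

    def out(self, tup):
        """Publish a tuple into the space."""
        self._tuples.append(tup)

    def _match(self, tup, pattern):
        # None in the pattern acts as a wildcard field
        return len(tup) == len(pattern) and all(
            p is None or p == t for t, p in zip(tup, pattern))

    def rd(self, pattern):
        """Read (without removing) the first matching tuple."""
        for t in self._tuples:
            if self._match(t, pattern):
                return t
        return None

    def take(self, pattern):
        """LINDA's 'in': read and remove ('in' is a Python keyword)."""
        t = self.rd(pattern)
        if t is not None:
            self._tuples.remove(t)
        return t

space = TupleSpace()
space.out(("sensor", "block-7", 0.42, 0.13))         # a tracker publishes
evt = space.take(("sensor", "block-7", None, None))  # an application consumes
```

A real distributed implementation would add blocking retrieval and network transport, but the pattern-matched, anonymous communication shown here is what makes the model suit dynamically changing TUI infrastructures.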
Interest in tangible user interfaces (TUIs) has grown steadily since the 1990s, and more tangible systems appear every year. A 2017 white paper outlines the evolution of TUIs for touch table experiences and raises new possibilities for experimentation and development. [12]
In 1999, Gary Zalewski patented a system of moveable children's blocks containing sensors and displays for teaching spelling and sentence composition. [13]
Tangible Engine is a proprietary authoring application used to build object-recognition interfaces for projected-capacitive touch tables. The Tangible Engine Media Creator allows users with little or no coding experience to quickly create TUI-based experiences.
The MIT Tangible Media Group, headed by Hiroshi Ishii, continuously develops and experiments with TUIs, including many tabletop applications. [14]
The Urp [15] system and the more advanced Augmented Urban Planning Workbench [16] allow digital simulations of air flow, shadows, reflections, and other data based on the positions and orientations of physical building models placed on the table surface.
Newer developments go one step further and incorporate the third dimension by allowing the user to form landscapes with clay (Illuminating Clay [17] ) or sand (SandScape [18] ). Different simulations then allow the analysis of shadows, height maps, slopes, and other characteristics of the interactively formable landmasses.
InfrActables is a back-projection collaborative table that allows interaction via TUIs that incorporate state recognition. Adding different buttons to the TUIs enables additional functions associated with them. Newer versions of the technology can even be integrated into LC displays [19] by using infrared sensors behind the LC matrix.
The Tangible Disaster [20] allows the user to analyze disaster measures and simulate different kinds of disasters (fire, flood, tsunami, etc.) and evacuation scenarios during collaborative planning sessions. Physical objects allow positioning disasters by placing them on the interactive map, and additionally tuning parameters (e.g., scale) using dials attached to them.
The commercial potential of TUIs has been identified more recently. The repeatedly awarded Reactable, [21] an interactive tangible tabletop instrument, is now distributed commercially by Reactable Systems, a spinoff company of Pompeu Fabra University, where it was developed. With the Reactable, users set up their own instrument interactively by physically placing different objects (representing oscillators, filters, modulators, etc.) and parametrising them by rotating them and using touch input.
Microsoft has distributed its Windows-based platform Microsoft Surface [22] (now Microsoft PixelSense) since 2009. Besides multi-touch tracking of fingers, the platform supports the recognition of physical objects by their footprints. Several applications, mainly for use in commercial spaces, have been presented. Examples range from designing an individual graphical layout for a snowboard or skateboard, to studying the details of a wine in a restaurant by placing the bottle on the table and navigating through menus via touch input. Interactions such as collaboratively browsing photographs from a camcorder or cell phone, which connects seamlessly once placed on the table, are also supported.
Another notable interactive installation is instant city, [23] which combines gaming, music, architecture, and collaborative aspects. It allows users to build three-dimensional structures and set up a city with rectangular building blocks, which simultaneously results in the interactive assembly of musical fragments from different composers.
The development of the Reactable and the subsequent release of its tracking technology reacTIVision [24] under the GNU GPL, as well as the open specification of the TUIO protocol, have triggered an enormous number of developments based on this technology.
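In the TUIO protocol, a tracker such as reacTIVision reports tagged objects via OSC messages (for example, "set" updates and "alive" lists in the 2Dobj profile). The sketch below fakes the OSC transport with plain tuples so that the mapping from fiducial markers to application state is visible; field names follow the protocol, but the handler structure is illustrative, not a real TUIO client library.

```python
# Simplified sketch of consuming TUIO-style object updates.
# 'set' messages update a tracked object's state; objects missing
# from an 'alive' list have been lifted off the table.
from dataclasses import dataclass

@dataclass
class TuioObject:
    session_id: int   # unique per tracked instance
    fiducial_id: int  # which printed marker it is
    x: float          # normalised table coordinates in [0, 1]
    y: float
    angle: float      # rotation in radians

objects = {}  # session id -> TuioObject

def on_set(msg):
    """Handle a '/tuio/2Dobj set'-style update."""
    obj = TuioObject(*msg)
    objects[obj.session_id] = obj

def on_alive(session_ids):
    """Remove objects no longer present on the table."""
    for sid in list(objects):
        if sid not in session_ids:
            del objects[sid]

on_set((1, 42, 0.25, 0.75, 1.57))  # marker 42 placed on the table
on_alive([1])                      # still present
on_alive([])                       # lifted off: tracking ends
```

A real client would also handle the protocol's frame-sequencing ("fseq") messages and velocity/acceleration fields, which are omitted here for brevity.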
In the last few years, many amateur and semi-professional projects outside academia and commerce have been started. Thanks to open-source tracking technologies (reacTIVision [24] ) and the ever-increasing computational power available to end consumers, the required infrastructure is now accessible to almost everyone. A standard PC, a webcam, and some handicraft work allow individuals to set up tangible systems with minimal programming and material effort. This opens doors to novel ways of perceiving human-computer interaction and to new forms of creativity for the public to experiment with.
It is difficult to keep track of the rapidly growing number of these systems and tools. While many of them seem only to exercise the available technologies, remaining limited to initial experiments with some basic ideas or merely reproducing existing systems, a few open out into novel interfaces and interactions and are deployed in public spaces or embedded in art installations. [25]
The Tangible Factory Planning [26] is a tangible table based on reacTIVision [24] that allows users to collaboratively plan and visualize production processes in combination with plans of new factory buildings; it was developed within a diploma thesis.
Another example of the many reacTIVision-based tabletops is the ImpulsBauhaus Interactive Table, [27] which was exhibited at the Bauhaus-University in Weimar to mark the 90th anniversary of the founding of the Bauhaus. Visitors could browse and explore the biographies, complex relations, and social networks among members of the movement.
Using principles derived from embodied cognition, cognitive load theory, and embodied design, TUIs have been shown to increase learning performance by offering multimodal feedback. [28] However, these benefits for learning require forms of interaction design that leave as much cognitive capacity as possible for learning.
A physical icon, or phicon, is the tangible computing equivalent of an icon in a traditional graphical user interface, or GUI. Phicons hold a reference to some digital object and thereby convey meaning. [29] [30] [31]
Physical icons were first used as tangible interfaces in the metaDesk project built in 1997 by Professor Hiroshi Ishii's tangible bits research group at MIT. [32] [33] The metaDesk consisted of a table whose surface showed a rear-projected video image. Placing a phicon on the table triggered sensors that altered the video projection. [34]
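The metaDesk-style coupling described above, where placing a phicon triggers sensors that alter the projection, can be sketched as a small event-handling loop. Everything below (the phicon names, the view dictionary, the handler) is a hypothetical stand-in for illustration; it is not the metaDesk implementation.

```python
# Illustrative sketch: each phicon id is bound to an action that
# mutates the projected view when the sensors report the phicon
# being placed at table coordinates (x, y).

PHICON_ACTIONS = {
    # recenter the projected map on the phicon's position
    "dome-building": lambda view, x, y: view.update({"center": (x, y)}),
    # a "lens" phicon zooms the projection wherever it is placed
    "lens": lambda view, x, y: view.update({"zoom": view["zoom"] * 2}),
}

view = {"center": (0.5, 0.5), "zoom": 1.0}  # state of the projection

def on_phicon_placed(phicon_id, x, y):
    """Sensor callback: translate a physical event into a view change."""
    action = PHICON_ACTIONS.get(phicon_id)
    if action:
        action(view, x, y)

on_phicon_placed("dome-building", 0.2, 0.8)  # recenters the projection
```

The essential point the sketch captures is that the phicon is both the referent and the control: its identity selects the operation, and its physical position parameterises it.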
In the industrial design field of human–computer interaction, a user interface (UI) is the space where interactions between humans and machines occur. The goal of this interaction is to allow effective operation and control of the machine from the human end, while the machine simultaneously feeds back information that aids the operators' decision-making process. Examples of this broad concept of user interfaces include the interactive aspects of computer operating systems, hand tools, heavy machinery operator controls and process controls. The design considerations applicable when creating user interfaces are related to, or involve such disciplines as, ergonomics and psychology.
Interaction design, often abbreviated as IxD, is "the practice of designing interactive digital products, environments, systems, and services." While interaction design has an interest in form, its main area of focus rests on behavior. Rather than analyzing how things are, interaction design synthesizes and imagines things as they could be. This element of interaction design is what characterizes IxD as a design field, as opposed to a science or engineering field.
In human–computer interaction, WIMP stands for "windows, icons, menus, pointer", denoting a style of interaction using these elements of the user interface. Other expansions are sometimes used, such as substituting "mouse" and "mice" for menus, or "pull-down menu" and "pointing" for pointer.
Gesture recognition is an area of research and development in computer science and language technology concerned with the recognition and interpretation of human gestures. A subdiscipline of computer vision, it employs mathematical algorithms to interpret gestures.
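One common family of such algorithms is template matching, popularised by "$1 recognizer"-style methods: a candidate stroke is resampled to a fixed number of points and compared to stored templates by average point-to-point distance. The version below is greatly simplified (index-based rather than arc-length resampling, no rotation or scale normalisation) and is only a sketch of the idea.

```python
# Minimal template-based gesture recognition sketch: the stroke
# closest (on average) to a stored template wins.
from math import dist

def resample(points, n=16):
    # pick n points evenly spaced by index -- a crude stand-in for
    # the arc-length resampling real recognizers use
    step = (len(points) - 1) / (n - 1)
    return [points[round(i * step)] for i in range(n)]

def score(stroke, template):
    # mean distance between corresponding resampled points
    a, b = resample(stroke), resample(template)
    return sum(dist(p, q) for p, q in zip(a, b)) / len(a)

def recognize(stroke, templates):
    # return the name of the closest stored template
    return min(templates, key=lambda name: score(stroke, templates[name]))

templates = {
    "horizontal": [(i, 0) for i in range(20)],
    "diagonal":   [(i, i) for i in range(20)],
}
print(recognize([(i, 1) for i in range(20)], templates))  # -> "horizontal"
```

Production recognizers add rotation/scale invariance and statistical models (e.g., hidden Markov models for dynamic gestures), but the compare-against-templates structure is the same.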
In computer science, interactive computing refers to software which accepts input from the user as it runs.
A voice-user interface (VUI) enables spoken human interaction with computers, using speech recognition to understand spoken commands and answer questions, and typically text to speech to play a reply. A voice command device is a device controlled with a voice user interface.
In artificial intelligence, an embodied agent, also sometimes referred to as an interface agent, is an intelligent agent that interacts with the environment through a physical body within that environment. Agents that are represented graphically with a body, for example a human or a cartoon animal, are also called embodied agents, although they have only virtual, not physical, embodiment. A branch of artificial intelligence focuses on empowering such agents to interact autonomously with human beings and the environment. Mobile robots are one example of physically embodied agents; Ananova and Microsoft Agent are examples of graphically embodied agents. Embodied conversational agents are embodied agents that are capable of engaging in conversation with one another and with humans employing the same verbal and nonverbal means that humans do.
Hiroshi Ishii is a Japanese computer scientist. He is a professor at the Massachusetts Institute of Technology. Ishii pioneered the Tangible User Interface in the field of Human-computer interaction with the paper "Tangible Bits: Towards Seamless Interfaces between People, Bits and Atoms", co-authored with his then PhD student Brygg Ullmer.
The Deutsch limit is an aphorism about the information density of visual programming languages originated by L. Peter Deutsch that states:
In computing, post-WIMP comprises work on user interfaces, mostly graphical user interfaces, which attempt to go beyond the paradigm of windows, icons, menus and a pointing device, i.e. WIMP interfaces.
A projection augmented model is an element sometimes employed in virtual reality systems. It consists of a physical three-dimensional model onto which a computer image is projected to create a realistic-looking object. Importantly, the physical model has the same geometric shape as the object that the PA model depicts.
A smart object is an object that enhances the interaction with not only people but also with other smart objects. Also known as smart connected products or smart connected things (SCoT), they are products, assets and other things embedded with processors, sensors, software and connectivity that allow data to be exchanged between the product and its environment, manufacturer, operator/user, and other products and systems. Connectivity also enables some capabilities of the product to exist outside the physical device, in what is known as the product cloud. The data collected from these products can be then analyzed to inform decision-making, enable operational efficiencies and continuously improve the performance of the product.
In computing, 3D interaction is a form of human-machine interaction where users are able to move and perform interaction in 3D space. Both human and machine process information where the physical position of elements in the 3D space is relevant.
Human–computer interaction (HCI) is research in the design and the use of computer technology, which focuses on the interfaces between people (users) and computers. HCI researchers observe the ways humans interact with computers and design technologies that allow humans to interact with computers in novel ways. A device that allows interaction between a human being and a computer is known as a "Human-computer Interface (HCI)".
Sonic interaction design is the study and exploitation of sound as one of the principal channels conveying information, meaning, and aesthetic/emotional qualities in interactive contexts. Sonic interaction design is at the intersection of interaction design and sound and music computing. If interaction design is about designing objects people interact with, and such interactions are facilitated by computational means, in sonic interaction design, sound is mediating interaction either as a display of processes or as an input medium.
Sergi Jordà is a Catalan innovator, installation artist, digital musician and Associate Professor at the Music Technology Group, Universitat Pompeu Fabra in Barcelona. He is best known for directing the team that invented the Reactable. He is also a trained physicist.
Wendy Elizabeth Mackay is a Canadian researcher specializing in human-computer interaction. She has served in all of the roles on the SIGCHI committee, including Chair. She is a member of the CHI Academy and a recipient of a European Research Council Advanced grant. She has been a visiting professor in Stanford University between 2010 and 2012, and received the ACM SIGCHI Lifetime Service Award in 2014.
Sharon Oviatt is an internationally recognized computer scientist, professor and researcher known for her work in the field of human–computer interaction on human-centered multimodal interface design and evaluation.
Responsive computer-aided design is an approach to computer-aided design (CAD) that utilizes real-world sensors and data to modify a three-dimensional (3D) computer model. The concept is related to cyber-physical systems through blurring of the virtual and physical worlds, however, applies specifically to the initial digital design of an object prior to production.
Bruno Zamborlin is an AI researcher, entrepreneur and artist based in London, working in the field of human-computer interaction. His work focuses on converting physical objects into touch-sensitive, interactive surfaces using vibration sensors and artificial intelligence. In 2013 he founded Mogees Limited a start-up to transform everyday objects into musical instruments and games using a vibration sensor and a mobile phone. With HyperSurfaces, he converts physical surfaces of any material, shape and form into data-enabled-interactive surfaces using a vibration sensor and a coin-sized chipset. As an artist, he has created art installations around the world, with his most recent work comprising a unique series of "sound furnitures" that was showcased at the Italian Pavilion of the Venice Biennale 2023. He regularly performed with UK-based electronic music duo Plaid. He is also honorary visiting research fellow at Goldsmiths, University of London.