Enactive interfaces

Figure: Enactive human-machine interface translating aspects of a knowledge base into modalities of perception for a human operator. The system's auditory, visual, and tactile presentations respond to tactile input from the operator, and that input in turn depends upon the auditory, visual, and tactile feedback from the system.

Enactive interfaces are interactive systems that allow the organization and transmission of knowledge obtained through action. Examples are interfaces that couple a human with a machine to do things usually done unaided, such as shaping a three-dimensional object through multimodal interactions with a database, [2] or using interactive video to allow a student to visually engage with mathematical concepts. [3] Enactive interface design can be approached through the idea of raising awareness of affordances, that is, optimizing the user's awareness of the actions available to them through the interface. [4] This optimization involves visibility, affordance, and feedback. [5] [6]

The enactive interface in the figure interprets manual input and provides a response in perceptual terms, in the form of images, sounds, and haptic (tactile) feedback. The system is called enactive because of the feedback loop in which the system's response is determined by the user's input, and the user's input is driven by the perceived system responses. [1]

Enactive interfaces are a new type of human-computer interface that expresses and transmits enactive knowledge by integrating different sensory aspects. Their driving concept is the fundamental role of motor action in storing and acquiring knowledge (action-driven interfaces). Enactive interfaces are thus designed to convey and understand the gestures of the user, in order to provide an adequate response in perceptual terms. They can be considered a new step in the development of human-computer interaction because they are characterized by a closed loop between the natural gestures of the user (the efferent component of the system) and the perceptual modalities activated (the afferent component). Enactive interfaces can be conceived to exploit this direct loop and the capability of recognizing complex gestures.
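
This closed loop can be made concrete with a short sketch. The following Python fragment is purely illustrative: every class, method, and parameter name is a hypothetical stand-in for whatever gesture tracker, knowledge base, and multimodal renderer a real enactive system would use.

```python
# Minimal sketch of the closed enactive loop described above; all names
# here are hypothetical stand-ins, not an actual device API.
import time

class GestureSensor:
    """Stands in for a tracker reading the user's hand pose (efferent side)."""
    def read(self):
        return (0.0, 0.0, 0.0)  # placeholder hand position

class KnowledgeBase:
    """Stands in for the database whose aspects are translated into percepts."""
    def update(self, gesture):
        return {"pose": gesture}  # placeholder system state

class MultimodalDisplay:
    """Stands in for coordinated visual, auditory, and haptic output (afferent side)."""
    def render(self, state):
        pass  # draw images, play sounds, apply haptic forces

def enactive_loop(sensor, model, display, steps, rate_hz=1000):
    """Close the loop: the system's response is determined by the user's input,
    and the next input is shaped by the feedback the user perceives."""
    for _ in range(steps):
        gesture = sensor.read()        # interpret the user's motor action
        state = model.update(gesture)  # the knowledge base reacts to that action
        display.render(state)          # perceptual feedback guides the next gesture
        time.sleep(1.0 / rate_hz)      # haptic loops commonly run near 1 kHz

enactive_loop(GestureSensor(), KnowledgeBase(), MultimodalDisplay(), steps=10)
```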

The development of such interfaces requires the creation of a common vision between different research areas such as computer vision, haptics, and sound processing, with more attention given to the motor-action aspect of interaction. Prototypical systems that introduce enactive interfaces include reactive robots: robots that remain in contact with the human hand (as current game console controllers such as the Wii Remote do) and are capable of interpreting human movements and guiding the human through the completion of a manipulation task.
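
One simple way such a reactive robot can guide the hand is with a spring-damper attraction toward a target pose, in the spirit of virtual-fixture guidance. The sketch below is an assumption-laden illustration, not the control law of any system cited here, and its gains are made-up example values.

```python
# Illustrative guidance law for a robot that stays in contact with the hand
# and nudges it toward a target; a generic spring-damper attraction, not the
# control law of any system cited in this article.
def guidance_force(hand_pos, hand_vel, target_pos, k=50.0, b=2.0):
    """Return a 3-D force (N) pulling the hand toward target_pos.

    k is a made-up stiffness gain (N/m); b is a made-up damping gain (N*s/m).
    """
    return tuple(k * (t - p) - b * v
                 for p, v, t in zip(hand_pos, hand_vel, target_pos))

# Example: a hand resting 0.5 m to the right of the target feels a pull back.
print(guidance_force((0.5, 0.0, 0.0), (0.0, 0.0, 0.0), (0.0, 0.0, 0.0)))
# prints (-25.0, 0.0, 0.0)
```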

Enactive knowledge

Enactive knowledge is information gained through perception–action interaction in the environment. In many respects, enactive knowledge is more natural than other forms, both in terms of the learning process and in the way it is applied in the world. Such knowledge is inherently multimodal because it requires the coordination of the various senses. Two key characteristics of enactive knowledge are that it is experiential: it relates to doing and depends on the user's experience; and that it is cultural: the way of doing is itself dependent upon social aspects, attitudes, values, practices, and legacy. [1]

Enactive interfaces are related to a fundamental interaction concept that is often not exploited by existing human-computer interface technologies. As the cognitive psychologist Jerome Bruner observed, traditional interaction with information mediated by a computer is mostly based on symbolic or iconic knowledge, not on enactive knowledge. [7] In the symbolic mode of learning, knowledge is stored as words, mathematical symbols, or other symbol systems; in the iconic mode, knowledge is stored in the form of visual images, such as diagrams and illustrations that can accompany verbal information. Enactive knowledge, on the other hand, is a form of knowledge based on active participation: knowing by doing and by living rather than by thinking. [8]

"Any domain of knowledge (or any problem within that domain of knowledge) can be represented in three ways: by a set of actions appropriate for achieving a certain result (enactive representation); by a set of summary images or graphics that stand for a concept without defining it fully (iconic representation); and by a set of symbolic or logical propositions drawn from a symbolic system that is governed by rules or laws for forming and transforming propositions (symbolic representation)" [9]

A particular form of knowledge is a skill, juggling being a simple example, and the acquisition of a skill is one area where enactive knowledge is evident. The sensorimotor and cognitive activities involved in acquiring skills are tabulated by the European FP6 SKILLS project. [10]

Multimodal interfaces

Multimodal interfaces are good candidates for the creation of enactive interfaces because of their coordinated use of haptics, sound, and vision. Such research is the main objective of the ENACTIVE Network of Excellence, a European consortium of more than 20 research laboratories joining their research efforts toward the definition, development, and exploitation of enactive interfaces.

ENACTIVE Network of Excellence

The research on enactive knowledge and enactive interfaces is the objective of the ENACTIVE Network of Excellence. A Network of Excellence is a European Community research instrument that provides funding for the integration of the research activities of different research laboratories and institutions. The ENACTIVE NoE started in 2004 with more than 20 partners, with the objective of creating a multidisciplinary research community to structure research on a new generation of human-computer interfaces called enactive interfaces. The aim of the NoE is not only research on enactive interfaces itself, but also the integration of the partners through a Virtual Laboratory and the dissemination of the Network's expertise and knowledge.

Since 2004, the partners, coordinated by the PERCRO laboratory, have advanced both the theoretical aspects of enaction, through seminars and the creation of a lexicon, and the technological aspects necessary for the creation of enactive interfaces. Each year, the status of the ENACTIVE NoE is presented at an international conference. [11]


Related Research Articles

The graphical user interface is a form of user interface that allows users to interact with electronic devices through graphical icons and audio indicators such as primary notation, instead of text-based UIs, typed command labels or text navigation. GUIs were introduced in reaction to the perceived steep learning curve of command-line interfaces (CLIs), which require commands to be typed on a computer keyboard.

User interface: means by which a user interacts with and controls a machine

In the industrial design field of human–computer interaction, a user interface (UI) is the space where interactions between humans and machines occur. The goal of this interaction is to allow effective operation and control of the machine from the human end, while the machine simultaneously feeds back information that aids the operators' decision-making process. Examples of this broad concept of user interfaces include the interactive aspects of computer operating systems, hand tools, heavy machinery operator controls, and process controls. The design considerations applicable when creating user interfaces are related to, or involve such disciplines as, ergonomics and psychology.

Interaction design, often abbreviated as IxD, is "the practice of designing interactive digital products, environments, systems, and services." Beyond the digital aspect, interaction design is also useful when creating physical (non-digital) products, exploring how a user might interact with it. Common topics of interaction design include design, human–computer interaction, and software development. While interaction design has an interest in form, its main area of focus rests on behavior. Rather than analyzing how things are, interaction design synthesizes and imagines things as they could be. This element of interaction design is what characterizes IxD as a design field as opposed to a science or engineering field.

WIMP (computing): style of human-computer interaction

In human–computer interaction, WIMP stands for "windows, icons, menus, pointer", denoting a style of interaction using these elements of the user interface. Other expansions are sometimes used, such as substituting "mouse" and "mice" for menus, or "pull-down menu" and "pointing" for pointer.

Interactivity: interaction between users and computers

Across the many fields concerned with interactivity, including information science, computer science, human-computer interaction, communication, and industrial design, there is little agreement over the meaning of the term "interactivity", but most definitions are related to interaction between users and computers and other machines through a user interface. Interactivity can however also refer to interaction between people. It nevertheless usually refers to interaction between people and computers – and sometimes to interaction between computers – through software, hardware, and networks.

Visualization (graphics): set of techniques for creating images, diagrams, or animations to communicate a message

Visualization or visualisation is any technique for creating images, diagrams, or animations to communicate a message. Visualization through visual imagery has been an effective way to communicate both abstract and concrete ideas since the dawn of humanity. Examples from history include cave paintings, Egyptian hieroglyphs, Greek geometry, and Leonardo da Vinci's revolutionary methods of technical drawing for engineering and scientific purposes.

The following outline is provided as an overview of and topical guide to human–computer interaction:

Human-centered computing (HCC) studies the design, development, and deployment of mixed-initiative human-computer systems. It emerged from the convergence of multiple disciplines that are concerned both with understanding human beings and with the design of computational artifacts. Human-centered computing is closely related to human-computer interaction and information science. Human-centered computing is usually concerned with systems and practices of technology use, while human-computer interaction is more focused on ergonomics and the usability of computing artifacts, and information science is focused on practices surrounding the collection, manipulation, and use of information.

User interface design: planned operator–machine interaction

User interface (UI) design or user interface engineering is the design of user interfaces for machines and software, such as computers, home appliances, mobile devices, and other electronic devices, with the focus on maximizing usability and the user experience. In computer or software design, user interface (UI) design is the process of building interfaces that are aesthetically pleasing. Designers aim to build interfaces that are easy and pleasant to use. UI design refers to graphical user interfaces and other forms of interface design. The goal of user interface design is to make the user's interaction as simple and efficient as possible, in terms of accomplishing user goals.

Ecological interface design (EID) is an approach to interface design that was introduced specifically for complex sociotechnical, real-time, and dynamic systems. It has been applied in a variety of domains including process control, aviation, and medicine.

In user interface design, an interface metaphor is a set of user interface visuals, actions and procedures that exploit specific knowledge that users already have of other domains. The purpose of the interface metaphor is to give the user instantaneous knowledge about how to interact with the user interface. They are designed to be similar to physical entities but also have their own properties. They can be based on an activity, an object (skeuomorph), or a combination of both, and work with users' familiar knowledge to help them understand 'the unfamiliar', placing it in terms the user may better understand.

Enactivism is a position in cognitive science that argues that cognition arises through a dynamic interaction between an acting organism and its environment. It claims that the environment of an organism is brought about, or enacted, by the active exercise of that organism's sensorimotor processes. "The key point, then, is that the species brings forth and specifies its own domain of problems ...this domain does not exist "out there" in an environment that acts as a landing pad for organisms that somehow drop or parachute into the world. Instead, living beings and their environments stand in relation to each other through mutual specification or codetermination". "Organisms do not passively receive information from their environments, which they then translate into internal representations. Natural cognitive systems...participate in the generation of meaning ...engaging in transformational and not merely informational interactions: they enact a world." These authors suggest that the increasing emphasis upon enactive terminology presages a new era in thinking about cognitive science. How the actions involved in enactivism relate to age-old questions about free will remains a topic of active debate.

A projection augmented model is an element sometimes employed in virtual reality systems. It consists of a physical three-dimensional model onto which a computer image is projected to create a realistic looking object. Importantly, the physical model is the same geometric shape as the object that the PA model depicts.

Hands-on computing is a branch of human-computer interaction research which focuses on computer interfaces that respond to human touch or expression, allowing the machine and the user to interact physically. Hands-on computing can make complicated computer tasks more natural to users by attempting to respond to motions and interactions that are natural to human behavior. Thus hands-on computing is a component of user-centered design, focusing on how users physically respond to virtual environments.

In computing, 3D interaction is a form of human-machine interaction where users are able to move and perform interaction in 3D space. Both human and machine process information where the physical position of elements in the 3D space is relevant.

Roberta "Bobby Lou" Klatzky is a Professor of Psychology at Carnegie Mellon University (CMU). She specializes in human perception and cognition, particularly relating to visual and non-visual perception and representation of space and geometric shapes. Klatzky received a B.A. in mathematics from the University of Michigan in 1968 and a Ph.D in psychology from Stanford University in 1972. She has done extensive research on human haptic and visual object recognition, navigation under visual and nonvisual guidance, and perceptually guided action.

Human–computer interaction: academic discipline studying the relationship between computer systems and their users

Human–computer interaction (HCI) is research in the design and the use of computer technology, which focuses on the interfaces between people (users) and computers. HCI researchers observe the ways humans interact with computers and design technologies that allow humans to interact with computers in novel ways.

Sonic interaction design is the study and exploitation of sound as one of the principal channels conveying information, meaning, and aesthetic/emotional qualities in interactive contexts. Sonic interaction design is at the intersection of interaction design and sound and music computing. If interaction design is about designing objects people interact with, and such interactions are facilitated by computational means, in sonic interaction design, sound is mediating interaction either as a display of processes or as an input medium.

Visuo-haptic mixed reality (VHMR) is a branch of mixed reality that merges visual and tactile perceptions of both virtual and real objects with a collocated approach. The first known system to overlay augmented haptic perceptions on direct views of the real world is the Virtual Fixtures system developed in 1992 at the US Air Force Research Laboratories. Like any emerging technology, the development of VHMR systems is accompanied by challenges that, in this case, concern enhancing multimodal human perception with the user-computer interfaces and interaction devices currently available. Visuo-haptic mixed reality consists of adding to a real scene the ability to see and touch virtual objects. It requires see-through display technology for visually mixing real and virtual objects, and haptic devices to provide haptic stimuli to the user while interacting with the virtual objects. A VHMR setup allows the user to perceive visual and kinesthetic stimuli in a co-located manner, i.e., the user can see and touch virtual objects at the same spatial location. This setup overcomes the limits of the traditional one, i.e., a separate display and haptic device, because the visuo-haptic co-location of the user's hand and a virtual tool improves the sensory integration of multimodal cues and makes the interaction more natural. But it also brings technological challenges in improving the naturalness of the perceptual experience.

Ken Hinckley is an American computer scientist and inventor. He is a senior principal research manager at Microsoft Research. He is known for his research in human-computer interaction, specifically on sensing techniques, pen computing, and cross-device interaction.

References

  1. Monica Bordegoni (2010). "§4.4.2: PDP [Product Development Process] scenario based on user-centered design". In Shuichi Fukuda (ed.). Emotional Engineering: Service Development. Springer. p. 76. ISBN 9781849964234.
  2. Monica Bordegoni (2010). "§4.5.2: Design tools based upon enactive interfaces". In Shuichi Fukuda (ed.). Emotional Engineering: Service Development. Springer. pp. 78 ff. ISBN 9781849964234.
  3. D Tall; D Smith; C Piez (2008). "Enactive control". In Mary Kathleen Heid; Glendon W Blume (eds.). Research on Technology and the Teaching and Learning of Mathematics. Information Age Publishing Inc. pp. 213 ff. ISBN 9781931576192.
  4. TA Stoffregen; BG Bardy; B Mantel (2006). "Affordances in the design of enactive systems" (PDF). Virtual Reality. 10 (1): 4–10. doi:10.1007/s10055-006-0025-7. S2CID 8334591.
  5. Debbie Stone; Caroline Jarrett; Mark Woodroffe; Shailey Minocha (2005). "Chapter 5, §3: Three principles from experience: visibility, affordance, and feedback". User Interface Design and Evaluation. Morgan Kaufmann. pp. 97 ff. ISBN 9780080520322.
  6. Elena Zudilova-Seinstra; Tony Adriaansen; Robert van Liere (2008). "Perceptual and design principles for effective interactive visualizations". Trends in Interactive Visualization: State-of-the-Art Survey. Springer. pp. 166 ff. ISBN 9781848002692.
  7. Bruner's list of six characteristics of iconic knowledge is found in Phillip T. Slee; Marilyn Campbell; Barbara Spears (2012). "Iconic representation". Child, Adolescent and Family Development. Cambridge University Press. p. 176. ISBN 9781107402164.
  8. Phillip T. Slee; Marilyn Campbell; Barbara Spears (2012). "Enactive representation". Child, Adolescent and Family Development. Cambridge University Press. p. 176. ISBN 9781107402164.
  9. Jerome Seymour Bruner (1966). Toward a Theory of Instruction (PDF). Harvard University Press. p. 44. ISBN 9780674897014. Archived from the original (PDF) on 2014-05-02. Retrieved 2014-05-16. Quoted in J Bruner (2004). "Chapter 10: Sustaining mathematical activity". In John Mason; Sue Johnston-Wilder (eds.). Fundamental Constructs in Mathematics Education (Paperback ed.). Taylor & Francis. p. 260. ISBN 978-0415326988.
  10. B Bardy; D Delignières; J Lagarde; D Mottet; G Zelic (July 2010). "An enactive approach to perception-action and skill acquisition in virtual reality environments" (PDF). Third International Conference on Applied Human Factors and Ergonomics.
  11. "Research on haptic interfaces and virtual environments". PERCRO Perceptual Robotics Laboratory. Retrieved April 30, 2014.
