Human–computer interaction

Human–computer interaction (HCI) researches the design and use of computer technology, focusing on the interfaces between people (users) and computers. Researchers in the field of HCI both observe the ways in which humans interact with computers and design technologies that let humans interact with computers in novel ways. As a field of research, human–computer interaction is situated at the intersection of computer science, behavioral sciences, design, media studies, and several other fields of study. The term was popularized by Stuart K. Card, Allen Newell, and Thomas P. Moran in their seminal 1983 book, The Psychology of Human–Computer Interaction, although the authors first used the term in 1980 [1] and the first known use was in 1975. [2] The term connotes that, unlike other tools with only limited uses (such as a hammer, useful for driving nails but not much else), a computer has many uses, and its use takes place as an open-ended dialog between the user and the computer. The notion of dialog likens human–computer interaction to human-to-human interaction, an analogy which is crucial to theoretical considerations in the field. [3] [4]

A user is a person who utilizes a computer or network service. Users of computer systems and software products generally lack the technical expertise required to fully understand how they work. Power users use advanced features of programs, though they are not necessarily capable of computer programming or system administration.

Computer science is the study of processes that interact with data and that can be represented as data in the form of programs. It enables the use of algorithms to manipulate, store, and communicate digital information. A computer scientist studies the theory of computation and the practice of designing software systems.

Design can have different connotations in different fields of application, but there are two basic meanings of design: as a verb and as a noun.

Introduction

Humans interact with computers in many ways; the interface between humans and computers is crucial to facilitating this interaction. Desktop applications, internet browsers, handheld computers, and computer kiosks make use of the prevalent graphical user interfaces (GUI) of today. [5] Voice user interfaces (VUI) are used for speech recognition and synthesizing systems, and emerging multi-modal and gestural interfaces allow humans to engage with embodied character agents in a way that cannot be achieved with other interface paradigms. Growth in the field of human–computer interaction has been in the quality of interaction and in the branching of its research over time. Instead of designing regular interfaces, the different research branches have focused on multimodality [6] rather than unimodality, on intelligent adaptive interfaces rather than command/action-based ones, and on active rather than passive interfaces.[ citation needed ]

Interaction kind of action that occurs as two or more objects have an effect upon one another

Interaction is a kind of action that occurs as two or more objects have an effect upon one another. The idea of a two-way effect is essential to the concept of interaction, as opposed to a one-way causal effect. A closely related term is interconnectivity, which deals with the interactions of interactions within systems: combinations of many simple interactions can lead to surprising emergent phenomena. Interaction has different tailored meanings in various sciences.

The graphical user interface is a form of user interface that allows users to interact with electronic devices through graphical icons and visual indicators such as secondary notation, instead of text-based user interfaces, typed command labels or text navigation. GUIs were introduced in reaction to the perceived steep learning curve of command-line interfaces (CLIs), which require commands to be typed on a computer keyboard.

A voice-user interface (VUI) makes spoken human interaction with computers possible, using speech recognition to understand spoken commands and questions, and typically text to speech to play a reply. A voice command device (VCD) is a device controlled with a voice user interface.
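The core VUI loop described above can be sketched in a few lines. This is an illustrative toy, not drawn from the article: the speech-recognition and text-to-speech stages are replaced by plain strings, and the command table is hypothetical; a real VUI would plug an ASR engine and a TTS engine into the same structure.

```python
# Toy sketch of a voice-user-interface command loop. The speech
# recognition and text-to-speech stages are stubbed out with plain
# strings; a real system would replace them with ASR and TTS engines.

COMMANDS = {
    "what time is it": lambda: "It is 10:30.",
    "turn on the lights": lambda: "Lights on.",
}

def handle_utterance(text: str) -> str:
    """Map a recognized utterance to the reply the system would speak."""
    action = COMMANDS.get(text.strip().lower())
    if action is None:
        return "Sorry, I did not understand that."
    return action()
```

For example, `handle_utterance("Turn on the lights")` returns "Lights on.", while an unrecognized utterance falls through to the error reply.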

The Association for Computing Machinery (ACM) defines human–computer interaction as "a discipline concerned with the design, evaluation and implementation of interactive computing systems for human use and with the study of major phenomena surrounding them". [5] An important facet of HCI is user satisfaction (or simply End User Computing Satisfaction). "Because human–computer interaction studies a human and a machine in communication, it draws from supporting knowledge on both the machine and the human side. On the machine side, techniques in computer graphics, operating systems, programming languages, and development environments are relevant. On the human side, communication theory, graphic and industrial design disciplines, linguistics, social sciences, cognitive psychology, social psychology, and human factors such as computer user satisfaction are relevant. And, of course, engineering and design methods are relevant." [5] Due to the multidisciplinary nature of HCI, people with different backgrounds contribute to its success. HCI is also sometimes termed human–machine interaction (HMI), man-machine interaction (MMI) or computer-human interaction (CHI).

The Association for Computing Machinery (ACM) is an international learned society for computing. It was founded in 1947, and is the world's largest scientific and educational computing society. The ACM is a non-profit professional membership group, with nearly 100,000 members as of 2019. Its headquarters are in New York City.

Computer graphics are pictures and films created using computers. Usually, the term refers to computer-generated image data created with the help of specialized graphical hardware and software. It is a vast and recently developed area of computer science. The phrase was coined in 1960, by computer graphics researchers Verne Hudson and William Fetter of Boeing. It is often abbreviated as CG, though sometimes erroneously referred to as computer-generated imagery (CGI).

An operating system (OS) is system software that manages computer hardware and software resources and provides common services for computer programs.

Poorly designed human-machine interfaces can lead to many unexpected problems. A classic example is the Three Mile Island accident, a nuclear meltdown, where investigations concluded that the design of the human-machine interface was at least partly responsible for the disaster. [7] [8] [9] Similarly, accidents in aviation have resulted from manufacturers' decisions to use non-standard flight instrument or throttle quadrant layouts: even though the new designs were proposed to be superior in basic human-machine interaction, pilots had already internalized the "standard" layout, and thus the conceptually good idea actually had undesirable results.

The Three Mile Island accident was the partial meltdown of reactor number 2 of Three Mile Island Nuclear Generating Station (TMI-2) in Dauphin County, Pennsylvania, near Harrisburg, and the subsequent radiation leak, which occurred on March 28, 1979. It was the most significant accident in U.S. commercial nuclear power plant history. The incident was rated a five on the seven-point International Nuclear Event Scale: Accident with wider consequences.

Goals for computers

Human–computer interaction studies the ways in which humans make, or do not make, use of computational artifacts, systems, and infrastructures. In doing so, much of the research in the field seeks to improve human–computer interaction by improving the usability of computer interfaces. [10] How usability should be precisely understood, how it relates to other social and cultural values, and when it is, and is not, a desirable property of computer interfaces are increasingly debated questions. [11] [12]

Much of the research in the field of human–computer interaction takes an interest in:

In computer science, a library is a collection of non-volatile resources used by computer programs, often for software development. These may include configuration data, documentation, help data, message templates, pre-written code and subroutines, classes, values or type specifications. In IBM's OS/360 and its successors they are referred to as partitioned data sets.

In psychology, cognitivism is a theoretical framework for understanding the mind that gained credence in the 1950s. The movement was a response to behaviorism, which cognitivists said neglected to explain cognition. Cognitive psychology derived its name from the Latin cognoscere, referring to knowing and information, thus cognitive psychology is an information-processing psychology derived in part from earlier traditions of the investigation of thought and problem solving.

Ethnomethodology

Ethnomethodology is the study of methods people use for understanding and producing the social order in which they live. It generally seeks to provide an alternative to mainstream sociological approaches. In its most radical form, it poses a challenge to the social sciences as a whole. On the other hand, its early investigations led to the founding of conversation analysis, which has found its own place as an accepted discipline within the academy. According to Psathas, it is possible to distinguish five major approaches within the ethnomethodological family of disciplines.

Visions of what researchers in the field seek to achieve vary. When pursuing a cognitivist perspective, researchers of HCI may seek to align computer interfaces with the mental model that humans have of their activities. When pursuing a post-cognitivist perspective, researchers of HCI may seek to align computer interfaces with existing social practices or existing sociocultural values.

Researchers in HCI are interested in developing design methodologies, experimenting with devices, prototyping software and hardware systems, exploring interaction paradigms, and developing models and theories of interaction.

HCI differs from human factors and ergonomics as HCI focuses more on users working specifically with computers, rather than other kinds of machines or designed artifacts. There is also a focus in HCI on how to implement the computer software and hardware mechanisms to support human–computer interaction. Thus, human factors is a broader term; HCI could be described as the human factors of computers – although some experts try to differentiate these areas.

HCI also differs from human factors in that there is less of a focus on repetitive work-oriented tasks and procedures, and much less emphasis on physical stress and the physical form or industrial design of the user interface, such as keyboards and mouse devices.

Three areas of study have substantial overlap with HCI even as the focus of inquiry shifts. Personal information management (PIM) studies how people acquire and use personal information (computer-based and other) to complete tasks. In computer-supported cooperative work (CSCW), emphasis is placed on the use of computing systems in support of collaborative work. The principles of human interaction management (HIM) extend the scope of CSCW to an organizational level and can be implemented without the use of computers.

Design

Principles

The user interacts directly with hardware for human input and output, such as displays, e.g. through a graphical user interface. The user interacts with the computer over this software interface using the given input and output (I/O) hardware. Software and hardware must be matched so that the processing of user input is fast enough and the latency of computer output is not disruptive to the workflow.

When evaluating a current user interface, or designing a new user interface, it is important to keep in mind the following experimental design principles:

Repeat the iterative design process until a sensible, user-friendly interface is created. [15]
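The iterative design process above can be sketched as a simple loop. This is a hypothetical illustration: the `evaluate` and `refine` callables stand in for real user testing and redesign work, and the usability score and target threshold are invented for the example.

```python
# Minimal sketch of iterative design: evaluate a prototype, refine it,
# and repeat until a usability target is met or rounds run out. The
# evaluate/refine functions are stand-ins for real user testing and
# redesign work.

def iterate_design(evaluate, refine, prototype, target=0.9, max_rounds=10):
    """Refine `prototype` until `evaluate` meets `target`; return the
    final prototype and the number of rounds used."""
    for round_no in range(1, max_rounds + 1):
        score = evaluate(prototype)
        if score >= target:
            return prototype, round_no
        prototype = refine(prototype)
    return prototype, max_rounds
```

With toy stand-ins, e.g. `iterate_design(lambda p: p / 10, lambda p: p + 2, prototype=3)`, the loop converges in four rounds, returning `(9, 4)`.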

Methodologies

A number of diverse methodologies outlining techniques for human–computer interaction design have emerged since the rise of the field in the 1980s. Most design methodologies stem from a model for how users, designers, and technical systems interact. Early methodologies, for example, treated users' cognitive processes as predictable and quantifiable and encouraged design practitioners to look to cognitive science results in areas such as memory and attention when designing user interfaces. Modern models tend to focus on a constant feedback and conversation between users, designers, and engineers and push for technical systems to be wrapped around the types of experiences users want to have, rather than wrapping user experience around a completed system.

Display designs

Displays are human-made artifacts designed to support the perception of relevant system variables and to facilitate further processing of that information. Before a display is designed, the task that the display is intended to support must be defined (e.g. navigating, controlling, decision making, learning, entertaining, etc.). A user or operator must be able to process whatever information a system generates and displays; therefore, the information must be displayed according to principles in a manner that will support perception, situation awareness, and understanding.

Thirteen principles of display design

Christopher Wickens et al. defined 13 principles of display design in their book An Introduction to Human Factors Engineering. [19]

These principles of human perception and information processing can be utilized to create an effective display design. A reduction in errors, a reduction in required training time, an increase in efficiency, and an increase in user satisfaction are a few of the many potential benefits that can be achieved through utilization of these principles.

Certain principles may not be applicable to different displays or situations. Some principles may seem to be conflicting, and there is no simple solution to say that one principle is more important than another. The principles may be tailored to a specific design or situation. Striking a functional balance among the principles is critical for an effective design. [20]

Perceptual principles

1. Make displays legible (or audible). A display's legibility is critical and necessary for designing a usable display. If the characters or objects being displayed are not discernible, the operator cannot effectively make use of them.

2. Avoid absolute judgment limits. Do not ask the user to determine the level of a variable on the basis of a single sensory variable (e.g. colour, size, loudness), because such variables can take many possible levels and absolute judgments along a single dimension are error-prone.

3. Top-down processing. Signals are perceived and interpreted in accordance with what is expected based on a user's experience. If a signal is presented contrary to the user's expectation, more physical evidence of that signal may need to be presented to ensure that it is understood correctly.

4. Redundancy gain. If a signal is presented more than once, it is more likely that it will be understood correctly. This can be done by presenting the signal in alternative physical forms (e.g. colour and shape, voice and print, etc.), as redundancy does not imply repetition. A traffic light is a good example of redundancy, as colour and position are redundant.

5. Similarity causes confusion: Use distinguishable elements. Signals that appear to be similar will likely be confused. The ratio of similar features to different features causes signals to be similar. For example, A423B9 is more similar to A423B8 than 92 is to 93. Unnecessarily similar features should be removed and dissimilar features should be highlighted.
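The A423B9/A423B8 example can be made concrete by computing the fraction of positions at which two equal-length signals share a feature. This metric is an illustration invented for this sketch, not one defined in the text:

```python
def feature_overlap(a: str, b: str) -> float:
    """Fraction of positions at which two equal-length signals match.
    Higher values mean the signals are more easily confused."""
    if len(a) != len(b):
        raise ValueError("signals must be the same length")
    matches = sum(1 for x, y in zip(a, b) if x == y)
    return matches / len(a)
```

Here `feature_overlap("A423B9", "A423B8")` is 5/6, while `feature_overlap("92", "93")` is only 1/2, matching the article's claim that the first pair is more confusable than the second.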

Mental model principles

6. Principle of pictorial realism. A display should look like the variable that it represents (e.g. high temperature on a thermometer shown as a higher vertical level). If there are multiple elements, they can be configured in a manner that looks like it would in the represented environment.

7. Principle of the moving part. Moving elements should move in a pattern and direction compatible with the user's mental model of how it actually moves in the system. For example, the moving element on an altimeter should move upward with increasing altitude.

Principles based on attention

8. Minimizing information access cost or interaction cost. When the user's attention is diverted from one location to another to access necessary information, there is an associated cost in time or effort. A display design should minimize this cost by allowing for frequently accessed sources to be located at the nearest possible position. However, adequate legibility should not be sacrificed to reduce this cost.

9. Proximity compatibility principle. Divided attention between two information sources may be necessary for the completion of one task. These sources must be mentally integrated and are defined to have close mental proximity. Information access costs should be low, which can be achieved in many ways (e.g. proximity, linkage by common colours, patterns, shapes, etc.). However, close display proximity can be harmful by causing too much clutter.

10. Principle of multiple resources. A user can more easily process information across different resources. For example, visual and auditory information can be presented simultaneously rather than presenting all visual or all auditory information.

Memory principles

11. Replace memory with visual information: knowledge in the world. A user should not need to retain important information solely in working memory or retrieve it from long-term memory. A menu, checklist, or another display can aid the user by easing the use of their memory. However, the use of memory may sometimes benefit the user by eliminating the need to reference some type of knowledge in the world (e.g., an expert computer operator would rather use direct commands from memory than refer to a manual). The use of knowledge in a user's head and knowledge in the world must be balanced for an effective design.

12. Principle of predictive aiding. Proactive actions are usually more effective than reactive actions. A display should attempt to eliminate resource-demanding cognitive tasks and replace them with simpler perceptual tasks to reduce the use of the user's mental resources. This will allow the user to focus on current conditions, and to consider possible future conditions. An example of a predictive aid is a road sign displaying the distance to a certain destination.

13. Principle of consistency. Old habits from other displays will easily transfer to support processing of new displays if they are designed consistently. A user's long-term memory will trigger actions that are expected to be appropriate. A design must accept this fact and utilize consistency among different displays.

Human–computer interface

The human–computer interface can be described as the point of communication between the human user and the computer. The flow of information between the human and the computer is defined as the loop of interaction, which has several aspects.

Current research

Topics in human-computer interaction include:

User customization

End-user development studies how ordinary users could routinely tailor applications to their own needs and invent new applications based on their understanding of their own domains. With their deeper knowledge, users could increasingly become important sources of new applications, at the expense of generic programmers with systems expertise but low domain expertise.

Embedded computation

Computation is passing beyond computers into every object for which uses can be found. Embedded systems make the environment alive with little computations and automated processes, from computerized cooking appliances to lighting and plumbing fixtures to window blinds to automobile braking systems to greeting cards. The expected difference in the future is the addition of networked communications that will allow many of these embedded computations to coordinate with each other and with the user. Human interfaces to these embedded devices will in many cases be disparate from those appropriate to workstations.

Augmented reality

Augmented reality refers to the notion of layering relevant information into our vision of the world. Existing projects show real-time statistics to users performing difficult tasks, such as manufacturing. Future work might include augmenting our social interactions by providing additional information about those we converse with.

Social computing

In recent years, there has been an explosion of social science research focusing on interactions as the unit of analysis. Much of this research draws from psychology, social psychology, and sociology. For example, one study found that people expected a computer with a man's name to cost more than a machine with a woman's name. [21] Other research finds that individuals perceive their interactions with computers more positively than their interactions with other humans, despite behaving the same way towards these machines. [22]

Knowledge-driven human–computer interaction

In human–computer interaction, a semantic gap usually exists between the human's and the computer's understandings of each other's behavior. An ontology, as a formal representation of domain-specific knowledge, can be used to address this problem by resolving the semantic ambiguities between the two parties. [23]
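As a hypothetical sketch of that idea, an ontology can resolve an ambiguous user term by mapping surface words onto domain concepts shared by both parties. The terms, senses, and dictionary representation below are invented for the example; a real system would use a formal ontology language such as OWL rather than a lookup table.

```python
# Toy "ontology": maps an ambiguous surface term to a precise domain
# concept, given the active domain context. A real system would use a
# formal ontology (e.g. OWL), not a dictionary.

ONTOLOGY = {
    "bank": {"finance": "financial_institution", "geography": "river_bank"},
}

def resolve(term: str, context: str) -> str:
    """Disambiguate `term` using the active domain `context`; fall back
    to the literal term when no sense is known."""
    senses = ONTOLOGY.get(term, {})
    return senses.get(context, term)
```

For instance, `resolve("bank", "finance")` yields `financial_institution`, while the same word in a geography context yields `river_bank`: the shared concept bridges the semantic gap.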

Emotions and human-computer interaction

In the interaction of humans and computers, research has studied how computers can detect, process, and react to human emotions in order to develop emotionally intelligent information systems. Researchers have suggested several 'affect-detection channels'. [24] The potential of detecting human emotions in an automated and digital fashion lies in improvements to the effectiveness of human-computer interaction. [25] The influence of emotions in human-computer interaction has been studied in fields such as financial decision making using ECG [26] [27] and organisational knowledge sharing using eye tracking and face readers as affect-detection channels. [28] In these fields it has been shown that affect-detection channels have the potential to detect human emotions and that information systems can incorporate the data obtained from affect-detection channels to improve decision models.

Brain–computer interfaces

Factors of change

Traditionally, computer use was modeled as a human–computer dyad in which the two were connected by a narrow explicit communication channel, such as text-based terminals. Much work has been done to make the interaction between a computing system and a human more reflective of the multidimensional nature of everyday communication. Because of potential issues, human–computer interaction shifted focus beyond the interface to respond to observations as articulated by D. Engelbart: "If ease of use was the only valid criterion, people would stick to tricycles and never try bicycles." [29]

The means by which humans interact with computers continue to evolve rapidly, shaped by ongoing developments in computing.

As of 2010, the future for HCI was expected [30] to include several emerging characteristics.

Scientific conferences

One of the main conferences for new research in human–computer interaction is the annually held Association for Computing Machinery (ACM) Conference on Human Factors in Computing Systems, usually referred to by its short name, CHI (pronounced kai or khai). CHI is organized by the ACM Special Interest Group on Computer–Human Interaction (SIGCHI). CHI is a large conference, with thousands of attendees, and is quite broad in scope. It is attended by academics, practitioners, and industry professionals, with company sponsors such as Google, Microsoft, and PayPal.

There are also dozens of other smaller, regional or specialized HCI-related conferences held around the world each year. [31]

See also

Footnotes

  1. Card, Stuart K.; Thomas P. Moran; Allen Newell (July 1980). "The keystroke-level model for user performance time with interactive systems". Communications of the ACM. 23 (7): 396–410. doi:10.1145/358886.358895.
  2. Carlisle, James H. (June 1976). "Evaluating the impact of office automation on top management communication". Proceedings of the June 7-10, 1976, national computer conference and exposition on - AFIPS '76. Proceedings of the June 7–10, 1976, National Computer Conference and Exposition. pp. 611–616. doi:10.1145/1499799.1499885. Use of 'human–computer interaction' appears in references
  3. Suchman, Lucy (1987). Plans and Situated Action. The Problem of Human–Machine Communication. New York, Cambridge: Cambridge University Press. ISBN   9780521337397 . Retrieved 7 March 2015.
  4. Dourish, Paul (2001). Where the Action Is: The Foundations of Embodied Interaction. Cambridge, MA: MIT Press. ISBN   9780262541787.
  5. Hewett; Baecker; Card; Carey; Gasen; Mantei; Perlman; Strong; Verplank. "ACM SIGCHI Curricula for Human–Computer Interaction". ACM SIGCHI. Retrieved 15 July 2014.
  6. "Multimodality", Wikipedia, 2019-01-02, retrieved 2019-01-03
  7. Ergoweb. "What is Cognitive Ergonomics?". Ergoweb.com. Retrieved August 29, 2011.
  8. "NRC: Backgrounder on the Three Mile Island Accident". Nrc.gov. Retrieved August 29, 2011.
  9. http://www.threemileisland.org/downloads/188.pdf
  10. Grudin, Jonathan (1992). "Utility and usability: research issues and development contexts". Interacting with Computers. 4 (2): 209–217. doi:10.1016/0953-5438(92)90005-z . Retrieved 7 March 2015.
  11. Chalmers, Matthew; Galani, Areti (2004). Seamful interweaving: heterogeneity in the theory and design of interactive systems. Proceedings of the 5th Conference on Designing Interactive Systems: Processes, Practices, Methods, and Techniques. pp. 243–252. doi:10.1145/1013115.1013149. ISBN   978-1581137873 . Retrieved 7 March 2015.
  12. Barkhuus, Louise; Polichar, Valerie E. (2011). "Empowerment through seamfulness: smart phones in everyday life". Personal and Ubiquitous Computing. 15 (6): 629–639. doi:10.1007/s00779-010-0342-4.
  13. Rogers, Yvonne (2012). "HCI Theory: Classical, Modern, and Contemporary". Synthesis Lectures on Human-Centered Informatics. 5 (2): 1–129. doi:10.2200/S00418ED1V01Y201205HCI014.
  14. Sengers, Phoebe; Boehner, Kirsten; David, Shay; Joseph, Kaye (2005). Reflective Design. CC '05 Proceedings of the 4th Decennial Conference on Critical Computing: Between Sense and Sensibility. 5. pp. 49–58. doi:10.1145/1094562.1094569. ISBN   978-1595932037 . Retrieved 7 March 2015.
  15. Green, Paul (2008). Iterative Design. Lecture presented in Industrial and Operations Engineering 436 (Human Factors in Computer Systems, University of Michigan, Ann Arbor, MI, February 4, 2008.
  16. Kaptelinin, Victor (2012): Activity Theory. In: Soegaard, Mads and Dam, Rikke Friis (eds.). "Encyclopedia of Human–Computer Interaction". The Interaction-Design.org Foundation. Available online at http://www.interaction-design.org/encyclopedia/activity_theory.html
  17. "The Case for HCI Design Patterns".
  18. Friedman, B., Kahn Jr, P. H., Borning, A., & Kahn, P. H. (2006). Value Sensitive Design and information systems. Human–Computer Interaction and Management Information Systems: Foundations. ME Sharpe, New York, 348–372.
  19. Wickens, Christopher D., John D. Lee, Yili Liu, and Sallie E. Gordon Becker. An Introduction to Human Factors Engineering. Second ed. Upper Saddle River, NJ: Pearson Prentice Hall, 2004. 185–193.
  20. Brown, C. Marlin. Human–Computer Interface Design Guidelines. Intellect Books, 1998. 2–3.
  21. Posard, Marek (2014). "Status processes in human–computer interactions: Does gender matter?". Computers in Human Behavior. 37 (37): 189–195. doi:10.1016/j.chb.2014.04.025.
  22. Posard, Marek; Rinderknecht, R. Gordon (2015). "Do people like working with computers more than human beings?". Computers in Human Behavior. 51: 232–238. doi:10.1016/j.chb.2015.04.057.
  23. Dong, Hai; Hussain, Farookh; Elizabeth, Chang (2010). "A human-centered semantic service platform for the digital ecosystems environment". World Wide Web. 13 (1–2): 75–103. doi:10.1007/s11280-009-0081-5.
  24. Calvo, R.; D'Mello, S. (2010). "Affect detection: An interdisciplinary review of models, methods, and their applications". IEEE Transactions on Affective Computing. 1 (1).
  25. Cowie, R.; Douglas-Cowie, E.; Tsapatsoulis, N.; Votsis, G.; Kollias, S.; Fellenz, W.; Taylor, J. G. (2001). "Emotion recognition in human-computer interaction". IEEE Signal Processing Magazine. 18 (1).
  26. Astor, Philipp J.; Adam, Marc T. P.; Jerčić, Petar; Schaaff, Kristina; Weinhardt, Christof (December 2013). "Integrating Biosignals into Information Systems: A NeuroIS Tool for Improving Emotion Regulation". Journal of Management Information Systems. 30 (3): 247–278. doi:10.2753/mis0742-1222300309. ISSN   0742-1222.
  27. Adam, Marc T. P.; Krämer, Jan; Weinhardt, Christof (December 2012). "Excitement Up! Price Down! Measuring Emotions in Dutch Auctions". International Journal of Electronic Commerce. 17 (2): 7–40. doi:10.2753/jec1086-4415170201. ISSN   1086-4415.
  28. Fehrenbacher, Dennis D (2017). "Affect Infusion and Detection through Faces in Computer-mediated Knowledge-sharing Decisions". Journal of the Association for Information Systems. 18 (10). ISSN   1536-9323.
  29. Fischer, Gerhard (1 May 2000). "User Modeling in Human–Computer Interaction". User Modeling and User-Adapted Interaction. 11 (1–2): 65–86. doi:10.1023/A:1011145532042.
  30. SINHA, Gaurav; SHAHI, Rahul; SHANKAR, Mani. Human Computer Interaction. In: Emerging Trends in Engineering and Technology (ICETET), 2010 3rd International Conference on. IEEE, 2010. p. 1-4.
  31. "Conference Search: hci". www.confsearch.org.

Related Research Articles

Interaction design, often abbreviated as IxD, is "the practice of designing interactive digital products, environments, systems, and services." Beyond the digital aspect, interaction design is also useful when creating physical (non-digital) products, exploring how a user might interact with them. Common topics of interaction design include design, human–computer interaction, and software development. While interaction design has an interest in form, its main area of focus rests on behavior. Rather than analyzing how things are, interaction design synthesizes and imagines things as they could be. This element of interaction design is what characterizes IxD as a design field as opposed to a science or engineering field.

WIMP (computing)

In human–computer interaction, WIMP stands for "windows, icons, menus, pointer", denoting a style of interaction using these elements of the user interface. It was coined by Merzouga Wilberts in 1980. Other expansions are sometimes used, such as substituting "mouse" and "mice" for menus, or "pull-down menu" and "pointing" for pointer.

The following outline is provided as an overview of and topical guide to human–computer interaction:

Gesture recognition

Gesture recognition is a topic in computer science and language technology with the goal of interpreting human gestures via mathematical algorithms. Gestures can originate from any bodily motion or state but commonly originate from the face or hand. Current focuses in the field include emotion recognition from the face and hand gesture recognition. Users can use simple gestures to control or interact with devices without physically touching them. Many approaches have been made using cameras and computer vision algorithms to interpret sign language. However, the identification and recognition of posture, gait, proxemics, and human behaviors is also the subject of gesture recognition techniques. Gesture recognition can be seen as a way for computers to begin to understand human body language, thus building a richer bridge between machines and humans than primitive text user interfaces or even GUIs, which still limit the majority of input to keyboard and mouse. It enables humans to interact naturally with machines without any mechanical devices: using gesture recognition, it is possible to point a finger at a computer screen so that the cursor moves accordingly, which could make conventional input devices such as the mouse and keyboard redundant.
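As a minimal illustration of the "mathematical algorithms" mentioned above, the sketch below classifies a 2D pointer stroke by resampling it, normalizing it, and comparing it against stored templates, in the spirit of template-based unistroke recognizers. The function names and the tiny template set are illustrative assumptions, not part of any particular system.

```python
import math

def resample(points, n=32):
    # Resample a stroke to n points spaced equally along its path length.
    total = sum(math.dist(points[i], points[i + 1]) for i in range(len(points) - 1))
    if total == 0:
        return [points[0]] * n
    interval = total / (n - 1)
    pts = list(points)
    out = [pts[0]]
    d = 0.0
    i = 1
    while i < len(pts):
        seg = math.dist(pts[i - 1], pts[i])
        if d + seg >= interval and seg > 0:
            t = (interval - d) / seg
            q = (pts[i - 1][0] + t * (pts[i][0] - pts[i - 1][0]),
                 pts[i - 1][1] + t * (pts[i][1] - pts[i - 1][1]))
            out.append(q)
            pts.insert(i, q)  # continue measuring from the new point
            d = 0.0
        else:
            d += seg
        i += 1
    while len(out) < n:       # guard against floating-point shortfall
        out.append(points[-1])
    return out[:n]

def normalize(points):
    # Translate the centroid to the origin and scale by the larger extent.
    cx = sum(p[0] for p in points) / len(points)
    cy = sum(p[1] for p in points) / len(points)
    pts = [(x - cx, y - cy) for x, y in points]
    w = max(p[0] for p in pts) - min(p[0] for p in pts)
    h = max(p[1] for p in pts) - min(p[1] for p in pts)
    s = max(w, h) or 1.0
    return [(x / s, y / s) for x, y in pts]

def recognize(stroke, templates):
    # Return the template name with the smallest mean point-to-point distance.
    probe = normalize(resample(stroke))
    best, best_d = None, float("inf")
    for name, tpl in templates.items():
        t = normalize(resample(tpl))
        d = sum(math.dist(a, b) for a, b in zip(probe, t)) / len(probe)
        if d < best_d:
            best, best_d = name, d
    return best

templates = {
    "swipe_right": [(0, 0), (10, 0)],
    "swipe_down": [(0, 0), (0, 10)],
    "check": [(0, 0), (3, 4), (10, -6)],
}
print(recognize([(1, 1), (2, 1.1), (8, 0.9)], templates))  # a near-horizontal stroke
```

Real systems add rotation invariance and far richer template sets, but the pipeline — resample, normalize, compare — is the same basic shape.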

Human-centered computing (HCC) studies the design, development, and deployment of mixed-initiative human-computer systems. It emerged from the convergence of multiple disciplines concerned both with understanding human beings and with the design of computational artifacts. Human-centered computing is closely related to human-computer interaction and information science: human-centered computing is usually concerned with systems and practices of technology use, human-computer interaction is more focused on ergonomics and the usability of computing artifacts, and information science is focused on practices surrounding the collection, manipulation, and use of information.

Ben Shneiderman American computer scientist

Ben Shneiderman is an American computer scientist, a Distinguished University Professor in the Department of Computer Science, part of the University of Maryland College of Computer, Mathematical, and Natural Sciences at the University of Maryland, College Park, and the founding director (1983–2000) of the University of Maryland Human-Computer Interaction Lab. He has conducted fundamental research in the field of human–computer interaction, developing new ideas, methods, and tools such as the direct manipulation interface and his Eight Golden Rules of interface design.

In artificial intelligence, an embodied agent, also sometimes referred to as an interface agent, is an intelligent agent that interacts with the environment through a physical body within that environment. Agents that are represented graphically with a body, for example a human or a cartoon animal, are also called embodied agents, although they have only virtual, not physical, embodiment. A branch of artificial intelligence focuses on empowering such agents to interact autonomously with human beings and the environment. Mobile robots are one example of physically embodied agents; Ananova and Microsoft Agent are examples of graphically embodied agents. Embodied conversational agents are embodied agents that are capable of engaging in conversation with one another and with humans employing the same verbal and nonverbal means that humans do.

Andrew Sears is an American computer scientist. He is Professor and Dean of the College of Information Sciences and Technology (IST) at The Pennsylvania State University.

Exploratory search is a specialization of information exploration which represents the activities carried out by searchers who are:

Gender HCI is a subfield of human-computer interaction that focuses on the design and evaluation of interactive systems for humans. The specific emphasis in gender HCI is on differences in how males and females interact with computers.

John Millar Carroll is a Distinguished Professor of Information Sciences and Technology at Pennsylvania State University, where he previously served as the Edward Frymoyer Chair of Information Sciences and Technology. Carroll is perhaps best known for his theory of minimalism in computer instruction, training, and technical communication.

Fabio Paternò is Research Director and Head of the Laboratory on Human Interfaces in Information Systems at Istituto di Scienza e Tecnologie dell'Informazione, Consiglio Nazionale delle Ricerche in Pisa, Italy.

Interaction technique

An interaction technique, user interface technique or input technique is a combination of hardware and software elements that provides a way for computer users to accomplish a single task. For example, one can go back to the previously visited page on a Web browser by either clicking a button, pressing a key, performing a mouse gesture or uttering a speech command. It is a widely used term in human-computer interaction. In particular, the term "new interaction technique" is frequently used to introduce a novel user interface design idea.
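The many-to-one mapping described above — several interaction techniques all triggering the same task — can be sketched as a small event-to-command dispatcher. The event tuples and the toy Browser class below are illustrative assumptions, not an actual browser API.

```python
class Browser:
    # Toy navigation model: a history stack whose last entry is the current page.
    def __init__(self):
        self.history = ["home", "news", "article"]

    def back(self):
        # Go back one page, staying put if already at the oldest entry.
        if len(self.history) > 1:
            self.history.pop()
        return self.history[-1]

class InputDispatcher:
    # Map heterogeneous input events (click, key, gesture, speech) to commands.
    def __init__(self):
        self.bindings = {}

    def bind(self, event, command):
        self.bindings[event] = command

    def dispatch(self, event):
        cmd = self.bindings.get(event)
        return cmd() if cmd else None

browser = Browser()
d = InputDispatcher()
# Four different interaction techniques, one underlying task.
for event in [("click", "back_button"), ("key", "Alt+Left"),
              ("mouse_gesture", "swipe_left"), ("speech", "go back")]:
    d.bind(event, browser.back)

print(d.dispatch(("key", "Alt+Left")))    # prints "news"
print(d.dispatch(("speech", "go back")))  # prints "home"
```

Separating the event layer from the command layer is what lets a new interaction technique be added without touching the task logic.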

DiamondTouch Multiple person interface device

The DiamondTouch table is a multi-touch, interactive PC interface product from Circle Twelve Inc. It is a human interface device that has the capability of allowing multiple people to interact simultaneously while identifying which person is touching where. The technology was originally developed at Mitsubishi Electric Research Laboratories (MERL) in 2001 and later licensed to Circle Twelve Inc in 2008. The DiamondTouch table is used to facilitate face-to-face collaboration, brainstorming, and decision-making, and users include construction management company Parsons Brinckerhoff, the Methodist Hospital, and the US National Geospatial-Intelligence Agency (NGA).

Sonic interaction design is the study and exploitation of sound as one of the principal channels conveying information, meaning, and aesthetic/emotional qualities in interactive contexts. Sonic interaction design sits at the intersection of interaction design and sound and music computing. If interaction design is about designing objects people interact with, and such interactions are facilitated by computational means, then in sonic interaction design, sound mediates that interaction, either as a display of processes or as an input medium.

Animal-computer interaction (ACI) is a field of research which studies the design and use of technology with, for and by animals. ACI emerged from, and is heavily influenced by, the discipline of human-computer interaction (HCI).

Yves Guiard is a French cognitive neuroscientist and researcher best known for his work on human laterality and stimulus-response compatibility in the field of human-computer interaction. He is a director of research at the French National Center for Scientific Research and has been a member of the CHI Academy since 2016. He is also an associate editor of ACM Transactions on Computer-Human Interaction and a member of the Advisory Council of the International Association for the Study of Attention and Performance.

Joëlle Coutaz is a French computer scientist, specializing in human-computer interaction (HCI). Her career includes research in the fields of operating systems and HCI, as well as being a professor at the University of Grenoble. Coutaz is considered a pioneer in HCI in France, and in 2007, she was awarded membership to SIGCHI. She was also involved in organizing CHI conferences and was a member on the editorial board of ACM Transactions on Computer-Human Interaction. She has authored over 130 publications, including two books, in the domain of human-computer interaction.

Susanne Bødker is a Danish computer scientist known for her contributions to human–computer interaction, computer-supported cooperative work, and participatory design, including the introduction of activity theory to human–computer interaction. She is a professor of computer science at Aarhus University, and a member of the CHI Academy.