Human-agent team

A human-agent team is a system composed of multiple interacting humans and artificial intelligence systems. The artificial intelligence system may be a robotic system, a decision support system, or a virtual agent. Human-agent teaming provides an interaction paradigm that differs from traditional approaches, such as supervisory control or user interface design, by giving the computer a degree of autonomy. The paradigm draws from various scientific research fields: it is strongly inspired by the way humans work together in teams and constitutes a special type of multi-agent system.

Concept

Software agents that behave as artificial team players satisfy three general requirements, often abbreviated OPD: observability, predictability, and directability. [1]

To satisfy these OPD requirements, agents exhibit behaviors such as making their own status and intentions observable to teammates, acting predictably enough that teammates can anticipate them, and remaining directable by human team members.

The engineering effort to develop artificial team members includes user interface design, but also the design of specialized social artificial intelligence that enables agents to reason, for example, about whether a piece of information is worth sharing with a teammate.
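As a minimal, illustrative sketch of such reasoning (not the API of any framework cited here; the class names, weights, and threshold are all hypothetical), an agent might weigh an observation's estimated value to its human teammate against the cost of interrupting them:

```python
from dataclasses import dataclass

@dataclass
class Observation:
    """A piece of information the agent has perceived."""
    description: str
    relevance_to_human_task: float  # 0.0 (irrelevant) .. 1.0 (critical)
    novelty: float                  # 0.0 (already known) .. 1.0 (new to the human)

@dataclass
class HumanState:
    """The agent's estimate of its human teammate's situation."""
    workload: float                 # 0.0 (idle) .. 1.0 (saturated)

def worth_sharing(obs: Observation, human: HumanState,
                  threshold: float = 0.5) -> bool:
    """Share only if the expected value of the information
    outweighs the estimated cost of interrupting the human.
    (Weights and threshold are illustrative, not from any cited framework.)"""
    value = obs.relevance_to_human_task * obs.novelty
    interruption_cost = 0.3 * human.workload
    return (value - interruption_cost) > threshold

# Example: a relevant, novel observation while the human is moderately busy.
obs = Observation("Obstacle detected on planned route",
                  relevance_to_human_task=0.9, novelty=1.0)
print(worth_sharing(obs, HumanState(workload=0.6)))  # True
```

A real social AI layer would ground such estimates in models of the team's tasks and of each teammate's current state rather than in fixed weights; the sketch only illustrates the kind of decision involved.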

Frameworks

Various frameworks have been developed to support the software engineering effort of building human-agent teams, such as KAoS [2] and SAIL. [3] Engineering methodologies for human-agent teaming include Coactive Design. [4]

Applications

Human-agent teaming is a widely used paradigm for approaching the interaction between humans and AI technologies in domains such as defense, healthcare, space, and disaster response.

Related Research Articles

Computer science is the study of the theoretical foundations of information and computation and their implementation and application in computer systems. One well known subject classification system for computer science is the ACM Computing Classification System devised by the Association for Computing Machinery.

Luc Steels

Luc Steels is a Belgian scientist and artist. Steels is considered a pioneer of Artificial Intelligence in Europe who has made contributions to expert systems, behavior-based robotics, artificial life and evolutionary computational linguistics. He was a fellow of the Catalan Institution for Research and Advanced Studies ICREA associated as a research professor with the Institute for Evolutionary Biology (UPF/CSIC) in Barcelona. He was formerly founding Director of the Artificial Intelligence Laboratory of the Vrije Universiteit Brussel and founding director of the Sony Computer Science Laboratory in Paris. Steels has also been active in the arts collaborating with visual artists and theater makers and composing music for opera.

In computer science, a software agent is a computer program that acts for a user or another program in a relationship of agency.

WIMP (computing)

In human–computer interaction, WIMP stands for "windows, icons, menus, pointer", denoting a style of interaction using these elements of the user interface. Other expansions are sometimes used, such as substituting "mouse" and "mice" for menus, or "pull-down menu" and "pointing" for pointer.

The following outline is provided as an overview of and topical guide to human–computer interaction:

Multi-agent system

A multi-agent system is a computerized system composed of multiple interacting intelligent agents. Multi-agent systems can solve problems that are difficult or impossible for an individual agent or a monolithic system to solve. Intelligence may include methodic, functional, procedural approaches, algorithmic search or reinforcement learning.

Laws of robotics are any set of laws, rules, or principles intended as a fundamental framework to underpin the behavior of robots designed to have a degree of autonomy. Robots of this degree of complexity do not yet exist, but they have been widely anticipated in science fiction and film, and are a topic of active research and development in the fields of robotics and artificial intelligence.

Human-centered computing (HCC) studies the design, development, and deployment of mixed-initiative human-computer systems. It emerged from the convergence of multiple disciplines that are concerned both with understanding human beings and with the design of computational artifacts. Human-centered computing is closely related to human-computer interaction and information science. Human-centered computing is usually concerned with systems and practices of technology use, while human-computer interaction is more focused on ergonomics and the usability of computing artifacts, and information science is focused on practices surrounding the collection, manipulation, and use of information.

Intelligent agent

In intelligence and artificial intelligence, an intelligent agent (IA) is an agent acting in an intelligent manner. It perceives its environment, takes actions autonomously in order to achieve goals, and may improve its performance with learning or acquiring knowledge. An intelligent agent may be simple or complex: A thermostat or other control system is considered an example of an intelligent agent, as is a human being, as is any system that meets the definition, such as a firm, a state, or a biome.
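As a hedged illustration of the thermostat example above (all names and values are invented), the minimal sketch below shows the perceive-decide-act loop that makes even such a simple controller count as an agent: it senses the temperature and switches the heater to pursue its setpoint goal.

```python
class Thermostat:
    """A minimal intelligent agent: it perceives the room temperature
    and acts (heater on/off) to achieve its goal (the setpoint)."""

    def __init__(self, setpoint: float, hysteresis: float = 0.5):
        self.setpoint = setpoint
        self.hysteresis = hysteresis  # dead band to avoid rapid on/off switching
        self.heater_on = False

    def step(self, temperature: float) -> bool:
        """One perceive-decide-act cycle; returns the resulting heater state."""
        if temperature < self.setpoint - self.hysteresis:
            self.heater_on = True
        elif temperature > self.setpoint + self.hysteresis:
            self.heater_on = False
        return self.heater_on

agent = Thermostat(setpoint=21.0)
print(agent.step(19.8))  # True: too cold, turn heater on
print(agent.step(21.7))  # False: warm enough, turn heater off
```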

In artificial intelligence, an embodied agent, also sometimes referred to as an interface agent, is an intelligent agent that interacts with the environment through a physical body within that environment. Agents that are represented graphically with a body, for example a human or a cartoon animal, are also called embodied agents, although they have only virtual, not physical, embodiment. A branch of artificial intelligence focuses on empowering such agents to interact autonomously with human beings and the environment. Mobile robots are one example of physically embodied agents; Ananova and Microsoft Agent are examples of graphically embodied agents. Embodied conversational agents are embodied agents that are capable of engaging in conversation with one another and with humans employing the same verbal and nonverbal means that humans do.

Human–robot interaction (HRI) is the study of interactions between humans and robots. Human–robot interaction is a multidisciplinary field with contributions from human–computer interaction, artificial intelligence, robotics, natural language processing, design, psychology and philosophy. A subfield known as physical human–robot interaction (pHRI) has tended to focus on device design to enable people to safely interact with robotic systems.

Enactivism is a position in cognitive science that argues that cognition arises through a dynamic interaction between an acting organism and its environment. It claims that the environment of an organism is brought about, or enacted, by the active exercise of that organism's sensorimotor processes. "The key point, then, is that the species brings forth and specifies its own domain of problems ...this domain does not exist "out there" in an environment that acts as a landing pad for organisms that somehow drop or parachute into the world. Instead, living beings and their environments stand in relation to each other through mutual specification or codetermination" (p. 198). "Organisms do not passively receive information from their environments, which they then translate into internal representations. Natural cognitive systems...participate in the generation of meaning ...engaging in transformational and not merely informational interactions: they enact a world." These authors suggest that the increasing emphasis upon enactive terminology presages a new era in thinking about cognitive science. How the actions involved in enactivism relate to age-old questions about free will remains a topic of active debate.

Ekaterini Panagiotou Sycara is a Greek computer scientist. She is the Edward Fredkin Research Professor of Robotics in the Robotics Institute, School of Computer Science at Carnegie Mellon University, and is internationally known for her research in artificial intelligence, particularly in the fields of negotiation, autonomous agents and multi-agent systems. She directs the Advanced Agent-Robotics Technology Lab at the Robotics Institute, Carnegie Mellon University. She also serves as academic advisor for PhD students at both the Robotics Institute and the Tepper School of Business.

Adaptive autonomy refers to a proposed definition of the notion of 'autonomy' in mobile robotics.

Robotics

Robotics is the interdisciplinary study and practice of the design, construction, operation, and use of robots.

Machine ethics is a part of the ethics of artificial intelligence concerned with adding or ensuring moral behaviors of man-made machines that use artificial intelligence, otherwise known as artificial intelligent agents. Machine ethics differs from other ethical fields related to engineering and technology. It should not be confused with computer ethics, which focuses on human use of computers. It should also be distinguished from the philosophy of technology, which concerns itself with technology's grander social effects.

William J. Clancey is an American computer scientist who specializes in cognitive science and artificial intelligence. He has worked in computing in a wide range of sectors, including medicine, education, and finance, and had performed research that brings together cognitive and social science to study work practices and examine the design of agent systems. Clancey has been described as having developed “some of the earliest artificial intelligence programs for explanation, the critiquing method of consultation, tutorial discourse, and student modeling,” and his research has been described as including “work practice modeling, distributed multiagent systems, and the ethnography of field science.” He has also participated in Mars Exploration Rover mission operations, “simulation of a day-in-the-life of the ISS, knowledge management for future launch vehicles, and developing flight systems that make automation more transparent.” Clancey’s work on "heuristic classification" and "model construction operators" is regarded as having been influential in the design of expert systems and instructional programs.

Artificial life

Artificial life is a field of study wherein researchers examine systems related to natural life, its processes, and its evolution, through the use of simulations with computer models, robotics, and biochemistry. The discipline was named by Christopher Langton, an American theoretical biologist, in 1986. In 1987, Langton organized the first conference on the field, in Los Alamos, New Mexico. There are three main kinds of alife, named for their approaches: soft, from software; hard, from hardware; and wet, from biochemistry. Artificial life researchers study traditional biology by trying to recreate aspects of biological phenomena.

Radhika Nagpal is an Indian-American computer scientist and researcher in the fields of self-organising computer systems, biologically-inspired robotics, and biological multi-agent systems. She is the Augustine Professor in Engineering in the Departments of Mechanical and Aerospace Engineering and Computer Science at Princeton University. Formerly, she was the Fred Kavli Professor of Computer Science at Harvard University and the Harvard School of Engineering and Applied Sciences. In 2017, Nagpal co-founded a robotics company under the name of Root Robotics. This educational company works to create opportunities for people who cannot yet code to learn how.

Human-Robot Collaboration is the study of collaborative processes in which human and robot agents work together to achieve shared goals. Many new applications for robots require them to work alongside people as capable members of human-robot teams. These include robots for homes, hospitals, and offices, as well as for space exploration and manufacturing. Human-Robot Collaboration (HRC) is an interdisciplinary research area comprising classical robotics, human-computer interaction, artificial intelligence, process design, layout planning, ergonomics, cognitive sciences, and psychology.

References

  1. Johnson, Matthew; Bradshaw, Jeffrey M.; Feltovich, Paul J.; Jonker, Catholijn M.; Van Riemsdijk, M. Birna; Sierhuis, Maarten (2014-03-01). "Coactive Design: Designing Support for Interdependence in Joint Activity". Journal of Human-Robot Interaction. 3 (1): 43. doi:10.5898/jhri.3.1.johnson. ISSN 2163-0364.
  2. Bradshaw, Jeffrey M.; Sierhuis, Maarten; Acquisti, Alessandro; Feltovich, Paul; Hoffman, Robert; Jeffers, Renia; Prescott, Debbie; Suri, Niranjan; Uszok, Andrzej (2003), "Adjustable Autonomy and Human-Agent Teamwork in Practice: An Interim Report on Space Applications", Multiagent Systems, Artificial Societies, and Simulated Organizations, Springer US, pp. 243–280, doi:10.1007/978-1-4419-9198-0_11, ISBN 9781461348337
  3. van der Vecht, Bob; van Diggelen, Jurriaan; Peeters, Marieke; Barnhoorn, Jonathan; van der Waa, Jasper (2018), "SAIL: A Social Artificial Intelligence Layer for Human-Machine Teaming", Advances in Practical Applications of Agents, Multi-Agent Systems, and Complexity: The PAAMS Collection, Springer International Publishing, pp. 262–274, doi:10.1007/978-3-319-94580-4_21, ISBN 9783319945798
  4. Klein, G.; Woods, D.D.; Bradshaw, J.M.; Hoffman, R.R.; Feltovich, P.J. (2004). "Ten Challenges for Making Automation a "Team Player" in Joint Human-Agent Activity". IEEE Intelligent Systems. 19 (6): 91–95. doi:10.1109/mis.2004.74. ISSN 1541-1672. S2CID 27049933.