Human–robot collaboration

Human-robot collaboration is the study of collaborative processes in which human and robot agents work together to achieve shared goals. Many new applications for robots require them to work alongside people as capable members of human-robot teams. These include robots for homes, hospitals, and offices, as well as for space exploration and manufacturing. Human-robot collaboration (HRC) is an interdisciplinary research area comprising classical robotics, human-computer interaction, artificial intelligence, process design, layout planning, ergonomics, cognitive sciences, and psychology. [1] [2]

Industrial applications of human-robot collaboration involve Collaborative Robots, or cobots, that physically interact with humans in a shared workspace to complete tasks such as collaborative manipulation or object handovers. [3]

Collaborative Activity

Collaborative human-robot sawing. In such tasks, the robot has to control complex physical interactions and simultaneously adapt to the human partner.

Collaboration is defined as a special type of coordinated activity, one in which two or more agents work jointly with each other, together performing a task or carrying out the activities needed to satisfy a shared goal. [5] The process typically involves shared plans, shared norms and mutually beneficial interactions. [6] Although collaboration and cooperation are often used interchangeably, collaboration differs from cooperation in that it involves a shared goal and joint action in which the success of each party depends on the other. [7]

For effective human-robot collaboration, it is imperative that the robot be capable of understanding and interpreting several communication mechanisms similar to those involved in human-human interaction. [8] The robot must also communicate its own intents and goals in order to establish and maintain a set of shared beliefs and to coordinate its actions toward executing the shared plan. [5] [9] In addition, all team members must demonstrate commitment to doing their own part, to the others doing theirs, and to the success of the overall task. [9] [10]

Theories Informing Human-Robot Collaboration

Human-human collaborative activities are studied in depth in order to identify the characteristics that enable humans to work together successfully. [11] Models of these activities usually aim to explain how people work together in teams, how they form intentions, and how they achieve joint goals. Theories of collaboration inform human-robot collaboration research that aims to develop efficient and fluent collaborative agents. [12]

Belief Desire Intention Model

The belief-desire-intention (BDI) model is a model of human practical reasoning originally developed by Michael Bratman. [13] The approach is used in intelligent-agent research to describe and model intelligent agents. [14] The BDI model is characterized by the implementation of an agent's beliefs (its knowledge of the world, the state of the world), desires (the objectives to accomplish, the desired end state) and intentions (the course of action currently under execution to achieve the agent's desires), which together drive its decision-making process. [15] BDI agents are able to deliberate about plans, select plans and execute them.
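
The deliberation cycle implied by this model can be sketched in a few lines of code. The following Python example is only an illustrative sketch: the class and method names (BDIAgent, perceive, deliberate, act) and the hand-over scenario are assumptions made here for exposition, not part of any standard BDI implementation.

```python
# A minimal, illustrative BDI-style deliberation loop (hypothetical names).
class BDIAgent:
    def __init__(self, beliefs, desires, plan_library):
        self.beliefs = beliefs            # knowledge about the state of the world
        self.desires = desires            # candidate goals the agent may pursue
        self.intentions = []              # goals the agent is currently committed to
        self.plan_library = plan_library  # maps each goal to a list of actions

    def perceive(self, observation):
        """Update beliefs from new observations or messages."""
        self.beliefs.update(observation)

    def deliberate(self):
        """Commit to desires that have an applicable plan and are not yet intended."""
        for goal in self.desires:
            if goal in self.plan_library and goal not in self.intentions:
                self.intentions.append(goal)

    def act(self):
        """Execute the next action of the currently active intention, if any."""
        if not self.intentions:
            return None
        goal = self.intentions[0]
        plan = self.plan_library[goal]
        if plan:
            return plan.pop(0)        # next primitive action to perform
        self.intentions.pop(0)        # plan exhausted: drop the intention
        return None


# Example: a robot that intends to hand a tool to its human partner.
agent = BDIAgent(
    beliefs={"tool_located": True},
    desires=["hand_over_tool"],
    plan_library={"hand_over_tool": ["grasp_tool", "move_to_human", "release_tool"]},
)
agent.deliberate()
print(agent.act())  # prints "grasp_tool"
```

In a full BDI system, deliberation would also weigh competing desires and reconsider intentions as beliefs change; the sketch only separates plan selection (deliberate) from plan execution (act).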

Shared Cooperative Activity

Shared Cooperative Activity defines certain prerequisites for an activity to be considered shared and cooperative: mutual responsiveness, commitment to the joint activity and commitment to mutual support. [9] [16] An example illustrating these concepts is a collaborative activity in which two agents move a table out the door: mutual responsiveness ensures that the agents' movements are synchronized; commitment to the joint activity reassures each team member that the other will not at some point drop their side; and commitment to mutual support deals with possible breakdowns due to one team member's inability to perform part of the plan. [9]

Joint Intention Theory

Joint Intention Theory proposes that for joint action to emerge, team members must communicate to maintain a set of shared beliefs and to coordinate their actions towards the shared plan. [17] In collaborative work, agents should be able to count on the commitment of the other members; therefore each agent should inform the others when it concludes that a goal has been achieved, has become impossible, or is no longer relevant. [9]
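
One way to picture this communication commitment is the following Python sketch, in which an agent that privately concludes a shared goal has been achieved, has become impossible, or is no longer relevant broadcasts that conclusion to its teammates. The names (TeamMember, GoalStatus, conclude, receive) are hypothetical and used only for illustration, not drawn from the cited formalism.

```python
from enum import Enum

# Illustrative sketch of the communication commitment in joint intention theory
# (hypothetical names; not the formal logic of the cited work).
class GoalStatus(Enum):
    ACHIEVED = "achieved"
    IMPOSSIBLE = "impossible"
    IRRELEVANT = "irrelevant"

class TeamMember:
    def __init__(self, name):
        self.name = name
        self.teammates = []        # other agents committed to the shared goal
        self.mutual_beliefs = {}   # what this agent takes the team to jointly believe

    def conclude(self, goal, status: GoalStatus):
        """On privately concluding the goal's status, make it mutually believed."""
        self.mutual_beliefs[goal] = status
        for mate in self.teammates:
            mate.receive(self.name, goal, status)

    def receive(self, sender, goal, status):
        """Adopt the reported status so the shared plan stays coordinated."""
        self.mutual_beliefs[goal] = status


robot, human = TeamMember("robot"), TeamMember("human")
robot.teammates.append(human)
robot.conclude("move_table_through_door", GoalStatus.IMPOSSIBLE)
print(human.mutual_beliefs)  # the human teammate now shares the robot's conclusion
```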

Approaches to Human-Robot Collaboration

The approaches to human-robot collaboration include human emulation (HE) and human complementary (HC) approaches. Although these approaches have differences, there are research efforts to develop a unified approach stemming from potential convergences such as Collaborative Control. [18] [19]

Human Emulation

The human emulation approach aims to enable computers to act like humans or have human-like abilities in order to collaborate with humans. It focuses on developing formal models of human-human collaboration and applying these models to human-computer collaboration. In this approach, humans are viewed as rational agents who form and execute plans for achieving their goals and infer other people's plans. Agents are required to infer the goals and plans of other agents, and collaborative behavior consists of helping other agents to achieve their goals. [18]

Human Complementary

The human complementary approach seeks to improve human-computer interaction by making the computer a more intelligent partner that complements and collaborates with humans. The premise is that the computer and humans have fundamentally asymmetric abilities. Therefore, researchers invent interaction paradigms that divide responsibility between human users and computer systems by assigning distinct roles that exploit the strengths and overcome the weaknesses of both partners. [18]

Key Aspects

Specialization of Roles: Based on the level of autonomy and intervention, there are several human-robot relationships, including master-slave, supervisor–subordinate, partner–partner, teacher–learner and fully autonomous robot. In addition to these fixed roles, homotopy (a weighting function that allows a continuous change between leader and follower behaviors) was introduced as a flexible role distribution. [20] A minimal sketch of such role blending is given at the end of this section.

Establishing shared goal(s): Through direct discussion about goals or inference from statements and actions, agents must determine the shared goals they are trying to achieve. [18]

Allocation of Responsibility and Coordination: Agents must decide how to achieve their goals, determine what actions will be done by each agent, and how to coordinate the actions of individual agents and integrate their results. [18]

Shared context: Agents must be able to track progress toward their goals. They must keep track of what has been achieved and what remains to be done. They must evaluate the effects of actions and determine whether an acceptable solution has been achieved. [18]

Communication: Any collaboration requires communication to define goals, negotiate over how to proceed and who will do what, and evaluate progress and results. [18]

Adaptation and learning: Collaboration over time requires partners to adapt to each other and to learn from one another, both directly and indirectly. [4] [18]

Time and space: The time-space taxonomy divides human-robot interaction into four categories based on whether the humans and robots are using computing systems at the same time (synchronous) or different times (asynchronous) and while in the same place (collocated) or in different places (non-collocated). [21] [22]

Ergonomics: Human factors and ergonomics are among the key aspects of sustainable human-robot collaboration. The robot control system can use biomechanical models and sensors to optimize various ergonomic metrics, such as muscle fatigue. [4] [23]
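
As a concrete illustration of the flexible role distribution mentioned under Specialization of Roles, the sketch below blends the command a robot would issue as leader with the one it would issue as follower using a single weighting parameter. The function and variable names are assumptions made here for exposition and do not reproduce the formulation of the cited work.

```python
# Hedged sketch of continuous leader/follower role blending with a
# homotopy-like weight alpha in [0, 1] (illustrative names only).
def blended_command(u_leader: float, u_follower: float, alpha: float) -> float:
    """Return a convex combination of the leader and follower commands.

    alpha = 1.0 -> the robot behaves as a pure leader,
    alpha = 0.0 -> the robot behaves as a pure follower.
    """
    if not 0.0 <= alpha <= 1.0:
        raise ValueError("alpha must lie in [0, 1]")
    return alpha * u_leader + (1.0 - alpha) * u_follower


# Example: the robot gradually cedes leadership as it senses human effort.
for alpha in (1.0, 0.5, 0.0):
    print(alpha, blended_command(u_leader=2.0, u_follower=-1.0, alpha=alpha))
```

Shifting the weight smoothly over time lets the pair move between the fixed roles listed above without a discrete hand-off.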

Related Research Articles

<span class="mw-page-title-main">Collective action</span> Action taken together by a group of people to further a common objective

Collective action refers to action taken together by a group of people whose goal is to enhance their condition and achieve a common objective. It is a term that has formulations and theories in many areas of the social sciences including psychology, sociology, anthropology, political science and economics.

Distributed artificial intelligence (DAI), also called decentralized artificial intelligence, is a subfield of artificial intelligence research dedicated to the development of distributed solutions for problems. DAI is closely related to, and a predecessor of, the field of multi-agent systems.

In computer science, a software agent is a computer program that acts for a user or another program in a relationship of agency.

<span class="mw-page-title-main">Multi-agent system</span> Built of multiple interacting agents

A multi-agent system is a computerized system composed of multiple interacting intelligent agents. Multi-agent systems can solve problems that are difficult or impossible for an individual agent or a monolithic system to solve. Intelligence may include methodic, functional, procedural approaches, algorithmic search or reinforcement learning.

Human-centered computing (HCC) studies the design, development, and deployment of mixed-initiative human-computer systems. It emerged from the convergence of multiple disciplines that are concerned both with understanding human beings and with the design of computational artifacts. Human-centered computing is closely related to human-computer interaction and information science. Human-centered computing is usually concerned with systems and practices of technology use, while human-computer interaction is more focused on the ergonomics and usability of computing artifacts, and information science is focused on practices surrounding the collection, manipulation, and use of information.

The belief–desire–intention software model (BDI) is a software model developed for programming intelligent agents. Superficially characterized by the implementation of an agent's beliefs, desires and intentions, it actually uses these concepts to solve a particular problem in agent programming. In essence, it provides a mechanism for separating the activity of selecting a plan from the execution of currently active plans. Consequently, BDI agents are able to balance the time spent on deliberating about plans and executing those plans. A third activity, creating the plans in the first place (planning), is not within the scope of the model, and is left to the system designer and programmer.

Human–robot interaction (HRI) is the study of interactions between humans and robots. Human–robot interaction is a multidisciplinary field with contributions from human–computer interaction, artificial intelligence, robotics, natural language processing, design, psychology and philosophy. A subfield known as physical human–robot interaction (pHRI) has tended to focus on device design to enable people to safely interact with robotic systems.

Computer-supported collaborative learning (CSCL) is a pedagogical approach wherein learning takes place via social interaction using a computer or through the Internet. This kind of learning is characterized by the sharing and construction of knowledge among participants using technology as their primary means of communication or as a common resource. CSCL can be implemented in online and classroom learning environments and can take place synchronously or asynchronously.

Action selection is a way of characterizing the most basic problem of intelligent systems: what to do next. In artificial intelligence and computational cognitive science, "the action selection problem" is typically associated with intelligent agents and animats—artificial systems that exhibit complex behaviour in an agent environment. The term is also sometimes used in ethology or animal behavior.

End-user development (EUD) or end-user programming (EUP) refers to activities and tools that allow end-users – people who are not professional software developers – to program computers. People who are not professional developers can use EUD tools to create or modify software artifacts and complex data objects without significant knowledge of a programming language. In 2005 it was estimated that by 2012 there would be more than 55 million end-user developers in the United States, compared with fewer than 3 million professional programmers. Various EUD approaches exist, and it is an active research topic within the field of computer science and human-computer interaction. Examples include natural language programming, spreadsheets, scripting languages, visual programming, trigger-action programming and programming by example.

Activity recognition aims to recognize the actions and goals of one or more agents from a series of observations on the agents' actions and the environmental conditions. Since the 1980s, this research field has captured the attention of several computer science communities due to its strength in providing personalized support for many different applications and its connection to many different fields of study such as medicine, human-computer interaction, or sociology.

JACK Intelligent Agents is a framework in Java for multi-agent system development. JACK Intelligent Agents was built by Agent Oriented Software Pty. Ltd. (AOS) and is a third generation agent platform building on the experiences of the Procedural Reasoning System (PRS) and Distributed Multi-Agent Reasoning System (dMARS). JACK is one of the few multi-agent systems that uses the BDI software model and provides its own Java-based plan language and graphical planning tools.

<span class="mw-page-title-main">Human–computer interaction</span> Academic discipline studying the relationship between computer systems and their users

Human–computer interaction (HCI) is research in the design and use of computer technology, focused on the interfaces between people (users) and computers. HCI researchers observe the ways humans interact with computers and design technologies that allow humans to interact with computers in novel ways. A device that allows interaction between a human being and a computer is known as a "human-computer interface (HCI)".

<span class="mw-page-title-main">Sarit Kraus</span> Israeli computer scientist

Sarit Kraus is a professor of computer science at the Bar-Ilan University in Israel. She was named the 2020-2021 ACM Athena Lecturer in recognition of her contributions to artificial intelligence, notably to multiagent systems, human-agent interaction, autonomous agents and non-monotonic reasoning, as well as her leadership in these fields.

Deliberative agent is a sort of software agent used mainly in multi-agent system simulations. According to Wooldridge's definition, a deliberative agent is "one that possesses an explicitly represented, symbolic model of the world, and in which decisions are made via symbolic reasoning".

Adaptive collaborative control is the decision-making approach used in hybrid models consisting of finite-state machines with functional models as subcomponents to simulate the behavior of systems formed through the partnerships of multiple agents for the execution of tasks and the development of work products. The term “collaborative control” originated from work developed in the late 1990s and early 2000s by Fong, Thorpe, and Baur (1999). According to Fong et al., for robots to function in collaborative control they must be self-reliant, aware, and adaptive. In the literature the adjective “adaptive” is not always shown, but it is an important element of collaborative control. The adaptation of traditional applications of control theory in teleoperation sought initially to reduce the sovereignty of “humans as controllers/robots as tools” and had humans and robots working as peers, collaborating to perform tasks and to achieve common goals. Early implementations of adaptive collaborative control centered on vehicle teleoperation. Recent uses of adaptive collaborative control cover training, analysis, and engineering applications in teleoperation between humans and multiple robots, multiple robots collaborating among themselves, unmanned vehicle control, and fault-tolerant controller design.

<span class="mw-page-title-main">Collective intentionality</span> Intentionality that occurs when two or more individuals undertake a task together

In the philosophy of mind, collective intentionality characterizes the intentionality that occurs when two or more individuals undertake a task together. Examples include two individuals carrying a heavy table up a flight of stairs or dancing a tango.

Cloud robotics is a field of robotics that attempts to invoke cloud technologies such as cloud computing, cloud storage, and other Internet technologies centered on the benefits of converged infrastructure and shared services for robotics. When connected to the cloud, robots can benefit from the powerful computation, storage, and communication resources of modern data centers, which can process and share information from various robots or agents. Humans can also delegate tasks to robots remotely through networks. Cloud computing technologies enable robot systems to be endowed with powerful capabilities while reducing costs. Thus, it is possible to build lightweight, low-cost, smarter robots with an intelligent "brain" in the cloud. The "brain" consists of data centers, knowledge bases, task planners, deep learning, information processing, environment models, communication support, etc.

Collaborative Control Theory (CCT) is a collection of principles and models for supporting the effective design of collaborative e-Work systems. Beyond human collaboration, advances in information and communications technologies, artificial intelligence, multi-agent systems, and cyber physical systems have enabled cyber-supported collaboration in highly distributed organizations of people, robots, and autonomous systems. The fundamental premise of CCT is: without effective augmented collaboration by cyber support, working in parallel to and in anticipation of human interactions, the potential of emerging activities such as e-Commerce, virtual manufacturing, telerobotics, remote surgery, building automation, smart grids, cyber-physical infrastructure, precision agriculture, and intelligent transportation systems cannot be fully and safely materialized. CCT addresses the challenges and emerging solutions of such cyber-collaborative systems, with emphasis on issues of computer-supported and communication-enabled integration, coordination and augmented collaboration. CCT is composed of eight design principles: (1) Collaboration Requirement Planning (CRP); (2) e-Work Parallelism (EWP); (3) Keep It Simple, System (KISS); (4) Conflict/Error Detection and Prevention (CEDP); (5) Fault Tolerance by Teaming (FTT); (6) Association/Dissociation (AD); (7) Dynamic Lines of Collaboration (DLOC); and (8) Best Matching (BM).

A human-agent team is a system composed of multiple interacting humans and artificial intelligence systems. The artificial intelligence system may be a robotic system, a decision support system, or a virtual agent. Human-agent teaming provides an interaction paradigm that differs from traditional approaches such as supervisory control or user interface design by enabling the computer to have a certain degree of autonomy. The paradigm draws from various scientific research fields, being strongly inspired by the way humans work together in teams, and constituting a special type of multi-agent system.

References

  1. Bauer, Andrea; Wollherr, Dirk; Buss, Martin (2008). "Human–Robot Collaboration: A Survey". International Journal of Humanoid Robotics. 05: 47–66. doi:10.1142/S0219843608001303.
  2. Rega, Andrea; Di Marino, Castrese; Pasquariello, Agnese; Vitolo, Ferdinando; Patalano, Stanislao; Zanella, Alessandro; Lanzotti, Antonio (20 December 2021). "Collaborative Workplace Design: A Knowledge-Based Approach to Promote Human–Robot Collaboration and Multi-Objective Layout Optimization". Applied Sciences. 11 (24): 12147. doi:10.3390/app112412147.
  3. Cakmak, Maya; Hoffman, Guy; Thomaz, Andrea (2016). "Computational Human-Robot Interaction". Foundations and Trends in Robotics. 4 (2–3): 104–223. doi:10.1561/2300000049.
  4. Peternel, Luka; Tsagarakis, Nikos; Caldwell, Darwin; Ajoudani, Arash (2018). "Robot adaptation to human physical fatigue in human–robot co-manipulation". Autonomous Robots. 42 (5). Springer: 1011–1021. doi:10.1007/s10514-017-9678-1.
  5. Grosz, Barbara J.; Kraus, Sarit (1996). "Collaborative plans for complex group action". Artificial Intelligence. 86 (2): 269–357. doi:10.1016/0004-3702(95)00103-4.
  6. Thomson, A. M.; Perry, J. L.; Miller, T. K. (2007). "Conceptualizing and Measuring Collaboration". Journal of Public Administration Research and Theory. 19: 23–56. doi:10.1093/jopart/mum036. S2CID   17586940.
  7. Hord, S. M. (1981). Working Together: Cooperation or Collaboration? Communications Services, Research and Development Center for Teacher Education, Education Annex 3.203, University of Texas, Austin, TX 78712-1288
  8. Chandrasekaran, Balasubramaniyan; Conrad, James M. (2015). "Human-robot collaboration: A survey". Southeast Con 2015. pp. 1–8. doi:10.1109/SECON.2015.7132964. ISBN   978-1-4673-7300-5. S2CID   39665543.
  9. Hoffman, Guy; Breazeal, Cynthia (2004). "Collaboration in Human-Robot Teams". AIAA 1st Intelligent Systems Technical Conference. doi:10.2514/6.2004-6434. ISBN 978-1-62410-080-2. S2CID 1114471.
  10. Levesque, Hector J.; Cohen, Philip R.; Nunes, José H. T. (1990). "On acting together". Proceedings of the eighth National conference on Artificial intelligence - Volume 1 (AAAI'90). Vol. 1. AAAI. pp. 94–99. ISBN   978-0-262-51057-8.
  11. Roy, Someshwar; Edan, Yael (2018-03-27). "Investigating Joint-Action in Short-Cycle Repetitive Handover Tasks: The Role of Giver Versus Receiver and its Implications for Human-Robot Collaborative System Design". International Journal of Social Robotics. 12 (5): 973–988. doi:10.1007/s12369-017-0424-9. ISSN   1875-4805. S2CID   149855145.
  12. Someshwar, Roy; Edan, Yael (2017-08-30). "Givers & Receivers perceive handover tasks differently: Implications for Human-Robot collaborative system design". arXiv: 1708.06207 [cs.HC].
  13. Bratman, Michael (1987). Intention, Plans, and Practical Reason. Center for the Study of Language and Information.[ page needed ]
  14. Georgeff, Michael; Pell, Barney; Pollack, Martha; Tambe, Milind; Wooldridge, Michael (1999). "The Belief-Desire-Intention Model of Agency". Intelligent Agents V: Agents Theories, Architectures, and Languages. Lecture Notes in Computer Science. Vol. 1555. pp. 1–10. doi:10.1007/3-540-49057-4_1. ISBN   978-3-540-65713-2.
  15. Mascardi, V., Demergasso, D., & Ancona, D. (2005). Languages for Programming BDI-style Agents: an Overview. WOA.[ page needed ]
  16. Bratman, Michael E. (1992). "Shared Cooperative Activity". The Philosophical Review. 101 (2): 327–341. doi:10.2307/2185537. JSTOR   2185537.
  17. Cohen, Philip R.; Levesque, Hector J. (1991). "Teamwork". Noûs. 25 (4): 487. doi:10.2307/2216075. JSTOR   2216075.
  18. Terveen, Loren G. (1995). "Overview of human-computer collaboration". Knowledge-Based Systems. 8 (2–3): 67–81. doi:10.1016/0950-7051(95)98369-H.
  19. Fong, Terrence; Thorpe, Charles; Baur, Charles (2003). "Collaboration, Dialogue, Human-Robot Interaction". Robotics Research (PDF). Springer Tracts in Advanced Robotics. Vol. 6. pp. 255–266. doi:10.1007/3-540-36460-9_17. ISBN   978-3-540-00550-6.
  20. Jarrassé, Nathanaël; Sanguineti, Vittorio; Burdet, Etienne (2014). "Slaves no longer: Review on role assignment for human–robot joint motor action" (PDF). Adaptive Behavior. 22: 70–82. doi:10.1177/1059712313481044. S2CID   1124463.
  21. Ellis, Clarence A.; Gibbs, Simon J.; Rein, Gail (1991). "Groupware: Some issues and experiences". Communications of the ACM. 34: 39–58. doi:10.1145/99977.99987. S2CID   13597491.
  22. Yanco, H. A.; Drury, J. (2004). "Classifying human-robot interaction: An updated taxonomy". 2004 IEEE International Conference on Systems, Man and Cybernetics (IEEE Cat. No.04CH37583). Vol. 3. pp. 2841–2846. doi:10.1109/ICSMC.2004.1400763. ISBN   978-0-7803-8567-2. S2CID   17278270.
  23. Peternel, Luka; Fang, Cheng; Tsagarakis, Nikos; Ajoudani, Arash (2019). "A selective muscle fatigue management approach to ergonomic human-robot co-manipulation". Robotics and Computer-Integrated Manufacturing. 58. Elsevier: 69–79. doi:10.1016/j.rcim.2019.01.013. S2CID   115598682.