Adaptive autonomy is a proposed definition of the notion of "autonomy" in mobile robotics. [1]
The extreme goal of "eliminating the human from the field" produced the well-known ironies of automation, [2] leading researchers in the related fields to shift the paradigm toward "best-fit autonomy for computers" in pursuit of more humane automation solutions.
One of the first human–machine function-allocation methods was presented by P. M. Fitts in 1951 and was used in the design of automation systems. [3] Nevertheless, the concept of function allocation remains problematic after half a century, and the basic validity of formal function-allocation methods has been challenged repeatedly. [4] [5] [6] [7]
Peripheral situations affect the performance of cybernetic systems. A one-shot human-centered automation (HCA) design may therefore outperform a system designed under the "automate as much as possible" philosophy, but it loses the advantages of HCA when the peripheral situations change. [8] [9]
Consequently, automation solutions should be smart enough to adapt the level of automation (LOA) to changes in peripheral situations. This concept is known as adaptive automation [10] or adjustable autonomy; [11] however, the term "adaptive autonomy" (AA) [12] [13] [14] is widely considered more appropriate because it avoids confusion with phrases such as adaptive control and adaptive automation in control-systems terminology.
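The idea of adapting the LOA to peripheral situations can be sketched in code. The metrics, thresholds, and scale below are illustrative assumptions, not a method from the literature: operator workload and environmental risk are taken as normalized inputs, and the LOA is chosen on a hypothetical 1–10 scale.

```python
def select_loa(operator_workload, environment_risk, loa_min=1, loa_max=10):
    """Map situation metrics (each in [0, 1]) to a level of automation.

    High operator workload argues for more automation; high environmental
    risk or uncertainty argues for keeping the human in the loop.
    All names and weights here are hypothetical.
    """
    score = operator_workload - environment_risk  # in [-1, 1]
    loa = round(loa_min + (loa_max - loa_min) * (score + 1) / 2)
    return max(loa_min, min(loa_max, loa))
```

Under this sketch, a high-workload, low-risk situation yields a high LOA (the system automates more), while a low-workload, high-risk situation hands control back to the operator.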
Fitts's law is a predictive model of human movement primarily used in human–computer interaction and ergonomics. The law predicts that the time required to rapidly move to a target area is a function of the ratio between the distance to the target and the width of the target. Fitts's law is used to model the act of pointing, either by physically touching an object with a hand or finger, or virtually, by pointing to an object on a computer monitor using a pointing device. It was initially developed by Paul Fitts.
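The relationship Fitts's law predicts can be written down directly. The sketch below uses the common Shannon formulation, MT = a + b · log2(D/W + 1), where D is the distance to the target and W is its width; the constants a and b are device-dependent regression parameters, and the default values below are illustrative assumptions only.

```python
import math

def movement_time(distance, width, a=0.1, b=0.15):
    """Predicted time (s) to acquire a target of width `width` at `distance`.

    Shannon formulation of Fitts's law; a and b are illustrative, not
    empirically fitted, coefficients.
    """
    index_of_difficulty = math.log2(distance / width + 1)  # bits
    return a + b * index_of_difficulty
```

A farther or narrower target has a higher index of difficulty and hence a longer predicted movement time, which is the qualitative content of the law.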
Autonomic computing (AC) refers to distributed computing resources with self-managing characteristics, adapting to unpredictable changes while hiding intrinsic complexity from operators and users. Initiated by IBM in 2001, this initiative ultimately aimed to develop computer systems capable of self-management, to overcome the rapidly growing complexity of computing systems management, and to reduce the barrier that complexity poses to further growth.
The following outline is provided as an overview of and topical guide to human–computer interaction:
Swarm robotics is an approach to the coordination of multiple robots as a system which consists of large numbers of mostly simple physical robots. "In a robot swarm, the collective behavior of the robots results from local interactions between the robots and between the robots and the environment in which they act." It is supposed that a desired collective behavior emerges from the interactions between the robots and the interactions of robots with the environment. This approach emerged in the field of artificial swarm intelligence, as well as from biological studies of insects, ants, and other instances in nature where swarm behaviour occurs.
A mobile robot is an automatic machine that is capable of locomotion. Mobile robotics is usually considered to be a subfield of robotics and information engineering.
Thomas B. Sheridan is an American professor emeritus of mechanical engineering and applied psychology at the Massachusetts Institute of Technology. He is a pioneer of robotics and remote control technology.
Supervisory control is a general term for control of many individual controllers or control loops, such as within a distributed control system. It refers to a high level of overall monitoring of individual process controllers, which is not necessary for the operation of each controller, but gives the operator an overall plant process view, and allows integration of operation between controllers.
Neuroergonomics is the application of neuroscience to ergonomics. Traditional ergonomic studies rely predominantly on psychological explanations to address human factors issues such as: work performance, operational safety, and workplace-related risks. Neuroergonomics, in contrast, addresses the biological substrates of ergonomic concerns, with an emphasis on the role of the human nervous system.
The following outline is provided as an overview of and topical guide to automation:
Adaptable robotics refers to a field of robotics focused on creating robotic systems capable of adjusting their hardware and software components to perform a wide range of tasks while adapting to varying environments. Robotics entered industrial use in the 1960s. Since then, the need for robots with new forms of actuation, adaptability, sensing and perception, and even the ability to learn gave rise to the field of adaptable robotics. Significant developments such as the PUMA robot, manipulation research, soft robotics, swarm robotics, AI, cobots, bio-inspired approaches, and other ongoing research have advanced the field tremendously. Adaptable robots are usually associated with a development kit, typically used to create autonomous mobile robots. In some cases, an adaptable kit will remain functional even when certain components break.
Daniela L. Rus is a roboticist and computer scientist, Director of the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL), and the Andrew and Erna Viterbi Professor in the Department of Electrical Engineering and Computer Science (EECS) at the Massachusetts Institute of Technology. She is the author of the books Computing the Future and The Heart and the Chip.
Robotics is the interdisciplinary study and practice of the design, construction, operation, and use of robots.
A mobile manipulator is a robot system built from a robotic manipulator arm mounted on a mobile platform.
Adaptive collaborative control is the decision-making approach used in hybrid models consisting of finite-state machines with functional models as subcomponents to simulate the behavior of systems formed through partnerships of multiple agents for the execution of tasks and the development of work products. The term "collaborative control" originated from work developed in the late 1990s and early 2000s by Fong, Thorpe, and Baur (1999). According to Fong et al., for robots to function in collaborative control, they must be self-reliant, aware, and adaptive. In the literature, the adjective "adaptive" is not always shown, but it is noted in the official sense because it is an important element of collaborative control. The adaptation of traditional applications of control theory in teleoperation initially sought to reduce the sovereignty of "humans as controllers / robots as tools" and had humans and robots working as peers, collaborating to perform tasks and to achieve common goals. Early implementations of adaptive collaborative control centered on vehicle teleoperation. Recent uses cover training, analysis, and engineering applications in teleoperation between humans and multiple robots, multiple robots collaborating among themselves, unmanned vehicle control, and fault-tolerant controller design.
Cloud robotics is a field of robotics that attempts to invoke cloud technologies such as cloud computing, cloud storage, and other Internet technologies centered on the benefits of converged infrastructure and shared services for robotics. When connected to the cloud, robots can benefit from the powerful computation, storage, and communication resources of modern data centers, which can process and share information from various robots or agents. Humans can also delegate tasks to robots remotely through networks. Cloud computing technologies enable robot systems to be endowed with powerful capabilities whilst reducing costs. Thus, it is possible to build lightweight, low-cost, smarter robots with an intelligent "brain" in the cloud, consisting of a data center, knowledge base, task planners, deep learning, information processing, environment models, communication support, etc.
Human performance modeling (HPM) is a method of quantifying human behavior, cognition, and processes. It is a tool used by human factors researchers and practitioners for both the analysis of human function and for the development of systems designed for optimal user experience and interaction. It is a complementary approach to other usability testing methods for evaluating the impact of interface features on operator performance.
Human-robot collaboration is the study of collaborative processes in which human and robot agents work together to achieve shared goals. Many new applications require robots to work alongside people as capable members of human-robot teams, including robots for homes, hospitals, and offices, as well as for space exploration and manufacturing. Human-Robot Collaboration (HRC) is an interdisciplinary research area comprising classical robotics, human-computer interaction, artificial intelligence, process design, layout planning, ergonomics, cognitive sciences, and psychology.
Semi-automation is a process or procedure that is performed by the combined activities of man and machine with both human and machine steps typically orchestrated by a centralized computer controller.
Saverio Mascolo is an Italian information engineer, academic, and researcher. He is the former Head of the Department of Electrical Engineering and Information Science and a professor of Automatic Control at the Department of Ingegneria Elettrica e dell'Informazione (DEI) at Politecnico di Bari, Italy.
The out-of-the-loop performance problem arises when an operator suffers a performance decrement as a consequence of automation. The potential loss of skills and of situation awareness caused by vigilance and complacency problems might make operators of automated systems unable to operate manually in case of system failure. Highly automated systems reduce the operator to a monitoring role, which diminishes the chances for the operator to understand the system. It is related to mind wandering.