Ironies of Automation

"Ironies of Automation" is a research paper written by Lisanne Bainbridge and published in Automatica in 1983, [1] and has been widely recognized as a pioneering statement of the problems inherent in automation.

Bainbridge argues that automating most of the work creates new and severe problems of its own: the human operator is left responsible for the tasks that cannot be automated, yet no longer practices the relevant skills as part of ongoing work, and must additionally carry out exhausting monitoring tasks. Thus, rather than needing less training, operators need more training, so that they are ready for the rare but crucial interventions.

Barry Strauch, analyzing the paper's significance, observes that by November 2016 it had attracted 1,800 citations, far more than other influential works on the topic, and that "The number of citations of Bainbridge's work, large as it is, is also increasing at a considerable rate." [2] Retrospectives on "Ironies of Automation" and its significance have appeared in both IEEE and ACM publications. [3]

Author

Lisanne Bainbridge is a cognitive psychologist who was active in human factors research between the late 1960s and 1998. She obtained a doctorate in 1972 for work on process controllers [4] and went on to publish research on mental workload, process operation, and related topics. She taught at the University of Reading and University College London. [5] (See the references in Strauch for a partial bibliography, or her home page link below.)

See her home page at https://www.complexcognition.co.uk/p/home.html, including a note on the relation to Rasmussen's work.


References

  1. Bainbridge, Lisanne (1983-11-01). "Ironies of automation". Automatica. 19 (6): 775–779. doi:10.1016/0005-1098(83)90046-8. ISSN 0005-1098. S2CID 12667742.
  2. Strauch, B. (October 2018). "Ironies of Automation: Still Unresolved After All These Years". IEEE Transactions on Human-Machine Systems. 48 (5): 419–433. doi:10.1109/THMS.2017.2732506. S2CID 52280314.
  3. Baxter, Gordon; Rooksby, John; Wang, Yuanzhi; Khajeh-Hosseini, Ali (2012). "The ironies of automation". Proceedings of the 30th European Conference on Cognitive Ergonomics. ECCE '12. New York, NY, USA: ACM. pp. 65–71. doi:10.1145/2448136.2448149. ISBN 9781450317863. S2CID 15903320.
  4. Bainbridge, Lisanne (August 1999). "Verbal Reports As Evidence of the Process Operator's Knowledge". International Journal of Human-Computer Studies. 51 (2): 213–238. doi:10.1006/ijhc.1979.0307. ISSN 1071-5819.
  5. "Lisanne Bainbridge on data.bnf.fr".