Human error

Human error is an action that has been done but that was "not intended by the actor; not desired by a set of rules or an external observer; or that led the task or system outside its acceptable limits". [1] Human error has been cited as a primary cause of, or contributing factor in, disasters and accidents in industries as diverse as nuclear power (e.g., the Three Mile Island accident), aviation, space exploration (e.g., the Space Shuttle Challenger and Space Shuttle Columbia disasters), and medicine. Prevention of human error is generally seen as a major contributor to the reliability and safety of (complex) systems. Human error is one of the many contributing causes of risk events.

Definition

A sign with a spelling mistake; the word "road" has been spelled incorrectly with a P instead of an R.

Human error refers to something having been done that was "not intended by the actor; not desired by a set of rules or an external observer; or that led the task or system outside its acceptable limits". [1] In short, it is a deviation from intention, expectation or desirability. [1] Logically, human actions can fail to achieve their goal in two different ways: the actions can go as planned, but the plan can be inadequate (leading to mistakes); or, the plan can be satisfactory, but the performance can be deficient (leading to slips and lapses). [2] [3] However, a mere failure is not an error if there had been no plan to accomplish something in particular. [1]

Performance

The Custom House in Dublin, which was built the wrong way around (the side facing the Liffey was intended to be the side facing Gardiner Street).

Human error and human performance are two sides of the same coin: "human error" mechanisms are the same as "human performance" mechanisms, and performance is only categorized as 'error' in hindsight. [3] [4] Actions later termed "human error" are therefore part of the ordinary spectrum of human behaviour. The study of absent-mindedness in everyday life provides ample documentation and categorization of such aspects of behavior. While human error is firmly entrenched in the classical approaches to accident investigation and risk assessment, it has no role in newer approaches such as resilience engineering. [5]

Categories

There are many ways to categorize human error, [6] [7] for example as exogenous versus endogenous error (originating outside versus inside the individual), [8] by the stage at which it arises (situation assessment versus response planning), [9] by level of analysis (for example, perceptual versus cognitive versus communicative versus organizational), [10] as errors of physical manipulation such as 'slips' and 'lapses', [11] or as active versus latent errors. [12]

Sources

The cognitive study of human error is a very active research field, including work related to limits of memory and attention and also to decision making strategies such as the availability heuristic and other cognitive biases. Such heuristics and biases are strategies that are useful and often correct, but can lead to systematic patterns of error.

Misunderstandings as a topic in human communication have been studied in conversation analysis, such as the examination of violations of the cooperative principle and Gricean maxims.

Organizational studies of error or dysfunction have included studies of safety culture. One technique for analyzing complex systems failure that incorporates organizational analysis is management oversight and risk tree (MORT) analysis. [13] [14] [15]

Controversies

A statue in Hartlepool, England, commemorating the "Hartlepool monkey", a primate who was mistaken by locals for a French soldier and killed.

Some researchers have argued that the dichotomy of human actions as "correct" or "incorrect" is a harmful oversimplification of a complex phenomenon. [16] [17] A focus on the variability of human performance, and on how human operators (and organizations) can manage that variability, may be a more fruitful approach. Newer approaches, such as the resilience engineering mentioned above, highlight the positive roles that humans can play in complex systems. In resilience engineering, successes (things that go right) and failures (things that go wrong) are seen as having the same basis, namely human performance variability. A specific account of this is the efficiency–thoroughness trade-off principle, [18] which can be found at all levels of human activity, individual as well as collective.

Related Research Articles

Systems engineering

Systems engineering is an interdisciplinary field of engineering and engineering management that focuses on how to design, integrate, and manage complex systems over their life cycles. At its core, systems engineering utilizes systems thinking principles to organize this body of knowledge. The individual outcome of such efforts, an engineered system, can be defined as a combination of components that work in synergy to collectively perform a useful function.

Safety engineering

Safety engineering is an engineering discipline which assures that engineered systems provide acceptable levels of safety. It is strongly related to industrial engineering/systems engineering, and the subset system safety engineering. Safety engineering assures that a life-critical system behaves as needed, even when components fail.

Safety

Safety is the state of being "safe", the condition of being protected from harm or other danger. Safety can also refer to the control of recognized hazards in order to achieve an acceptable level of risk.

In the field of human factors and ergonomics, human reliability is the probability that a human performs a task to a sufficient standard. Reliability of humans can be affected by many factors such as age, physical health, mental state, attitude, emotions, personal propensity for certain mistakes, and cognitive biases.

Reliability engineering is a sub-discipline of systems engineering that emphasizes the ability of equipment to function without failure. Reliability describes the ability of a system or component to function under stated conditions for a specified period of time. Reliability is closely related to availability, which is typically described as the ability of a component or system to function at a specified moment or interval of time.
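
A concrete illustration of the relationship between reliability and availability described above is the standard steady-state availability formula A = MTBF / (MTBF + MTTR). The short sketch below applies it; the MTBF and MTTR figures are illustrative assumptions, not data for any particular system.

```python
def steady_state_availability(mtbf_hours: float, mttr_hours: float) -> float:
    """Steady-state availability: the fraction of time the system is operational."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

# Assumed, illustrative figures: a component that fails on average every
# 1,000 hours of operation and takes 8 hours to repair.
availability = steady_state_availability(mtbf_hours=1000.0, mttr_hours=8.0)
print(f"Availability: {availability:.4%}")  # about 99.21%
```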

Cognitive ergonomics is a scientific discipline that studies, evaluates, and designs tasks, jobs, products, environments and systems and how they interact with humans and their cognitive abilities. It is defined by the International Ergonomics Association as "concerned with mental processes, such as perception, memory, reasoning, and motor response, as they affect interactions among humans and other elements of a system. Cognitive ergonomics is responsible for how work is done in the mind, meaning that the quality of work depends on the person's understanding of situations. Situations could include the goals, means, and constraints of work. The relevant topics include mental workload, decision-making, skilled performance, human-computer interaction, human reliability, work stress and training as these may relate to human-system design." Cognitive ergonomics studies cognition in work and operational settings in order to optimize human well-being and system performance. It is a subset of the larger field of human factors and ergonomics.

A high reliability organization (HRO) is an organization that has succeeded in avoiding catastrophes in an environment where normal accidents can be expected due to risk factors and complexity.

High availability (HA) is a characteristic of a system that aims to ensure an agreed level of operational performance, usually uptime, for a higher than normal period.
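
To make the uptime figures quoted for high availability tangible, the sketch below converts an availability target into the maximum downtime it permits per year; this is simple arithmetic on the definition of availability, and the targets shown are just common examples ("two nines" through "four nines").

```python
HOURS_PER_YEAR = 365.25 * 24  # about 8,766 hours

def max_downtime_hours(availability: float) -> float:
    """Maximum downtime per year permitted by a given availability target."""
    return (1.0 - availability) * HOURS_PER_YEAR

for target in (0.99, 0.999, 0.9999):
    print(f"{target:.2%} uptime allows about {max_downtime_hours(target):.2f} hours of downtime per year")
```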

A system accident is an "unanticipated interaction of multiple failures" in a complex system. This complexity can lie in the technology or in the human organization, and frequently in both. A system accident can be easy to see in hindsight, but extremely difficult to foresee, because there are simply too many action pathways to seriously consider all of them. Charles Perrow first developed these ideas in the mid-1980s. Safety systems themselves are sometimes the added complexity which leads to this type of accident.

Accident analysis

Accident analysis is a process carried out in order to determine the cause or causes of an accident so as to prevent further accidents of a similar kind. It is part of accident investigation or incident investigation. These analyses may be performed by a range of experts, including forensic scientists, forensic engineers or health and safety advisers. Accident investigators, particularly those in the aircraft industry, are colloquially known as "tin-kickers". Health and safety and patient safety professionals prefer the term "incident" to the term "accident". Because of its retrospective nature, accident analysis is primarily an exercise of directed explanation, conducted using the theories or methods the analyst has to hand, which direct the way in which the events, aspects, or features of the accident are highlighted and explained. Such analyses are also invaluable in preventing future incidents: by determining root causes, they provide insight into the failures that led to the incident.

The Technique for Human Error-Rate Prediction (THERP) is a technique used in the field of Human Reliability Assessment (HRA) to evaluate the probability of a human error occurring throughout the completion of a task. From such an analysis, corrective measures can be taken to reduce the likelihood of errors occurring within a system. The overall goal of THERP is to apply and document probabilistic methodological analyses to increase safety during a given process. THERP is used in fields such as error identification, error quantification and error reduction.
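
The quantification step in THERP-style analyses can be illustrated with a deliberately simplified calculation: if each step of a task has a nominal human error probability (HEP) and the steps are treated as independent, the chance of at least one error in the task is one minus the product of the per-step success probabilities. This is only a sketch of the idea; THERP itself works with event trees, recovery factors and dependence adjustments, and the HEP values below are hypothetical.

```python
from math import prod

def task_error_probability(step_heps: list[float]) -> float:
    """P(at least one human error) for independent steps: 1 - prod(1 - HEP_i).
    A simplification of THERP, which uses event trees and dependence modelling."""
    return 1.0 - prod(1.0 - hep for hep in step_heps)

# Hypothetical per-step HEPs for a four-step procedure (illustrative only).
step_heps = [0.003, 0.01, 0.001, 0.005]
print(f"Probability of at least one error: {task_error_probability(step_heps):.4f}")
```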

A resilient control system is one that maintains state awareness and an accepted level of operational normalcy in response to disturbances, including threats of an unexpected and malicious nature.

The term use error has recently been introduced to replace the commonly used terms human error and user error. The new term, which has already been adopted by international standards organizations for medical devices, suggests that accidents should be attributed to the circumstances, rather than to the human beings who happened to be there.

Human factors are the physical or cognitive properties of individuals, or social behavior specific to humans, that influence the functioning of technological systems as well as human-environment equilibria. The safety of underwater diving operations can be improved by reducing the frequency of human error and the consequences when it does occur. Human error can be defined as an individual's deviation from acceptable or desirable practice which culminates in undesirable or unexpected results.

Dive safety is primarily a function of four factors: the environment, equipment, individual diver performance and dive team performance. The water is a harsh and alien environment which can impose severe physical and psychological stress on a diver. The remaining factors must be controlled and coordinated so the diver can overcome the stresses imposed by the underwater environment and work safely. Diving equipment is crucial because it provides life support to the diver, but the majority of dive accidents are caused by individual diver panic and an associated degradation of the individual diver's performance. - M.A. Blumenberg, 1996

Ergonomics

Ergonomics, also known as human factors or human factors engineering (HFE), is the application of psychological and physiological principles to the engineering and design of products, processes, and systems. Primary goals of human factors engineering are to reduce human error, increase productivity and system availability, and enhance safety, health and comfort with a specific focus on the interaction between the human and equipment.

A defence in depth uses multi-layered protections, similar to redundant protections, to create a reliable system despite any one layer's unreliability.
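
The intuition behind defence in depth can be shown with a small, hedged calculation: if the protective layers are assumed to fail independently, the probability that a hazard defeats every layer is the product of the individual layers' failure probabilities, so several individually unreliable layers can still combine into a reliable whole. The layer values below are assumptions chosen for illustration; in practice layers are rarely fully independent, which is why common-cause failures matter.

```python
from math import prod

def unmitigated_probability(layer_failure_probs: list[float]) -> float:
    """Probability that a hazard passes every protective layer,
    assuming the layers fail independently (an idealization)."""
    return prod(layer_failure_probs)

# Three individually imperfect layers (assumed, illustrative failure probabilities).
layers = [0.1, 0.05, 0.2]
print(f"Probability that all layers fail together: {unmitigated_probability(layers):.4f}")  # 0.0010
```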

Collaborative Control Theory (CCT) is a collection of principles and models for supporting the effective design of collaborative e-Work systems. Beyond human collaboration, advances in information and communications technologies, artificial intelligence, multi-agent systems, and cyber physical systems have enabled cyber-supported collaboration in highly distributed organizations of people, robots, and autonomous systems. The fundamental premise of CCT is: without effective augmented collaboration by cyber support, working in parallel to and in anticipation of human interactions, the potential of emerging activities such as e-Commerce, virtual manufacturing, telerobotics, remote surgery, building automation, smart grids, cyber-physical infrastructure, precision agriculture, and intelligent transportation systems cannot be fully and safely materialized. CCT addresses the challenges and emerging solutions of such cyber-collaborative systems, with emphasis on issues of computer-supported and communication-enabled integration, coordination and augmented collaboration. CCT is composed of eight design principles: (1) Collaboration Requirement Planning (CRP); (2) e-Work Parallelism (EWP); (3) Keep It Simple, System (KISS); (4) Conflict/Error Detection and Prevention (CEDP); (5) Fault Tolerance by Teaming (FTT); (6) Association/Dissociation (AD); (7) Dynamic Lines of Collaboration (DLOC); and (8) Best Matching (BM).

David D. Woods is an American safety systems researcher who studies human coordination and automation issues in a wide range of safety-critical fields such as nuclear power, aviation, space operations, critical care medicine, and software services. He is one of the founding researchers of the fields of cognitive systems engineering and resilience engineering.

Dr. Richard I. Cook was a system safety researcher, physician, anesthesiologist, university professor, and software engineer. Cook did research in safety, incident analysis, cognitive systems engineering, and resilience engineering across a number of fields, including critical care medicine, aviation, air traffic control, space operations, semiconductor manufacturing, and software services.

Resilience engineering is a subfield of safety science research that focuses on understanding how complex adaptive systems cope when encountering a surprise. The term resilience in this context refers to the capabilities that a system must possess in order to deal effectively with unanticipated events. Resilience engineering examines how systems build, sustain, degrade, and lose these capabilities.

References

  1. Senders, J.W. and Moray, N.P. (1991). Human Error: Cause, Prediction, and Reduction. Lawrence Erlbaum Associates, p. 25. ISBN 0-89859-598-3.
  2. Hollnagel, E. (1993). Human Reliability Analysis: Context and Control. Academic Press Limited. ISBN 0-12-352658-2.
  3. Reason, James (1990). Human Error. Cambridge University Press. ISBN 0-521-31419-4.
  4. Woods, 1990
  5. Hollnagel, E., Woods, D. D. & Leveson, N. G. (2006). Resilience engineering: Concepts and precepts. Aldershot, UK: Ashgate
  6. Jones, 1999
  7. Wallace and Ross, 2006
  8. Senders and Moray, 1991
  9. Roth et al., 1994
  10. Sage, 1992
  11. Norman, 1988
  12. DOE HDBK-1028-2009 ( https://www.standards.doe.gov/standards-documents/1000/1028-BHdbk-2009-v1/@@images/file)
  13. Rasmussen, Jens; Pejtersen, Annelise M.; Goodstein, L.P. (1994). Cognitive Systems Engineering. John Wiley & Sons. ISBN   0471011983.
  14. "The Management Oversight and Risk Tree (MORT)". International Crisis Management Association. Archived from the original on 27 September 2014. Retrieved 1 October 2014.
  15. Entry for MORT on the FAA Human Factors Workbench
  16. Hollnagel, E. (1983). "Human error". Position paper for the NATO Conference on Human Error, August 1983, Bellagio, Italy.
  17. Hollnagel, E. and Amalberti, R. (2001). "The Emperor's New Clothes, or whatever happened to 'human error'?" Invited keynote presentation at the 4th International Workshop on Human Error, Safety and System Development, Linköping, June 11–12, 2001.
  18. Hollnagel, Erik (2009). The ETTO Principle: Efficiency-Thoroughness Trade-Off: Why Things That Go Right Sometimes Go Wrong. Farnham, England; Burlington, VT: Ashgate. ISBN 978-0-7546-7678-2. OCLC 432428967.