Use error

The term use error has recently been introduced to replace the commonly used terms human error and user error. The new term, which has already been adopted by international standards organizations for medical devices (see Use errors in health care below for references), suggests that accidents should be attributed to the circumstances, rather than to the human beings who happened to be there.

The need for the terminological change

The term "use error" was first used in May 1995 in an MD+DI guest editorial, “The Issue Is ‘Use,’ Not ‘User,’ Error,” by William Hyman. [1] Traditionally, human errors are considered as a special aspect of human factors. Accordingly, they are attributed to the human operator, or user. When taking this approach, we assume that the system design is perfect, and the only source for the use errors is the human operator. For example, the U.S. Department of Defense (DoD) HFACS [2] classifies use errors attributed to the human operator, disregarding improper design and configuration setting, which often result in missing alarms, or in inappropriate alerting.

The need to change the term arose from a common malpractice among stakeholders (the responsible organizations, the authorities, journalists) in cases of accidents. [3] Instead of investing in fixing the error-prone design, management attributed the error to the users. The need for the change has also been pointed out by accident investigators.

Use errors vs. force majeure

A mishap is typically considered either a use error or a force majeure. [8]

Use errors in health care

In 1998, Cook, Woods and Miller presented the concept of hindsight bias, exemplified by celebrated accidents in medicine, in a report for a workgroup on patient safety. [10] The workgroup pointed at the tendency to attribute accidents in health care to isolated human failures. They cite early research on how knowledge of the outcome, which was unavailable beforehand, affects later judgement about the processes that led up to that outcome. They explain that, in looking back, we tend to oversimplify the situation that the actual practitioners faced, and they conclude that focusing on hindsight knowledge prevents us from understanding the richer story: the circumstances of the human error.

According to this position, the term use error is formally defined in several international standards, such as IEC 62366, ISO 14155 and ISO 14971, to describe

an act or omission of an act that results in a different medical device response than intended by the manufacturer or expected by the user.
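
To make the definition concrete, the following minimal Python sketch records a single interaction and flags it as a use error when the observed device response differs from the response intended by the manufacturer or from the response expected by the user. The class, its field names and the infusion-pump scenario are hypothetical illustrations, not part of any standard.

    from dataclasses import dataclass

    @dataclass
    class DeviceInteraction:
        """One user act (or omission) and the device response it produced.

        Field names are illustrative only; they are not taken from IEC 62366.
        """
        action: str                # the user's act, or a note that an act was omitted
        intended_response: str     # response intended by the manufacturer
        expected_response: str     # response expected by the user
        observed_response: str     # response the device actually produced

        def is_use_error(self) -> bool:
            # Per the definition quoted above: the act (or omission) results in a
            # device response different from the one intended by the manufacturer
            # or expected by the user.
            return (self.observed_response != self.intended_response
                    or self.observed_response != self.expected_response)

    # Hypothetical infusion-pump scenario: the device behaves exactly as designed,
    # but not as the user expected, so the event still counts as a use error.
    event = DeviceInteraction(
        action="pressed the '5' key twice while entering the dose",
        intended_response="dose set to 55 mL/h",
        expected_response="dose set to 5 mL/h",
        observed_response="dose set to 55 mL/h",
    )
    print(event.is_use_error())  # True

Under this reading, an event can count as a use error even when the device behaves exactly as designed, which fits the intent of attributing the problem to the circumstances of use rather than to the user alone.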

ISO standards on medical devices and procedures provide examples of use errors attributed to human factors, including slips, lapses and mistakes. In practice, this means that the errors are attributed to the user, implying the user's accountability. The U.S. Food and Drug Administration glossary of medical devices provides the following explanation of the term: [11]

"Safe and effective use of a medical device means that users do not make errors that lead to injury and they achieve the desired medical treatment. If safe and effective use is not achieved, use error has occurred. Why and how use error occurs is a human factors concern."

With this interpretation by ISO and the FDA, the term ‘use error’ is effectively synonymous with ‘user error’. Another approach, which distinguishes ‘use errors’ from ‘user errors’, is taken by IEC 62366, whose Annex A includes an explanation justifying the new term:

"This International Standard uses the concept of use error. This term was chosen over the more commonly used term of “human error” because not all errors associated with the use of medical device are the result of oversight or carelessness of the part of the user of the medical device. Much more commonly, use errors are the direct result of poor user interface design."

This explanation is consistent with “The New View”, which Sidney Dekker suggested as an alternative to “The Old View”. This interpretation favors investigations intended to understand the situation, rather than blaming the operators.

In a 2011 draft report on health IT usability, the U.S. National Institute of Standards and Technology (NIST) defines "use error" in healthcare IT this way: “Use error is a term used very specifically to refer to user interface designs that will engender users to make errors of commission or omission. It is true that users do make errors, but many errors are due not to user error per se but due to designs that are flawed, e.g., poorly written messaging, misuse of color-coding conventions, omission of information, etc.". [12]

Example of user error

An example of an accident due to a user error is the ecological disaster of 1967 caused by the Torrey Canyon supertanker. The accident was due to a combination of several exceptional events, the result of which was that the supertanker was heading directly toward the rocks. At that point, the captain failed to change course because the steering control lever had inadvertently been set to the Control position, which disconnected the rudder from the wheel at the helm. [13]
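
As a rough illustration of this kind of mode error (and not a model of the actual ship's steering system), the Python sketch below shows how a mode-aware design could surface the fact that wheel input is being ignored, instead of discarding it silently. The mode names and the small API are invented for the example.

    from enum import Enum

    class SteeringMode(Enum):
        MANUAL = "manual"     # wheel input drives the rudder
        CONTROL = "control"   # wheel is disconnected from the rudder (hypothetical naming)

    class Helm:
        def __init__(self) -> None:
            self.mode = SteeringMode.MANUAL
            self.rudder_angle = 0.0

        def turn_wheel(self, degrees: float) -> None:
            if self.mode is not SteeringMode.MANUAL:
                # Surface the mode mismatch instead of silently discarding the input;
                # the absence of such feedback is what the Torrey Canyon account illustrates.
                print(f"WARNING: wheel input ignored; steering mode is '{self.mode.value}'")
                return
            self.rudder_angle += degrees

    helm = Helm()
    helm.mode = SteeringMode.CONTROL  # lever inadvertently left in the wrong position
    helm.turn_wheel(-20)              # prints a warning rather than doing nothing silently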

Examples of user failure to handle system failure

Examples of this second type of mishap, in which the user fails to handle a system failure, are the Three Mile Island accident, the New York City blackout that followed a storm, and the Bhopal disaster at a chemical plant in India.

Classifying use errors

The URM Model [14] characterizes use errors in terms of the user's failure to manage a system deficiency. Six categories of use errors are described in a URM document (a brief tagging sketch follows the list):

  1. Expected faults with risky results;
  2. Expected faults with unexpected results;
  3. Expected user errors in identifying risky situations;
  4. User errors in handling expected faults;
  5. Expected errors in function selection;
  6. Unexpected faults, due to operating in exceptional states.
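
As a minimal sketch of how such a classification could be applied, incident reports can be tagged with one of the six categories and then counted by type. The enum member names and the sample incidents below are invented for illustration and do not come from the URM document.

    from collections import Counter
    from enum import Enum, auto

    class UseErrorCategory(Enum):
        """The six URM categories listed above (hypothetical member names)."""
        EXPECTED_FAULT_RISKY_RESULT = auto()
        EXPECTED_FAULT_UNEXPECTED_RESULT = auto()
        ERROR_IDENTIFYING_RISKY_SITUATION = auto()
        ERROR_HANDLING_EXPECTED_FAULT = auto()
        ERROR_IN_FUNCTION_SELECTION = auto()
        UNEXPECTED_FAULT_EXCEPTIONAL_STATE = auto()

    # Hypothetical incident reports, each tagged with a category.
    incidents = [
        ("alarm silenced during a known sensor fault", UseErrorCategory.ERROR_HANDLING_EXPECTED_FAULT),
        ("wrong program selected at start-up", UseErrorCategory.ERROR_IN_FUNCTION_SELECTION),
        ("second sensor fault handled with the wrong procedure", UseErrorCategory.ERROR_HANDLING_EXPECTED_FAULT),
    ]

    # Counting by category shows which kinds of system deficiency users most often fail to manage.
    counts = Counter(category for _, category in incidents)
    for category, count in counts.most_common():
        print(f"{category.name}: {count}")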

Criticism

Erik Hollnagel argues that going from an 'old' view to a 'new' view is not enough: one should go all the way to a 'no' view. This means that the notion of error, whether user error or use error, might be destructive rather than constructive. Instead, he proposes focusing on the performance variability of everyday actions, on the basis that this performance variability is both useful and necessary. In most cases the result is that things go right, and in a few cases that things go wrong, but the reason is the same. [15] Hollnagel expanded on this in his writings about the efficiency–thoroughness trade-off principle, [16] resilience engineering, [17] and the Resilient Health Care Net. [18]


References

  1. "The Issue is 'Use,' Not 'User,' Error". 10 July 2013.
  2. , Department of Defense Human Factors Analysis and Classification System: A mishap investigation and data analysis tool
  3. , Dekker: Reinvention of Human Error
  4. , Erik Hollnagel home page
  5. , Hollnagel: Why "Human Error" is a Meaningless Concept
  6. , Steve Casey: Set Phasers on Stun
  7. , Sidney Dekker: The Field Guide to Understanding Human Error
  8. , Weiler and Harel: Managing the Risks of Use Errors: The ITS Warning Systems Case Study
  9. , Dekker, 2007: The Field Guide to Understanding Human Error
  10. , Cook RI, Woods DD, Miller C [1998] A Tale of Two Stories: Contrasting Views of Patient Safety
  11. , FDA, Medical Devices, Glossary
  12. NISTIR 7804: Technical Evaluation, Testing and Validation of the Usability of Electronic Health Records, Draft, Sept. 2011, pg. 10.
  13. Steve Casey, A memento of your service, in Set Phasers on Stun, 1998
  14. , Zonnenshain & Harel: Task-oriented SE, INCOSE 2009 Conference, Singapore
  15. Hollnagel: Understanding accidents-from root causes to performance variability
  16. , The ETTO Principle – Efficiency-Thoroughness Trade-Off
  17. Hollnagel, Paries, Woods, Wreathall (editors): Resilience engineering in practice
  18. the Resilient Health Care Net