Latent human error

Latent human error is a term used in safety work and accident prevention, especially in aviation, to describe human errors that people are predisposed to make because the systems or routines they work with are designed in a way that invites those errors. Latent human errors are frequent contributing factors in accidents. The error is latent and may not materialize immediately; thus, latent human error does not cause immediate or obvious damage, and discovering latent errors is difficult and requires a systematic approach.[1] Latent human error is often discussed in aviation incident investigation, and contributes to over 70% of accidents.[2]

By gathering data about the errors made, then collating, grouping and analyzing them, it can be determined whether a disproportionate number of similar errors are being made. If so, a contributing factor may be a mismatch between the systems or routines in question and human nature or propensities. The routines or systems can then be analyzed, potential problems identified, and amendments made where necessary, in order to prevent future errors, incidents or accidents.
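As a minimal sketch of this kind of analysis, error reports can be tallied by category and any category that occurs disproportionately often can be flagged for review of the underlying routine. The categories, threshold and data below are hypothetical and only illustrate the collate, group and count workflow:

```python
from collections import Counter

# Hypothetical error reports, each already classified into a category.
reports = [
    "checklist skipped", "altimeter mis-set", "checklist skipped",
    "radio readback error", "checklist skipped", "altimeter mis-set",
    "checklist skipped", "fuel calculation error",
]

counts = Counter(reports)
total = len(reports)
expected_share = 1 / len(counts)  # naive baseline: equal share per category

# Flag categories whose observed share is well above the naive baseline.
for category, n in counts.most_common():
    share = n / total
    if share > 1.5 * expected_share:
        print(f"Review the routine behind: {category} "
              f"({n}/{total} reports, {share:.0%})")
```

In practice the categories would come from a standardized classification scheme and the threshold from a statistical test, but the underlying workflow of collating, grouping and comparing is the same.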

Related Research Articles

In aviation, a controlled flight into terrain is an accident in which an airworthy aircraft, fully under pilot control, is unintentionally flown into the ground, a mountain, a body of water or an obstacle. In a typical CFIT scenario, the crew is unaware of the impending collision until impact, or it is too late to avert. The term was coined by engineers at Boeing in the late 1970s.

In the field of human factors and ergonomics, human reliability is the probability that a human performs a task to a sufficient standard. Reliability of humans can be affected by many factors such as age, physical health, mental state, attitude, emotions, personal propensity for certain mistakes, and cognitive biases.
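A common quantity in human reliability analysis is the human error probability (HEP), estimated as the number of observed errors divided by the number of opportunities for error. The sketch below is an illustrative calculation with made-up numbers, not data from any study:

```python
def human_error_probability(errors_observed: int, opportunities: int) -> float:
    """Basic HEP estimate: observed errors per opportunity for error."""
    return errors_observed / opportunities

# Hypothetical example: 3 data-entry mistakes in 2,000 entries.
hep = human_error_probability(3, 2000)
reliability = 1 - hep  # probability the task is performed to standard

print(f"HEP = {hep:.4f}, task reliability = {reliability:.4f}")
```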

<span class="mw-page-title-main">Pilot error</span> Decision, action or inaction by a pilot of an aircraft

Pilot error generally refers to an accident in which an action or decision made by the pilot was the cause or a contributing factor, and also includes the pilot's failure to make a correct decision or take proper action. Errors are intentional actions that fail to achieve their intended outcomes. The Chicago Convention defines the term "accident" as "an occurrence associated with the operation of an aircraft [...] in which [...] a person is fatally or seriously injured [...] except when the injuries are [...] inflicted by other persons." Hence the definition of "pilot error" does not include deliberate crashing.

<span class="mw-page-title-main">Safety culture</span> Attitude, beliefs, perceptions and values that employees share in relation to risks in the workplace

Safety culture is the collection of beliefs, perceptions and values that employees share in relation to risks within an organization, such as a workplace or community. Safety culture is a part of organizational culture and has been described in a variety of ways; notably, the National Academies of Sciences and the Association of Land Grant and Public Universities published summaries on the topic in 2014 and 2016.

An incident is an event that could lead to loss of, or disruption to, an organization's operations, services or functions. Incident management (IcM) describes the activities of an organization to identify, analyze, and correct hazards to prevent future recurrence. Within a structured organization, incidents are normally dealt with by an incident response team (IRT), an incident management team (IMT), or an Incident Command System (ICS). Without effective incident management, an incident can disrupt business operations, information security, IT systems, employees, customers, or other vital business functions.

Human error is an action that has been done but that was "not intended by the actor; not desired by a set of rules or an external observer; or that led the task or system outside its acceptable limits". Human error has been cited as a primary cause or contributing factor in disasters and accidents in industries as diverse as nuclear power, aviation, space exploration, and medicine. Prevention of human error is generally seen as a major contributor to the reliability and safety of (complex) systems. Human error is one of the many contributing causes of risk events.

A system accident is an "unanticipated interaction of multiple failures" in a complex system. This complexity can be technological or organizational, and is frequently both. A system accident can be easy to see in hindsight, but extremely difficult to foresee because there are simply too many action pathways to seriously consider all of them. Charles Perrow first developed these ideas in the mid-1980s. Safety systems themselves are sometimes the added complexity which leads to this type of accident.

<span class="mw-page-title-main">Swiss cheese model</span> Model used in risk analysis

The Swiss cheese model of accident causation is a model used in risk analysis and risk management, including aviation safety, engineering, healthcare, emergency service organizations, and as the principle behind layered security, as used in computer security and defense in depth. It likens human systems to multiple slices of Swiss cheese stacked side by side, each slice having randomly placed and sized holes; the risk of a threat becoming a reality is mitigated by the differing layers and types of defenses "layered" behind each other. In theory, lapses and weaknesses in one defense therefore do not allow a risk to materialize, since other defenses exist to prevent a single point of failure. The model was originally formally propounded by James T. Reason of the University of Manchester and has since gained widespread acceptance. It is sometimes called the "cumulative act effect".
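As an illustrative simplification (not part of Reason's original, qualitative formulation), the layered-defense idea can be expressed by treating each defense as failing independently with some probability, so that harm occurs only when the "holes" in every layer line up. The per-layer probabilities below are invented for the example:

```python
from math import prod

# Hypothetical per-layer probabilities that a given hazard slips through
# that defense (its "hole" lines up with the hazard's trajectory).
layer_failure_probs = [0.10, 0.05, 0.02]  # e.g. design, procedures, crew checks

# Assuming independent layers, harm occurs only if every layer fails.
p_accident = prod(layer_failure_probs)
print(f"Probability the hazard penetrates all defenses: {p_accident:.5f}")

# A latent error that silently disables one layer removes its protection.
p_with_latent_hole = prod(layer_failure_probs[1:])  # first layer compromised
print(f"With one layer already breached: {p_with_latent_hole:.5f}")
```

The independence assumption rarely holds in real organizations, which is one reason latent conditions that weaken several layers at once are considered especially dangerous.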

The system safety concept calls for a risk management strategy based on identification and analysis of hazards and the application of remedial controls using a systems-based approach. This differs from traditional safety strategies, which rely on controlling the conditions and causes of an accident based either on epidemiological analysis or on investigation of individual past accidents. The concept of system safety is useful in demonstrating the adequacy of technologies when difficulties are faced with probabilistic risk analysis. The underlying principle is one of synergy: a whole is more than the sum of its parts. A systems-based approach to safety requires the application of scientific, technical and managerial skills to hazard identification, hazard analysis, and the elimination, control, or management of hazards throughout the life cycle of a system, program, project, activity or product. "Hazop" is one of several techniques available for hazard identification.

Fatigue is a major human factors issue in aviation safety. The Fatigue Avoidance Scheduling Tool (FAST) was developed by the United States Air Force in 2000–2001 to address the problem of aircrew fatigue in flight scheduling. FAST is a Windows program that allows scientists, planners and schedulers to quantify the effects of various work-rest schedules on human performance. It accepts work and sleep data in graphic, symbolic (grid) and text formats. The graphic display shows cognitive performance effectiveness as a function of time; an upper green area on the graph corresponds to the roughly 90% effectiveness achieved with normal sleep. The goal of the planner or scheduler is to keep performance effectiveness at or above 90% by manipulating the timing and length of work and rest periods. Work periods are entered as red bands on the time line, and sleep periods as blue bands below them.
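A minimal sketch of the kind of schedule check such a tool supports is shown below. The linear effectiveness model and all numbers are placeholders invented for illustration; FAST itself is built on a validated fatigue model, not anything this simple:

```python
# Toy schedule check: does predicted effectiveness stay at or above 90%
# during every scheduled work hour? The effectiveness model here is a
# deliberately crude placeholder, not the model used by FAST.

def effectiveness(hours_awake: float) -> float:
    """Assume 100% when fully rested, dropping 1.5 points per hour awake."""
    return max(0.0, 100.0 - 1.5 * hours_awake)

# Hypothetical duty day: work from hour 2 to hour 12 after waking.
work_hours = range(2, 13)
violations = [h for h in work_hours if effectiveness(h) < 90.0]

if violations:
    print(f"Effectiveness falls below 90% from hour {violations[0]} awake; "
          "reschedule work or insert a rest period.")
else:
    print("Schedule keeps predicted effectiveness at or above 90%.")
```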

Accident classification is a standardized method in accident analysis by which the causes of an accident, including the root causes, are grouped into categories. Accident classification is mainly used in aviation but can be expanded into other areas, such as railroad or health care. While accident reports are very detailed, the goal of accident classification is to look at a broader picture. By analyzing a multitude of accidents and applying the same standardized classification scheme, patterns in how accidents develop can be detected and correlations can be built. The advantage of a standardized accident classification is that statistical methods can be used to gain more insight into accident causation.

Human factors are the physical or cognitive properties of individuals, or social behavior specific to humans, that influence the functioning of technological systems as well as human-environment equilibria. The safety of underwater diving operations can be improved by reducing the frequency of human error and the consequences when it does occur. Human error can be defined as an individual's deviation from acceptable or desirable practice which culminates in undesirable or unexpected results.

Dive safety is primarily a function of four factors: the environment, equipment, individual diver performance and dive team performance. The water is a harsh and alien environment which can impose severe physical and psychological stress on a diver. The remaining factors must be controlled and coordinated so the diver can overcome the stresses imposed by the underwater environment and work safely. Diving equipment is crucial because it provides life support to the diver, but the majority of dive accidents are caused by individual diver panic and an associated degradation of the individual diver's performance. - M.A. Blumenberg, 1996

Maritime resource management (MRM) or bridge resource management (BRM) is a set of human factors and soft skills training aimed at the maritime industry. The MRM training programme was launched in 1993 – at that time under the name bridge resource management – and aims at preventing accidents at sea caused by human error.

Automation bias is the propensity for humans to favor suggestions from automated decision-making systems and to ignore contradictory information made without automation, even if it is correct. Automation bias stems from the social psychology literature that found a bias in human-human interaction that showed that people assign more positive evaluations to decisions made by humans than to a neutral object. The same type of positivity bias has been found for human-automation interaction, where the automated decisions are rated more positively than neutral. This has become a growing problem for decision making as intensive care units, nuclear power plants, and aircraft cockpits have increasingly integrated computerized system monitors and decision aids to mostly factor out possible human error. Errors of automation bias tend to occur when decision-making is dependent on computers or other automated aids and the human is in an observational role but able to make decisions. Examples of automation bias range from urgent matters like flying a plane on automatic pilot to such mundane matters as the use of spell-checking programs.

<span class="mw-page-title-main">Threat and error management</span> Safety management approach

In aviation safety, threat and error management (TEM) is an overarching safety management approach that assumes that pilots will naturally make mistakes and encounter risky situations during flight operations. Rather than try to avoid these threats and errors, its primary focus is on teaching pilots to manage these issues so they do not impair safety. Its goal is to maintain safety margins by training pilots and flight crews to detect and respond to events that are likely to cause damage (threats) as well as mistakes that are most likely to be made (errors) during flight operations.

Aviation accident analysis is performed to determine the cause of errors once an accident has happened. In the modern aviation industry, it is also used to analyze a database of past accidents in order to prevent future accidents. Many models have been used not only for accident investigation but also for educational purposes.

NOTECHS is a system used to assess the non-technical skills of crew members in the aviation industry. Introduced in the late 1990s, the system has been widely used by airlines during the crew selection process to pick out individuals who possess adequate skills that are not directly related to aircraft controls or systems. In aviation, 70 percent of all accidents are induced by pilot error, with lack of communication and poor decision making being two contributing factors. NOTECHS assesses and provides feedback on the performance of pilots' social and cognitive skills to help minimize pilot error and enhance safety in the future. The NOTECHS system also aims to improve the Crew Resource Management training system.

<span class="mw-page-title-main">SHELL model</span>

In aviation, the SHELL model is a conceptual model of human factors that helps to clarify the location and cause of human error within an aviation environment.

Investigation of diving accidents includes investigations into the causes of reportable incidents in professional diving and recreational diving accidents, usually when there is a fatality or litigation for gross negligence.

Dr. Richard I. Cook was a system safety researcher, physician, anesthesiologist, university professor, and software engineer. Cook did research in safety, incident analysis, cognitive systems engineering, and resilience engineering across a number of fields, including critical care medicine, aviation, air traffic control, space operations, semiconductor manufacturing, and software services.

References

  1. Chiu, Ming-Chuan; Hsieh, Min-Chih (29 November 2015). "Latent human error analysis and efficient improvement strategies by fuzzy TOPSIS in aviation maintenance tasks". Applied Ergonomics. 54: 136–147. doi:10.1016/j.apergo.2015.11.017. PMID 26851473.
  2. Defense Technical Information Center (1994-12-01). DTIC ADA492127: Behind Human Error: Cognitive Systems, Computers and Hindsight.