Human Factors Analysis and Classification System

The Human Factors Analysis and Classification System (HFACS) identifies the human causes of an accident and offers tools for analysis that can be used to plan preventive training. [1] It was developed by Dr. Scott Shappell of the Civil Aerospace Medical Institute and Dr. Doug Wiegmann of the University of Illinois at Urbana-Champaign in response to a trend showing that some form of human error was a primary causal factor in 80% of all flight accidents in the U.S. Navy and Marine Corps. [1]

HFACS is based on the "Swiss cheese" model of human error, [2] which looks at four levels of human failure: unsafe acts, preconditions for unsafe acts, unsafe supervision, and organizational influences. [1] It is a comprehensive human error framework that folds James Reason's ideas into an applied setting, defining 19 causal categories within the four levels of human failure. [3]

[Image: Swiss cheese model of accident causation, with additional labels]
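
To make the structure concrete, the following sketch encodes the HFACS taxonomy as a plain Python dictionary. The four levels are from the text above; the category names under each level follow commonly published versions of HFACS and may differ slightly between revisions of the framework.

# HFACS levels mapped to their causal categories. Category names follow
# common published versions of the framework; revisions differ slightly.
HFACS = {
    "Organizational Influences": [
        "Resource Management",
        "Organizational Climate",
        "Organizational Process",
    ],
    "Unsafe Supervision": [
        "Inadequate Supervision",
        "Planned Inappropriate Operations",
        "Failure to Correct a Known Problem",
        "Supervisory Violations",
    ],
    "Preconditions for Unsafe Acts": [
        "Physical Environment",
        "Technological Environment",
        "Adverse Mental States",
        "Adverse Physiological States",
        "Physical/Mental Limitations",
        "Crew Resource Management",
        "Personal Readiness",
    ],
    "Unsafe Acts": [
        "Skill-Based Errors",
        "Decision Errors",
        "Perceptual Errors",
        "Routine Violations",
        "Exceptional Violations",
    ],
}

def level_of(category: str) -> str:
    """Return the HFACS level that contains the given causal category."""
    for level, categories in HFACS.items():
        if category in categories:
            return level
    raise KeyError(f"Unknown HFACS category: {category}")

print(level_of("Decision Errors"))  # -> Unsafe Acts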


Related Research Articles

In science and engineering, root cause analysis (RCA) is a method of problem solving used for identifying the root causes of faults or problems. It is widely used in IT operations, manufacturing, telecommunications, industrial process control, accident analysis, medicine, and the healthcare industry. Root cause analysis is a form of inductive and deductive inference.
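
As an illustration of this retrospective, chain-following style of inference, the sketch below walks a small, invented cause map backwards from a symptom until it reaches events with no recorded antecedent. The event names and links are hypothetical examples, not a real investigation.

# A minimal sketch of root cause analysis as backward traversal of a
# cause map. The events and links below are hypothetical.
causes = {
    "engine flame-out": ["fuel starvation"],
    "fuel starvation": ["fuel gauge misread", "inadequate preflight check"],
    "fuel gauge misread": ["ambiguous gauge design"],
}

def root_causes(event: str) -> set[str]:
    """Follow 'why' links until only events with no recorded antecedent remain."""
    antecedents = causes.get(event, [])
    if not antecedents:  # nothing deeper recorded: a candidate root cause
        return {event}
    roots = set()
    for antecedent in antecedents:
        roots |= root_causes(antecedent)
    return roots

print(root_causes("engine flame-out"))
# -> {'ambiguous gauge design', 'inadequate preflight check'}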

In the field of human factors and ergonomics, human reliability is the probability that a human performs a task to a sufficient standard. Reliability of humans can be affected by many factors such as age, physical health, mental state, attitude, emotions, personal propensity for certain mistakes, and cognitive biases.

Reliability engineering is a sub-discipline of systems engineering that emphasizes the ability of equipment to function without failure. Reliability describes the ability of a system or component to function under stated conditions for a specified period of time. Reliability is closely related to availability, which is typically described as the ability of a component or system to function at a specified moment or interval of time.
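
To make the distinction concrete, the sketch below computes both quantities under textbook assumptions: reliability from an exponential (constant failure rate) time-to-failure model, and steady-state availability as MTBF / (MTBF + MTTR). The numbers are illustrative only.

import math

def reliability(failure_rate: float, hours: float) -> float:
    """Probability of surviving `hours` without failure, assuming a
    constant failure rate (exponential time-to-failure model)."""
    return math.exp(-failure_rate * hours)

def availability(mtbf: float, mttr: float) -> float:
    """Steady-state availability: the long-run fraction of time the
    system is functional, from mean time between failures (MTBF) and
    mean time to repair (MTTR)."""
    return mtbf / (mtbf + mttr)

# Illustrative numbers: one failure per 10,000 hours, 8-hour repairs.
print(f"R(1000 h) = {reliability(1e-4, 1000.0):.3f}")    # ~0.905
print(f"A         = {availability(10_000.0, 8.0):.5f}")  # ~0.99920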

Pilot error: decision, action or inaction by a pilot of an aircraft

Pilot error generally refers to an accident in which an action or decision made by the pilot was the cause of, or a contributing factor to, the accident; it also includes the pilot's failure to make a correct decision or take proper action. Errors are intentional actions that fail to achieve their intended outcomes. The Chicago Convention defines the term "accident" as "an occurrence associated with the operation of an aircraft [...] in which [...] a person is fatally or seriously injured [...] except when the injuries are [...] inflicted by other persons." Hence the definition of "pilot error" does not include deliberate crashing.

Safety culture: attitudes, beliefs, perceptions and values that employees share in relation to risks in the workplace

Safety culture is the collection of the beliefs, perceptions and values that employees share in relation to risks within an organization, such as a workplace or community. Safety culture is a part of organizational culture and has been described in a variety of ways; notably, the National Academies of Science and the Association of Land Grant and Public Universities published summaries on the topic in 2014 and 2016, respectively.

Human error is an action that has been done but that was "not intended by the actor; not desired by a set of rules or an external observer; or that led the task or system outside its acceptable limits". Human error has been cited as a primary cause of, or contributing factor in, disasters and accidents in industries as diverse as nuclear power, aviation, space exploration, and medicine. Prevention of human error is generally seen as a major contributor to the reliability and safety of (complex) systems. Human error is one of the many contributing causes of risk events.

Accident analysis: process to determine the causes of accidents to prevent recurrence

Accident analysis is a process carried out in order to determine the cause or causes of an accident so as to prevent further accidents of a similar kind. It is part of accident investigation or incident investigation. These analyses may be performed by a range of experts, including forensic scientists, forensic engineers or health and safety advisers. Accident investigators, particularly those in the aircraft industry, are colloquially known as "tin-kickers". Health and safety and patient safety professionals prefer the term "incident" in place of "accident". Its retrospective nature means that accident analysis is primarily an exercise of directed explanation, conducted using the theories or methods the analyst has to hand, which direct the way in which the events, aspects, or features of the accident are highlighted and explained. These analyses are also invaluable in preventing future incidents: by determining root causes, they provide insight into the failures that led to the incident.

Swiss cheese model: model used in risk analysis

The Swiss cheese model of accident causation is a model used in risk analysis and risk management, including aviation safety, engineering, healthcare, emergency service organizations, and as the principle behind layered security, as used in computer security and defense in depth. It likens human systems to multiple slices of Swiss cheese stacked side by side, each slice having randomly placed and sized holes, in which the risk of a threat becoming a reality is mitigated by the differing layers and types of defenses "layered" behind each other. In theory, lapses and weaknesses in one defense do not allow a risk to materialize, since other defenses exist to prevent a single point of failure. The model was originally formally propounded by James T. Reason of the University of Manchester and has since gained widespread acceptance. It is sometimes called the "cumulative act effect".
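
One way to see why layering prevents a single point of failure: if each layer fails independently, the probability that a hazard penetrates every layer is the product of the per-layer failure probabilities. The sketch below uses invented numbers; real defenses are rarely fully independent, which is precisely the weakness the model warns about.

from math import prod

def penetration_probability(hole_probabilities: list[float]) -> float:
    """Probability a hazard passes through every defensive layer,
    assuming the layers fail independently of one another."""
    return prod(hole_probabilities)

# Four mediocre layers, each failing 10% of the time, still combine to
# block the hazard about 99.99% of the time -- if they are independent.
print(penetration_probability([0.1, 0.1, 0.1, 0.1]))  # ~0.0001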

The system safety concept calls for a risk management strategy based on the identification and analysis of hazards and the application of remedial controls using a systems-based approach. This differs from traditional safety strategies, which rely on controlling the conditions and causes of an accident based either on epidemiological analysis or on investigation of individual past accidents. The concept of system safety is useful in demonstrating the adequacy of technologies when difficulties are faced with probabilistic risk analysis. The underlying principle is one of synergy: a whole is more than the sum of its parts. A systems-based approach to safety requires the application of scientific, technical and managerial skills to hazard identification, hazard analysis, and the elimination, control, or management of hazards throughout the life-cycle of a system, program, project, activity or product. "Hazop" is one of several techniques available for the identification of hazards.

The Fire Fighter Near Miss Reporting System was launched on August 12, 2005 by the International Association of Fire Chiefs. It was announced at a press conference in Denver, Colorado, after completing a pilot program involving 38 fire departments across the country. The Near Miss Reporting System aims to prevent injuries and save the lives of firefighters by collecting, sharing and analyzing near-miss experiences. Near-miss experiences are voluntarily submitted by firefighters; the reports are confidential, non-punitive, and secure. After the reports are compiled, they are posted to the website, where firefighters can access them and learn from each other's real-life experiences. Overall these reports help to formulate strategies, reduce firefighter injuries and fatalities, and enhance the safety culture of the fire service. The program is based on the Aviation Safety Reporting System (ASRS), which has been gathering reports of close calls from pilots, flight attendants, and air traffic controllers since 1976. The reporting system is funded by the International Association of Fire Chiefs.

Single-pilot resource management (SRM) is defined as the art and science of managing all the resources available to a single pilot to ensure that the successful outcome of the flight is never in doubt. SRM includes the concepts of Aeronautical Decision Making (ADM), Risk Management (RM), Task Management (TM), Automation Management (AM), Controlled Flight Into Terrain (CFIT) Awareness, and Situational Awareness (SA). SRM training helps the pilot maintain situational awareness by managing the automation and the associated aircraft control and navigation tasks. This enables the pilot to assess and manage risk accurately and to make accurate and timely decisions.

A Technique for Human Event Analysis (ATHEANA) is a technique used in the field of human reliability assessment (HRA). The purpose of ATHEANA is to evaluate the probability of human error while performing a specific task. From such analyses, preventative measures can then be taken to reduce human errors within a system and therefore lead to improvements in the overall level of safety.
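
The central quantity in HRA techniques such as ATHEANA is a human error probability conditioned on context. The sketch below illustrates only the generic idea, a nominal error probability scaled by multipliers for error-forcing conditions; the simple product form and the multiplier values are illustrative assumptions, not ATHEANA's actual quantification procedure.

def human_error_probability(nominal_hep: float,
                            condition_multipliers: dict[str, float]) -> float:
    """Scale a nominal human error probability by multipliers for the
    error-forcing conditions present, capping the result at 1.0.
    A schematic illustration, not a published HRA procedure."""
    hep = nominal_hep
    for condition, multiplier in condition_multipliers.items():
        hep *= multiplier
    return min(hep, 1.0)

# Hypothetical task: nominal HEP of 0.001, made worse by time pressure
# and a misleading display.
print(human_error_probability(0.001, {"time pressure": 5.0,
                                      "misleading indication": 10.0}))
# -> 0.05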

The healthcare error proliferation model is an adaptation of James Reason’s Swiss Cheese Model designed to illustrate the complexity inherent in the contemporary healthcare delivery system and the attribution of human error within these systems. The healthcare error proliferation model explains the etiology of error and the sequence of events typically leading to adverse outcomes. The model emphasizes the role organizational and external cultures play in error identification, prevention, mitigation, and defense construction.

Accident classification is a standardized method in accident analysis by which the causes of an accident, including the root causes, are grouped into categories. Accident classification is mainly used in aviation but can be expanded into other areas, such as railroad or health care. While accident reports are very detailed, the goal of accident classification is to look at a broader picture. By analysing a multitude of accidents and applying the same standardized classification scheme, patterns in how accidents develop can be detected and correlations can be built. The advantage of a standardized accident classification is that statistical methods can be used to gain more insight into accident causation.
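
The statistical payoff of standardization is easy to demonstrate: once every accident record carries the same category labels, pattern detection reduces to counting and cross-tabulation. A minimal sketch, with invented records:

from collections import Counter

# Hypothetical accident records, each tagged with standardized categories.
records = [
    {"id": "A1", "categories": ["Skill-Based Errors", "Inadequate Supervision"]},
    {"id": "A2", "categories": ["Decision Errors"]},
    {"id": "A3", "categories": ["Skill-Based Errors", "Adverse Mental States"]},
    {"id": "A4", "categories": ["Skill-Based Errors"]},
]

# Count how often each standardized category appears across accidents.
frequencies = Counter(
    category for record in records for category in record["categories"]
)
for category, count in frequencies.most_common():
    print(f"{category}: {count}")
# Skill-Based Errors: 3, then the remaining categories once each.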

The term use error has recently been introduced to replace the commonly used terms human error and user error. The new term, which has already been adopted by international standards organizations for medical devices, suggests that accidents should be attributed to the circumstances, rather than to the human beings who happened to be there.

Human factors are the physical or cognitive properties of individuals, or social behavior specific to humans, that influence the functioning of technological systems as well as human-environment equilibria. The safety of underwater diving operations can be improved by reducing the frequency of human error and the consequences when it does occur. Human error can be defined as an individual's deviation from acceptable or desirable practice which culminates in undesirable or unexpected results.

Dive safety is primarily a function of four factors: the environment, equipment, individual diver performance and dive team performance. The water is a harsh and alien environment which can impose severe physical and psychological stress on a diver. The remaining factors must be controlled and coordinated so the diver can overcome the stresses imposed by the underwater environment and work safely. Diving equipment is crucial because it provides life support to the diver, but the majority of dive accidents are caused by individual diver panic and an associated degradation of the individual diver's performance. - M.A. Blumenberg, 1996

Astronaut organization in spaceflight missions

Selection, training, cohesion and psychosocial adaptation influence performance and, as such, are relevant factors to consider while preparing for costly, long-duration spaceflight missions in which the performance objectives will be demanding, endurance will be tested and success will be critical.

The AcciMap approach is a systems-based technique for accident analysis, specifically for analysing the causes of accidents and incidents that occur in complex sociotechnical systems.

Aviation accident analysis is performed to determine the cause of errors once an accident has happened. In the modern aviation industry, it is also used to analyze databases of past accidents in order to prevent future accidents. Many models are used not only for accident investigation but also for educational purposes.

SHELL model

In aviation, the SHELL model is a conceptual model of human factors that helps to clarify the location and cause of human error within an aviation environment.
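
The SHELL components are conventionally taken to be Software, Hardware, Environment and Liveware, with the human (Liveware) at the centre and a second Liveware standing for other people; error tends to arise at the interfaces between the central human and each other component. A minimal sketch of that interface view; the example mismatches are invented:

# The SHELL model places Liveware (the human) at the centre and examines
# its interface with each other component. Example mismatches are invented.
SHELL_INTERFACES = {
    ("Liveware", "Software"):    "confusing checklist wording",
    ("Liveware", "Hardware"):    "control lever placed out of easy reach",
    ("Liveware", "Environment"): "cockpit noise masking an aural warning",
    ("Liveware", "Liveware"):    "ambiguous handover between crew members",
}

for (centre, component), example in SHELL_INTERFACES.items():
    print(f"{centre}-{component} interface, e.g. {example}")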

References

  1. "The Human Factors Analysis and Classification System (HFACS)", Approach, July–August 2004. Accessed July 12, 2007. Archived February 8, 2007, at the Wayback Machine.
  2. Reason, J. (1990). Human Error. Cambridge University Press.
  3. HFACS Analysis of Military and Civilian Aviation Accidents: A North American Comparison. ISASI, 2004.