AcciMap approach

The AcciMap approach is a systems-based technique for accident analysis, specifically for analysing the causes of accidents and incidents that occur in complex sociotechnical systems.

The approach was originally developed by Jens Rasmussen [1] as part of a proactive risk management strategy, but its primary application has been as an accident analysis tool.

Overview

The approach is not domain-specific and has been used to analyse accidents in a range of industries including aviation, [2] [3] defence, [4] oil and gas, [5] public health, [6] [7] policing, [8] and rail transport. [9] The method is used to analyse the contributing factors of accidents at all levels of the system, and can also be utilised to formulate safety recommendations.

Features

The AcciMap approach is useful for uncovering how factors in the various parts of the system contributed to an accident, and for arranging those factors into a logical causal diagram that illustrates how they combined to result in that event. [10] The method also promotes a systemic view of accident causation, as the AcciMap diagram extends well beyond the most immediate causes of the event to reveal the full range of higher-level factors that contributed to the outcome (or failed to prevent it from occurring). It therefore assists analysts in understanding how and why an accident took place, and prevents attention from focusing disproportionately on the immediate causes (such as errors made by front-line workers), because the factors that provoked or permitted those errors are also revealed. The approach therefore helps to avoid blaming front-line individuals for accidents and leaving the factors that contributed to their behaviour unaddressed. [3] In extending to consider contributing factors at governmental, regulatory and societal levels, the approach also has the capacity to capture and address high-level contributing factors that are typically excluded from accident analyses developed using other methods.

Structure

The AcciMap approach involves the construction of a multi-layered causal diagram in which the various causes of an accident are arranged according to their causal remoteness from the outcome (depicted at the bottom of the diagram). The most immediate causes are shown in the lower sections of the diagram, with more remote causes shown at progressively higher levels, so that the full range of factors that contributed to the event are modelled. [3]

The precise format of the diagram varies depending on the purpose of analysis, but the lower levels typically represent the immediate precursors to the event, relating to the activities of workers and to physical events, processes and conditions that contributed to the outcome. The next highest levels typically represent company and organisational-level factors. The highest levels generally incorporate governmental or societal-level causal factors, which are external to the organisation(s) involved in the event. Compiling the multiple contributing factors and their interrelationships into a single logical diagram in this way helps analysts understand how and why the event took place and pinpoints problem areas that can be addressed to improve system safety. [11]
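
The layered diagram described above can be sketched as a simple data structure. The following is a minimal illustration only, with invented level names, factors and causal links rather than any real analysis:

```python
# Levels ordered from most remote (top) to most immediate (bottom),
# with the outcome at the bottom, as in an AcciMap diagram.
LEVELS = [
    "Government / society",
    "Regulatory bodies",
    "Company / organisation",
    "Physical processes and actor activities",
    "Outcome",
]

# Each contributing factor is assigned to a level (all names invented).
factors = {
    "inadequate oversight": "Government / society",
    "outdated safety standard": "Regulatory bodies",
    "production pressure": "Company / organisation",
    "procedure skipped": "Physical processes and actor activities",
    "accident": "Outcome",
}

# Causal links run from more remote factors towards the outcome.
links = [
    ("inadequate oversight", "outdated safety standard"),
    ("outdated safety standard", "production pressure"),
    ("production pressure", "procedure skipped"),
    ("procedure skipped", "accident"),
]

def factors_by_level():
    """Group the contributing factors by level, top to bottom."""
    grouped = {level: [] for level in LEVELS}
    for factor, level in factors.items():
        grouped[level].append(factor)
    return grouped

for level, fs in factors_by_level().items():
    print(f"{level}: {', '.join(fs)}")
```

A real analysis would attach supporting evidence to each factor and link; the point here is only that an AcciMap is a levelled set of factors plus a set of directed causal relationships between them.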

Related Research Articles

Safety engineering

Safety engineering is an engineering discipline which assures that engineered systems provide acceptable levels of safety. It is strongly related to industrial engineering/systems engineering, and the subset system safety engineering. Safety engineering assures that a life-critical system behaves as needed, even when components fail.

Fault tree analysis

Fault tree analysis (FTA) is a top-down, deductive failure analysis in which an undesired state of a system is analyzed using Boolean logic to combine a series of lower-level events. This analysis method is mainly used in the fields of safety engineering and reliability engineering to understand how systems can fail, to identify the best ways to reduce risk or to determine event rates of a safety accident or a particular system level (functional) failure. FTA is used in the aerospace, nuclear power, chemical and process, pharmaceutical, petrochemical and other high-hazard industries; but is also used in fields as diverse as risk factor identification relating to social service system failure. FTA is also used in software engineering for debugging purposes and is closely related to cause-elimination technique used to detect bugs.
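
The Boolean combination of lower-level events can be shown with a toy fault-tree evaluation. The probabilities and the pump/power scenario below are invented for illustration, and basic events are assumed independent:

```python
def and_gate(*probs):
    """AND gate: all inputs must fail; probabilities multiply under independence."""
    p = 1.0
    for q in probs:
        p *= q
    return p

def or_gate(*probs):
    """OR gate: any input failing triggers the gate (1 minus product of survivals)."""
    p = 1.0
    for q in probs:
        p *= (1.0 - q)
    return 1.0 - p

# Hypothetical top event: coolant flow is lost if power fails OR both
# redundant pumps fail.
p_power_loss = 1e-3
p_pump_a = 1e-2
p_pump_b = 1e-2

p_top = or_gate(p_power_loss, and_gate(p_pump_a, p_pump_b))
print(f"P(top event) = {p_top:.7f}")
```

This reproduces the usual qualitative lesson of FTA: the redundant pair contributes far less to the top-event probability than the single-point power supply.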

Sociotechnical systems (STS) in organizational development is an approach to complex organizational work design that recognizes the interaction between people and technology in workplaces. The term also refers to the interaction between society's complex infrastructures and human behaviour. In this sense, society itself, and most of its substructures, are complex sociotechnical systems. The term sociotechnical systems was coined by Eric Trist, Ken Bamforth and Fred Emery, in the World War II era, based on their work with workers in English coal mines at the Tavistock Institute in London.

In science and engineering, root cause analysis (RCA) is a method of problem solving used for identifying the root causes of faults or problems. It is widely used in IT operations, telecommunications, industrial process control, accident analysis, medicine, healthcare industry, etc.

Failure mode and effects analysis is the process of reviewing as many components, assemblies, and subsystems as possible to identify potential failure modes in a system and their causes and effects. For each component, the failure modes and their resulting effects on the rest of the system are recorded in a specific FMEA worksheet. There are numerous variations of such worksheets. An FMEA can be a qualitative analysis, but may be put on a quantitative basis when mathematical failure rate models are combined with a statistical failure mode ratio database. It was one of the first highly structured, systematic techniques for failure analysis. It was developed by reliability engineers in the late 1950s to study problems that might arise from malfunctions of military systems. An FMEA is often the first step of a system reliability study.
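
A common quantitative variant of the FMEA worksheet ranks failure modes by a risk priority number (RPN), the product of severity, occurrence and detection ratings. The failure modes and 1–10 ratings below are invented for illustration:

```python
# Hypothetical worksheet rows: (mode, severity, occurrence, detection).
failure_modes = [
    ("seal leak",         7, 4, 3),
    ("sensor drift",      5, 6, 7),
    ("connector fatigue", 8, 2, 5),
]

def rpn(severity, occurrence, detection):
    """Risk priority number = severity x occurrence x detection."""
    return severity * occurrence * detection

# Rank the modes so the highest-priority failure mode is addressed first.
ranked = sorted(failure_modes, key=lambda m: rpn(*m[1:]), reverse=True)
for mode, s, o, d in ranked:
    print(f"{mode}: RPN = {rpn(s, o, d)}")
```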

Reliability engineering is a sub-discipline of systems engineering that emphasizes dependability in the lifecycle management of a product. Reliability describes the ability of a system or component to function under stated conditions for a specified period of time. Reliability is closely related to availability, which is typically described as the ability of a component or system to function at a specified moment or interval of time.

Probabilistic risk assessment (PRA) is a systematic and comprehensive methodology to evaluate risks associated with a complex engineered technological entity or, for example, the effects of stressors on the environment.

Safety culture

Safety culture is the collection of the beliefs, perceptions and values that employees share in relation to risks within an organization, such as a workplace or community. Safety culture is a part of organizational culture, and has been described in a variety of ways; notably, the National Academies of Sciences and the Association of Public and Land-grant Universities published summaries on this topic in 2014 and 2016.

A hazard analysis is used as the first step in a process used to assess risk. The result of a hazard analysis is the identification of the different types of hazards. A hazard is a potential condition that may or may not exist, and that, either alone or in combination with other hazards and conditions, may become an actual functional failure or accident (mishap). The particular sequence in which this happens is called a scenario, and each scenario has a probability of occurrence. A system often has many potential failure scenarios. Each scenario is also assigned a classification, based on the worst-case severity of the end condition. Risk is the combination of probability and severity. Preliminary risk levels can be provided in the hazard analysis. The validation, more precise prediction (verification) and acceptance of risk are determined in the risk assessment (analysis). The main goal of both is to provide the best selection of means of controlling or eliminating the risk. The term is used in several engineering specialties, including avionics, chemical process safety, safety engineering, reliability engineering and food safety.
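
The combination of probability and severity can be made concrete with a small risk-matrix sketch. The category labels, numeric scores and thresholds below are illustrative assumptions, not taken from any particular standard:

```python
# Ordinal scales for worst-case severity and likelihood (invented scores).
SEVERITY = {"negligible": 1, "marginal": 2, "critical": 3, "catastrophic": 4}
PROBABILITY = {"improbable": 1, "remote": 2, "occasional": 3, "frequent": 4}

def risk_level(severity, probability):
    """Combine worst-case severity with probability into a risk category."""
    score = SEVERITY[severity] * PROBABILITY[probability]
    if score >= 9:
        return "high"
    if score >= 4:
        return "medium"
    return "low"

print(risk_level("catastrophic", "remote"))   # score 4 * 2 = 8
print(risk_level("critical", "frequent"))     # score 3 * 4 = 12
```

Real standards define their own matrices and acceptance criteria; the point is only that each scenario's classification pairs a severity with a probability to yield a preliminary risk level.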

Human error refers to something having been done that was "not intended by the actor; not desired by a set of rules or an external observer; or that led the task or system outside its acceptable limits". Human error has been cited as a primary cause or contributing factor in disasters and accidents in industries as diverse as nuclear power, aviation, space exploration, and medicine. Prevention of human error is generally seen as a major contributor to reliability and safety of (complex) systems. Human error is one of the many contributing causes of risk events.

Accident analysis

Accident analysis is carried out in order to determine the cause or causes of an accident so as to prevent further accidents of a similar kind. It is part of accident investigation or incident investigation. These analyses may be performed by a range of experts, including forensic scientists, forensic engineers or health and safety advisers. Accident investigators, particularly those in the aircraft industry, are colloquially known as "tin-kickers". Health and safety and patient safety professionals prefer using the term "incident" in place of the term "accident". Its retrospective nature means that accident analysis is primarily an exercise of directed explanation, conducted using the theories or methods the analyst has to hand, which direct the way in which the events, aspects, or features of accident phenomena are highlighted and explained.

A preventive action is a change implemented to address a weakness in a management system that is not yet responsible for causing nonconforming product or service.

Why–because analysis (WBA) is a method for accident analysis. It is independent of application domain and has been used to analyse, among others, aviation-, railway-, marine-, and computer-related accidents and incidents. It is mainly used as an after-the-fact analysis method. WBA strives to ensure objectivity, falsifiability and reproducibility of results.

Swiss cheese model

The Swiss cheese model of accident causation is a model used in risk analysis and risk management, including aviation safety, engineering, healthcare, emergency service organizations, and as the principle behind layered security, as used in computer security and defense in depth. It likens human systems to multiple slices of Swiss cheese, stacked side by side, in which the risk of a threat becoming a reality is mitigated by the differing layers and types of defenses which are "layered" behind each other. Therefore, in theory, lapses and weaknesses in one defense do not allow a risk to materialize, since other defenses also exist, to prevent a single point of failure. The model was originally formally propounded by Dante Orlandella and James T. Reason of the University of Manchester, and has since gained widespread acceptance. It is sometimes called the "cumulative act effect".

The system safety concept calls for a risk management strategy based on identification, analysis of hazards and application of remedial controls using a systems-based approach. This is different from traditional safety strategies which rely on control of conditions and causes of an accident based either on epidemiological analysis or on investigation of individual past accidents. The concept of system safety is useful in demonstrating adequacy of technologies when difficulties are faced with probabilistic risk analysis. The underlying principle is one of synergy: the whole is more than the sum of its parts. A systems-based approach to safety requires the application of scientific, technical and managerial skills to hazard identification, hazard analysis, and elimination, control, or management of hazards throughout the life-cycle of a system, program, project, activity or product. "Hazop" is one of several techniques available for identification of hazards.

Influence Diagrams Approach (IDA) is a technique used in the field of human reliability assessment (HRA), for the purposes of evaluating the probability of a human error occurring throughout the completion of a specific task. From such analyses measures can then be taken to reduce the likelihood of errors occurring within a system and therefore lead to an improvement in the overall levels of safety. There exist three primary reasons for conducting an HRA: error identification, error quantification and error reduction. As there exist a number of techniques used for such purposes, they can be split into one of two classifications: first generation techniques and second generation techniques. First generation techniques work on the basis of the simple dichotomy of "fits/doesn't fit" in matching the error situation in context with related error identification and quantification, while second generation techniques are more theory based in their assessment and quantification of errors. HRA techniques have been utilised in a range of industries including healthcare, engineering, nuclear, transportation and the business sector; each technique has varying uses within different disciplines.

A Technique for Human Event Analysis (ATHEANA) is a technique used in the field of human reliability assessment (HRA). The purpose of ATHEANA is to evaluate the probability of human error while performing a specific task. From such analyses, preventative measures can then be taken to reduce human errors within a system and therefore lead to improvements in the overall level of safety.

Accident

An accident is an unplanned event that sometimes has inconvenient or undesirable consequences, other times being inconsequential. The term implies that such an event may not be preventable since its antecedent circumstances go unrecognized and unaddressed. Most scientists who study unintentional injury avoid using the term "accident" and focus on factors that increase risk of severe injury and that reduce injury incidence and severity.

Risk is the potential for uncontrolled loss of something of value. Values can be gained or lost when taking risk resulting from a given action or inaction, foreseen or unforeseen. Risk can also be defined as the intentional interaction with uncertainty. Uncertainty is a potential, unpredictable, and uncontrollable outcome; risk is an aspect of action taken in spite of uncertainty.

Aviation accident analysis is performed to determine the cause of errors once an accident has happened. In the modern aviation industry, it is also used to analyse a database of past accidents in order to prevent an accident from happening. Many models have been used not only for accident investigation but also for educational purposes.

References

  1. Rasmussen, Jens (1997). "Risk management in a dynamic society: A modelling problem". Safety Science. 27 (2–3): 183–213. doi:10.1016/S0925-7535(97)00052-0.
  2. Debrincat, J; Bil, C; Clark, G. (2013). "Assessing organisational factors in aircraft accidents using a hybrid Reason and AcciMap model" (PDF). Engineering Failure Analysis. 27: 52–60. doi:10.1016/j.engfailanal.2012.06.003.
  3. Branford, Kate (2011). "Seeing the Big Picture of Mishaps" (PDF). Aviation Psychology and Applied Human Factors. 1 (1): 31–37. doi:10.1027/2192-0923/a00005.
  4. Royal Australian Air Force (2001). Chemical exposure of air force maintenance workers: Report of the Board of Inquiry into F111 (Fuel Tank) Deseal/Reseal spray seal programs. Canberra, Australia: Royal Australian Air Force.
  5. Hopkins, Andrew (2000). Lessons from Longford: The Esso Gas Plant explosion. Sydney: CCH Australia.
  6. Piche, A.C.; Vicente, K.J. (2005). "A sociotechnical systems analysis of the Toronto SARS outbreak". Proceedings of the Human Factors and Ergonomics Society 49th Annual Meeting. 49 (3): 507–511. doi:10.1177/154193120504900362.
  7. Woo, D.M.; Vicente, K.J. (2003). "Sociotechnical systems: Comparing the North Battleford and Walkerton outbreaks". Reliability Engineering & System Safety. 80 (3): 253–269. doi:10.1016/S0951-8320(03)00052-8.
  8. Jenkins, D.P.; Salmon, P.M.; Stanton, N.A.; Walker, G.H. (2010). "A systemic approach to accident analysis: A case study of the Stockwell shooting". Ergonomics. 53: 1–17. doi:10.1080/00140130903311625.
  9. Hopkins, Andrew (2005). Safety, culture and risk: The organisational causes of disasters. Sydney: CCH Australia.
  10. Branford, K.; Naikar, N.; Hopkins, A. (2011). "Guidelines for AcciMap analysis". In A. Hopkins (Ed.), Learning from High Reliability Organisations. pp. 193–212.
  11. Branford, Kate (2007). An investigation into the validity and reliability of the AcciMap approach (Doctoral Dissertation). Canberra, Australia: Australian National University.