5M model

This diagram illustrates the nested/interlocking domains or factors that make up the 5M model used for troubleshooting and risk assessment, especially in traffic industries. Man, Machine, and Medium form three interlocking circles, with Mission at the intersection, and the space surrounding them representing the prevailing Management approach.

The 5M model is a troubleshooting and risk-management model used for aviation safety. [1] [2]


Original labels

Based on T.P. Wright's original work on the man-machine-environment triad [3] at Cornell University, the 5M model incorporates a diagram of three interlocking circles and one all-encompassing circle. The smaller circles are labeled Man, Machine, and Medium; the intersecting space in the middle, where all three meet, is labeled Mission; and the larger, all-encompassing circle is labeled Management.
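
As an illustration only (not part of the original model description), the five domains can be treated as categories for sorting the contributing factors uncovered during an investigation. The factor descriptions below are hypothetical examples.

```python
# Hypothetical sketch: sorting contributing factors of an incident into the
# five 5M domains. The factor strings are invented examples, not source data.
from collections import defaultdict

FIVE_M = ("Man", "Machine", "Medium", "Mission", "Management")

def classify(factors):
    """Group (domain, description) pairs by 5M domain."""
    grouped = defaultdict(list)
    for domain, description in factors:
        if domain not in FIVE_M:
            raise ValueError(f"unknown 5M domain: {domain}")
        grouped[domain].append(description)
    return grouped

if __name__ == "__main__":
    observations = [
        ("Man", "crew fatigue after long duty period"),
        ("Machine", "intermittent altimeter fault"),
        ("Medium", "night approach in heavy rain"),
        ("Mission", "tight turnaround schedule"),
        ("Management", "no fatigue-risk policy in place"),
    ]
    for domain, items in classify(observations).items():
        print(domain, "->", items)
```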

Expansion

Some practitioners have expanded the original five factors with an additional three, and the result is referred to as the 8 Ms. [4]

Other uses

The model is also used in more general troubleshooting and root-cause analysis, for example alongside the Ishikawa diagram. [6]

Related Research Articles

Systems engineering

Systems engineering is an interdisciplinary field of engineering and engineering management that focuses on how to design, integrate, and manage complex systems over their life cycles. At its core, systems engineering utilizes systems thinking principles to organize this body of knowledge. The individual outcome of such efforts, an engineered system, can be defined as a combination of components that work in synergy to collectively perform a useful function.

Safety engineering

Safety engineering is an engineering discipline which assures that engineered systems provide acceptable levels of safety. It is strongly related to industrial engineering/systems engineering, and the subset system safety engineering. Safety engineering assures that a life-critical system behaves as needed, even when components fail.

Ishikawa diagram

Ishikawa diagrams are causal diagrams created by Kaoru Ishikawa that show the potential causes of a specific event.

Fault tree analysis

Fault tree analysis (FTA) is a type of failure analysis in which an undesired state of a system is examined. This analysis method is mainly used in safety engineering and reliability engineering to understand how systems can fail, to identify the best ways to reduce risk and to determine event rates of a safety accident or a particular system level (functional) failure. FTA is used in the aerospace, nuclear power, chemical and process, pharmaceutical, petrochemical and other high-hazard industries; but is also used in fields as diverse as risk factor identification relating to social service system failure. FTA is also used in software engineering for debugging purposes and is closely related to cause-elimination technique used to detect bugs.
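
As a rough illustration of the Boolean logic involved (a simplified sketch that assumes independent basic events and uses made-up event names and probabilities), a small fault tree can be evaluated like this:

```python
# Minimal fault-tree sketch: AND/OR gates over independent basic events.
# Probabilities and event names are illustrative, not from any real analysis.

def p_and(*probs):
    """Probability that all inputs fail (independent events)."""
    result = 1.0
    for p in probs:
        result *= p
    return result

def p_or(*probs):
    """Probability that at least one input fails (independent events)."""
    result = 1.0
    for p in probs:
        result *= (1.0 - p)
    return 1.0 - result

# Top event: "loss of braking" = (hydraulic failure OR control fault)
#            AND (backup system failure)
p_hydraulic = 1e-4
p_control = 5e-5
p_backup = 1e-3

p_top = p_and(p_or(p_hydraulic, p_control), p_backup)
print(f"top event probability = {p_top:.2e}")
```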

Safety-critical system

A safety-critical system or life-critical system is a system whose failure or malfunction may result in death or serious injury to people, loss or severe damage to equipment or property, or environmental harm.

In science and engineering, root cause analysis (RCA) is a method of problem solving used for identifying the root causes of faults or problems. It is widely used in IT operations, manufacturing, telecommunications, industrial process control, accident analysis, medicine, healthcare industry, etc. Root cause analysis is a form of inductive and deductive inference.

Failure mode and effects analysis is the process of reviewing as many components, assemblies, and subsystems as possible to identify potential failure modes in a system and their causes and effects. For each component, the failure modes and their resulting effects on the rest of the system are recorded in a specific FMEA worksheet. There are numerous variations of such worksheets. An FMEA can be a qualitative analysis, but may be put on a quantitative basis when mathematical failure rate models are combined with a statistical failure mode ratio database. It was one of the first highly structured, systematic techniques for failure analysis. It was developed by reliability engineers in the late 1950s to study problems that might arise from malfunctions of military systems. An FMEA is often the first step of a system reliability study.
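
One common way of putting an FMEA worksheet on a quantitative footing is a risk priority number (RPN), the product of severity, occurrence, and detection ratings. The sketch below is illustrative only: the 1-10 scales and the failure-mode entries are invented.

```python
# Hypothetical FMEA worksheet sketch: rank failure modes by risk priority
# number (RPN = severity x occurrence x detection). All entries are invented.

failure_modes = [
    # (component, failure mode, severity, occurrence, detection) on 1-10 scales
    ("pump", "seal leak", 7, 4, 3),
    ("valve", "stuck closed", 9, 2, 5),
    ("sensor", "drift out of calibration", 5, 6, 7),
]

rows = [
    (component, mode, sev * occ * det)
    for component, mode, sev, occ, det in failure_modes
]

# Highest RPN first: these failure modes get attention first
for component, mode, rpn in sorted(rows, key=lambda r: r[2], reverse=True):
    print(f"{component:8s} {mode:28s} RPN={rpn}")
```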

Process Safety Management of Highly Hazardous Chemicals is a regulation promulgated by the U.S. Occupational Safety and Health Administration (OSHA). It defines and regulates a process safety management (PSM) program for plants using, storing, manufacturing, handling or carrying out on-site movement of hazardous materials above defined threshold quantities. Companies affected by the regulation usually build a compliant process safety management system and integrate it into their safety management system. Non-U.S. companies frequently choose, on a voluntary basis, to use the OSHA scheme in their business.

Safety engineers focus on the development and maintenance of the integrated management system and act as quality assurance and conformance specialists.

Human reliability is related to the field of human factors and ergonomics, and refers to the reliability of humans in fields including manufacturing, medicine and nuclear power. Human performance can be affected by many factors such as age, state of mind, physical health, attitude, emotions, propensity for certain common mistakes, errors and cognitive biases, etc.

Reliability engineering is a sub-discipline of systems engineering that emphasizes the ability of equipment to function without failure. Reliability describes the ability of a system or component to function under stated conditions for a specified period of time. Reliability is closely related to availability, which is typically described as the ability of a component or system to function at a specified moment or interval of time.
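
For a component with a constant failure rate, these two measures are often expressed with the textbook formulas R(t) = e^(-lambda*t) and steady-state availability A = MTBF / (MTBF + MTTR). The sketch below simply evaluates those formulas with made-up numbers.

```python
# Textbook reliability/availability sketch with made-up numbers.
import math

failure_rate = 1e-4    # failures per hour (constant-rate assumption)
mission_time = 1000.0  # hours

# Reliability: probability of operating without failure for mission_time
reliability = math.exp(-failure_rate * mission_time)

mtbf = 1.0 / failure_rate  # mean time between failures, hours
mttr = 8.0                 # mean time to repair, hours (assumed)

# Steady-state availability: long-run fraction of time the system is up
availability = mtbf / (mtbf + mttr)

print(f"R({mission_time:.0f} h) = {reliability:.3f}")
print(f"Availability = {availability:.5f}")
```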

A hazard analysis is used as the first step in a process used to assess risk. The result of a hazard analysis is the identification of different types of hazards. A hazard is a potential condition; it either exists or it does not. It may, on its own or in combination with other hazards and conditions, become an actual functional failure or accident (mishap). The exact way this happens in one particular sequence is called a scenario, and each scenario has a probability of occurrence. A system often has many potential failure scenarios. Each hazard is also assigned a classification based on the worst-case severity of the end condition. Risk is the combination of probability and severity. Preliminary risk levels can be provided in the hazard analysis; the validation, more precise prediction (verification), and acceptance of risk are determined in the risk assessment (analysis). The main goal of both is to provide the best selection of means of controlling or eliminating the risk. The term is used in several engineering specialties, including avionics, food safety, occupational safety and health, process safety, and reliability engineering.
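
Because risk is described here as the combination of probability and severity, a preliminary hazard worksheet is often organized as a risk matrix. The sketch below is a hypothetical example: the category scales, cut-off scores, and hazard entries are invented.

```python
# Hypothetical risk-matrix sketch: combine severity and likelihood categories
# into a preliminary risk level. Hazards and cut-off scores are invented.

SEVERITY = {"negligible": 1, "marginal": 2, "critical": 3, "catastrophic": 4}
LIKELIHOOD = {"improbable": 1, "remote": 2, "occasional": 3, "frequent": 4}

def risk_level(severity, likelihood):
    """Map a (severity, likelihood) pair to a preliminary risk level."""
    score = SEVERITY[severity] * LIKELIHOOD[likelihood]
    if score >= 12:
        return "high"
    if score >= 6:
        return "medium"
    return "low"

hazards = [
    ("fuel leak near ignition source", "catastrophic", "remote"),
    ("slippery walkway", "marginal", "frequent"),
]

for name, sev, like in hazards:
    print(f"{name}: {risk_level(sev, like)}")
```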

ARP4761

ARP4761, Guidelines and Methods for Conducting the Safety Assessment Process on Civil Airborne Systems and Equipment is an Aerospace Recommended Practice from SAE International. In conjunction with ARP4754, ARP4761 is used to demonstrate compliance with 14 CFR 25.1309 in the U.S. Federal Aviation Administration (FAA) airworthiness regulations for transport category aircraft, and also harmonized international airworthiness regulations such as European Aviation Safety Agency (EASA) CS–25.1309.

Reliability-centered maintenance (RCM) is a concept of maintenance planning intended to ensure that systems continue to do what their users require in their present operating context. Successful implementation of RCM leads to increased cost-effectiveness, reliability, machine uptime, and a greater understanding of the level of risk that the organization is managing.

A system accident is an "unanticipated interaction of multiple failures" in a complex system. This complexity can either be of technology or of human organizations and is frequently both. A system accident can be easy to see in hindsight, but extremely difficult in foresight because there are simply too many action pathways to seriously consider all of them. Charles Perrow first developed these ideas in the mid-1980s. Safety systems themselves are sometimes the added complexity which leads to this type of accident.

Swiss cheese model

The Swiss cheese model of accident causation is a model used in risk analysis and risk management, including aviation safety, engineering, healthcare, emergency service organizations, and as the principle behind layered security, as used in computer security and defense in depth. It likens human systems to multiple slices of Swiss cheese, which has randomly placed and sized holes in each slice, stacked side by side, in which the risk of a threat becoming a reality is mitigated by the differing layers and types of defenses which are "layered" behind each other. Therefore, in theory, lapses and weaknesses in one defense do not allow a risk to materialize, since other defenses also exist, to prevent a single point of failure. The model was originally formally propounded by James T. Reason of the University of Manchester, and has since gained widespread acceptance. It is sometimes called the "cumulative act effect".
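
Read probabilistically (a simplification the model itself does not require), a threat only becomes a loss when the "holes" in every defensive layer line up; if the layers are treated as independent, that is the product of the individual layer-failure probabilities. The layer names and numbers below are illustrative only.

```python
# Illustrative Swiss-cheese calculation: an accident requires every defensive
# layer to fail at once. Layer failure probabilities are invented, and the
# layers are assumed independent, which real systems rarely guarantee.

layer_failure_probs = {
    "procedural checklist": 0.05,
    "crew cross-check": 0.10,
    "warning system": 0.02,
    "physical interlock": 0.01,
}

p_accident = 1.0
for p in layer_failure_probs.values():
    p_accident *= p

print(f"P(all layers fail) = {p_accident:.2e}")
```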

Process safety is an interdisciplinary engineering domain focusing on the study, prevention, and management of large-scale fires, explosions and chemical accidents in process plants or other facilities dealing with hazardous materials, such as refineries and oil and gas production installations. Thus, process safety is generally concerned with the prevention of, control of, mitigation of and recovery from unintentional hazardous materials releases that can have a serious effect to people, plant and/or the environment.

An event tree is an inductive analytical diagram in which an event is analyzed using Boolean logic to examine a chronological series of subsequent events or consequences. For example, event tree analysis is a major component of nuclear reactor safety engineering.
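
As a hedged sketch of that branching logic (with a made-up initiating event and branch probabilities), each outcome path's probability is the product of the initiating-event frequency and the branch probabilities along the path:

```python
# Minimal event-tree sketch: enumerate outcome paths from an initiating event.
# Event names and probabilities are invented for illustration.

p_initiating = 1e-3  # frequency of the initiating event
branches = [
    ("alarm sounds", 0.95),
    ("operator responds", 0.90),
]

def enumerate_paths(branches):
    """Yield (sequence of outcomes, probability) for every branch combination."""
    paths = [([], 1.0)]
    for name, p_success in branches:
        new_paths = []
        for outcome, prob in paths:
            new_paths.append((outcome + [f"{name}: yes"], prob * p_success))
            new_paths.append((outcome + [f"{name}: no"], prob * (1 - p_success)))
        paths = new_paths
    return paths

for outcome, prob in enumerate_paths(branches):
    print(" -> ".join(outcome), f"P = {p_initiating * prob:.2e}")
```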

Accident

An accident is an unintended, normally unwanted event that was not directly caused by humans. The term accident implies that nobody should be blamed, but the event may have been caused by unrecognized or unaddressed risks. Most researchers who study unintentional injury avoid using the term accident and focus on factors that increase risk of severe injury and that reduce injury incidence and severity. For example, when a tree falls down during a wind storm, its fall may not have been caused by humans, but the tree's type, size, health, location, or improper maintenance may have contributed to the result. Most car wrecks are not true accidents; however, English speakers started using that word in the mid-20th century as a result of media manipulation by the US automobile industry.

The AcciMap approach is a systems-based technique for accident analysis, specifically for analysing the causes of accidents and incidents that occur in complex sociotechnical systems.

References

  1. Cusick, Stephen K.; Wells, Alexander T. (2012). Commercial aviation safety. McGraw-Hill Professional. ISBN 978-0-07-176305-9. OCLC 707964144.
  2. Ballesteros, J.S.A. (2016). Improving Air Safety Through Organizational Learning: Consequences of a Technology-led Model. Routledge. ISBN 978-1-317-11824-4.
  3. Stolzer, A.J.; Goglia, J.J. (3 March 2016). Safety management systems in aviation. ISBN 978-1-317-05983-7. OCLC 944186147.
  4. Bradley, Edgar (3 November 2016). Reliability engineering: a life cycle approach. ISBN 978-1-4987-6537-4. OCLC 963184495.
  5. Weeden, Marcia M. (1952). Failure mode and effects analysis (FMEAs) for small business owners and non-engineers: determining and preventing what can go wrong. ISBN 0-87389-918-0. OCLC 921141300.
  6. Crutchfield, Nathan (2008). Job hazard analysis: a guide for voluntary compliance and beyond: from hazard to risk: transforming the JHA from a tool to a process. Elsevier/Butterworth-Heinemann. ISBN 978-0-08-055416-7. OCLC 182759248.
