The Influence Diagrams Approach (IDA) is a technique used in the field of human reliability assessment (HRA) to evaluate the probability of a human error occurring throughout the completion of a specific task. From such analyses, measures can then be taken to reduce the likelihood of errors occurring within a system and therefore improve the overall level of safety. There are three primary reasons for conducting an HRA: error identification, error quantification and error reduction. The many techniques used for these purposes can be split into one of two classifications: first-generation techniques and second-generation techniques. First-generation techniques work on the basis of the simple dichotomy of ‘fits/doesn’t fit’ in matching an error situation in context with related error identification and quantification, whereas second-generation techniques are more theory-based in their assessment and quantification of errors. HRA techniques have been utilised in a range of industries, including healthcare, engineering, nuclear, transportation and business; each technique has varying uses within different disciplines.
An influence diagram (ID) is essentially a graphical representation of the probabilistic interdependence between performance shaping factors (PSFs): factors that are likely to influence the success or failure of the performance of a task. The approach originates from the field of decision analysis and uses expert judgement in its formulations. It rests on the principle that human reliability results from the combination of factors, such as organisational and individual factors, which in turn combine to provide an overall influence. There exists a chain of influences in which each successive level affects the next. The role of the ID is to depict these influences, and the nature of their interrelationships, in a more comprehensible format. In this way, the diagram can be used to represent the shared beliefs of a group of experts on the outcome of a particular action and the factors that may or may not influence that outcome. For each of the identified influences, quantitative values are calculated, which are then used to derive final human error probability (HEP) estimates.
IDA is a decision-analysis-based framework developed by eliciting expert judgement through group workshops. Unlike other first-generation HRA techniques, IDA explicitly considers the interdependency of operator and organisational PSFs. The IDA approach was first outlined by Howard and Matheson [1], and then developed specifically for the nuclear industry by Embrey et al. [2].
The IDA methodology is conducted in a series of 10 steps as follows:
1. Describe all relevant conditioning events
Experts who have sufficient knowledge of the situation under evaluation form a group; in-depth knowledge is essential for the technique to be used to its optimal potential. The chosen individuals include a range of experts – typically those with first-hand experience in the operational context under consideration – such as plant supervisors, reliability assessors, human factors specialists and designers. The group collectively assesses and gradually develops a representation of the most significant influences that will affect the success of the situation. The resultant diagram is useful in that it identifies both the immediate and underlying influences of the considered factors, with regard to their effect on the situation under assessment and upon one another.
2. Refine the target event definition
The event which is the basis of the assessment needs to be defined as tightly as possible.
3. Balance of evidence
The next stage is to select a middle-level event in the situation and, for each of the bottom-level influences, assess the weight of evidence, also known as the ‘balance of evidence’; this represents the experts’ analysis of the likelihood that a specific state of an influence, or combination of the various influences, exists within the considered situation.
4. Assess the weight of evidence for this middle-level influence, conditional on the bottom-level influences
5. Repeat steps 3 and 4 for the remaining middle-level and bottom-level influences
These three steps are conducted with the aim of determining the extent to which the influences exist in the process, alone and in different combinations, and their conditional effects.
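The conditional calculation behind steps 3–5 can be sketched in code. The following Python fragment is a minimal illustration only: the influence names, states and probability values are invented for the example, not drawn from any real assessment.

```python
from itertools import product

# Expert-assessed P(bottom-level influence is in its favourable state).
# Names and values are illustrative assumptions.
bottom = {"training": 0.8, "procedures": 0.7}

# P(middle-level influence favourable | each combination of bottom states),
# as elicited from the expert group (illustrative values).
p_middle_given_bottom = {
    (True, True): 0.95,
    (True, False): 0.70,
    (False, True): 0.60,
    (False, False): 0.20,
}

# Unconditional weight of evidence for the middle-level influence:
# sum over all bottom-level combinations of P(middle | combo) * P(combo).
names = list(bottom)
p_middle = 0.0
for states in product([True, False], repeat=len(names)):
    p_combo = 1.0
    for name, state in zip(names, states):
        p_combo *= bottom[name] if state else 1 - bottom[name]
    p_middle += p_middle_given_bottom[states] * p_combo

print(round(p_middle, 4))  # unconditional weight of evidence
```

The same marginalisation is repeated for each middle-level influence in turn, which is exactly the iteration described in step 5.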
6. Assess probabilities of target event conditional on middle-level influences
7. Calculate the unconditional probability of the target event and the unconditional weight of evidence of the middle-level influences
For the various combinations of influences that have been considered, the experts provide direct estimates of the likelihood of either success or failure.
8. Compare these results to the assessors’ holistic judgements of the HEPs; revise if necessary to reduce discrepancies
At this stage, the probabilities derived from the technique are compared to holistic estimates from the experts, which have been derived through an absolute probability judgement (APJ) process. Discrepancies are discussed and resolved within the group as required.
9. Repeat the above steps until the assessors have finished refining their judgements
The above steps are iterated: all experts share opinions, highlight new aspects of the problem and revise their initial assessments of the situation. The process is deemed complete when all participants reach a consensus that any misgivings about the discrepancies have been resolved.
10. Perform sensitivity analyses
If individual experts remain unsure about discrepancies in the assessments that have been made, sensitivity analysis can be used to determine the extent to which individual influence assessments affect the target-event HEP. A cost-benefit analysis can also be conducted at this stage of the process.
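A one-at-a-time sensitivity analysis of this kind can be sketched as follows. The model uses the three second-level influences named in the text (quality of information, organisation and personal factors), but every probability value and the conditional failure table are invented for illustration.

```python
from itertools import product

# Expert-assessed P(influence favourable); illustrative values only.
weights = {"information": 0.9, "organisation": 0.7, "personal": 0.8}

# Illustrative P(task failure | influence states), indexed by the
# favourability of (information, organisation, personal).
p_fail = {
    (True, True, True): 0.001,
    (True, True, False): 0.01,
    (True, False, True): 0.01,
    (False, True, True): 0.01,
    (True, False, False): 0.05,
    (False, True, False): 0.05,
    (False, False, True): 0.05,
    (False, False, False): 0.3,
}

def hep(w):
    """Unconditional HEP by marginalising over all influence states."""
    names = list(w)
    total = 0.0
    for states in product([True, False], repeat=len(names)):
        p = 1.0
        for name, s in zip(names, states):
            p *= w[name] if s else 1 - w[name]
        total += p_fail[states] * p
    return total

base = hep(weights)
# Sensitivity: lower each weight of evidence by 0.1 and report the
# resulting change in the target-event HEP.
for name in weights:
    perturbed = dict(weights, **{name: weights[name] - 0.1})
    print(name, round(hep(perturbed) - base, 5))
```

Influences whose perturbation moves the HEP the most are the ones whose assessments most deserve further scrutiny by the group.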
The diagram below depicts an influence diagram which can be applied to any human reliability assessment [3].
This diagram was originally developed for use in the HRA of a scenario in a nuclear power setting. It depicts the direct influences of each of the factors on the situation under consideration, as well as providing an indication of the way in which some of the factors affect each other.
There are seven first-level influences on the outcome of the high-level task, numbered 1 to 7. Each of these describes an aspect of the task under assessment, which needs to be judged to be in one of two states.
Differing combinations of these first-level influences affect the state of those on the second level.
By assessing the state of the second-level influences – the quality of information, organisation and personal factors – the overall likelihood of either success or failure of the task can be calculated by means of conditional probability calculations.
Safety engineering is an engineering discipline which assures that engineered systems provide acceptable levels of safety. It is strongly related to industrial engineering/systems engineering, and the subset system safety engineering. Safety engineering assures that a life-critical system behaves as needed, even when components fail.
Failure mode and effects analysis is the process of reviewing as many components, assemblies, and subsystems as possible to identify potential failure modes in a system and their causes and effects. For each component, the failure modes and their resulting effects on the rest of the system are recorded in a specific FMEA worksheet. There are numerous variations of such worksheets. A FMEA can be a qualitative analysis, but may be put on a quantitative basis when mathematical failure rate models are combined with a statistical failure mode ratio database. It was one of the first highly structured, systematic techniques for failure analysis. It was developed by reliability engineers in the late 1950s to study problems that might arise from malfunctions of military systems. An FMEA is often the first step of a system reliability study.
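One common way to put an FMEA worksheet on a semi-quantitative footing is the risk priority number (RPN) variant, in which each failure mode is scored for severity, occurrence and detection. The sketch below uses invented components and ratings purely for illustration.

```python
# Minimal FMEA worksheet sketch using the common risk-priority-number
# variant: RPN = severity * occurrence * detection, each rated 1-10.
# Components and ratings are invented for illustration.
worksheet = [
    # (component, failure mode, severity, occurrence, detection)
    ("pump seal", "leak", 7, 4, 3),
    ("relay", "contacts stick closed", 9, 2, 5),
    ("sensor", "drifts out of calibration", 5, 6, 7),
]

rows = [(c, mode, s * o * d) for c, mode, s, o, d in worksheet]

# Review the highest-RPN failure modes first.
for component, mode, rpn in sorted(rows, key=lambda r: -r[2]):
    print(f"{component:10s} {mode:26s} RPN={rpn}")
```

Note that a hard-to-detect, frequent failure (the sensor here) can outrank a more severe but rarer one, which is why the three ratings are kept separate on the worksheet.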
Task analysis is a fundamental tool of human factors engineering. It entails analyzing how a task is accomplished, including a detailed description of both manual and mental activities, task and element durations, task frequency, task allocation, task complexity, environmental conditions, necessary clothing and equipment, and any other unique factors involved in or required for one or more people to perform a given task.
In the field of human factors and ergonomics, human reliability is the probability that a human performs a task to a sufficient standard. Reliability of humans can be affected by many factors such as age, physical health, mental state, attitude, emotions, personal propensity for certain mistakes, and cognitive biases.
Reliability engineering is a sub-discipline of systems engineering that emphasizes the ability of equipment to function without failure. Reliability is defined as the probability that a product, system, or service will perform its intended function adequately for a specified period of time, or will operate in a defined environment without failure. Reliability is closely related to availability, which is typically described as the ability of a component or system to function at a specified moment or interval of time.
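The distinction between reliability and availability can be made concrete with two standard textbook formulas: the exponential reliability function R(t) = e^(−λt) for a constant failure rate, and steady-state availability MTBF/(MTBF + MTTR). The figures below are illustrative.

```python
import math

# Reliability under a constant failure rate (exponential model);
# illustrative figures only.
failure_rate = 1e-4      # failures per hour
mission_time = 1000.0    # hours

# Probability the item performs without failure for the whole mission.
reliability = math.exp(-failure_rate * mission_time)

# Steady-state availability: fraction of time the item is functioning.
mtbf = 500.0   # mean time between failures, hours
mttr = 2.0     # mean time to repair, hours
availability = mtbf / (mtbf + mttr)

print(round(reliability, 4), round(availability, 4))
```

An item can thus be highly available (repairs are quick) while still being only moderately reliable over a long mission, and vice versa.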
Probabilistic risk assessment (PRA) is a systematic and comprehensive methodology to evaluate risks associated with a complex engineered technological entity or the effects of stressors on the environment.
A hazard analysis is one of many methods that may be used to assess risk. At its core, the process entails describing a system object that intends to conduct some activity. During the performance of that activity, an adverse event may be encountered that could cause or contribute to an occurrence. Finally, that occurrence will result in some outcome that may be measured in terms of the degree of loss or harm. This outcome may be measured on a continuous scale, such as an amount of monetary loss, or the outcomes may be categorized into various levels of severity.
In statistics, inter-rater reliability is the degree of agreement among independent observers who rate, code, or assess the same phenomenon.
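A widely used inter-rater agreement statistic for two raters is Cohen's kappa, which corrects observed agreement for the agreement expected by chance. The ratings below are invented for illustration.

```python
# Cohen's kappa for two raters; ratings are invented for illustration.
rater_a = ["pass", "pass", "fail", "pass", "fail", "pass"]
rater_b = ["pass", "fail", "fail", "pass", "fail", "pass"]

n = len(rater_a)
categories = set(rater_a) | set(rater_b)

# Observed agreement: fraction of items both raters coded identically.
p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n

# Chance agreement, from each rater's marginal category frequencies.
p_e = sum(
    (rater_a.count(c) / n) * (rater_b.count(c) / n) for c in categories
)

kappa = (p_o - p_e) / (1 - p_e)
print(round(kappa, 3))
```

A kappa of 1 indicates perfect agreement, 0 indicates agreement no better than chance, and negative values indicate systematic disagreement.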
Absolute probability judgement (APJ) is a technique used in the field of human reliability assessment (HRA) for the purposes of evaluating the probability of a human error occurring throughout the completion of a specific task.
Human Cognitive Reliability Correlation (HCR) is a technique used in the field of human reliability assessment (HRA) for the purposes of evaluating the probability of a human error occurring throughout the completion of a specific task.
Tecnica Empirica Stima Errori Operatori (TESEO) is a technique in the field of human reliability assessment (HRA) that evaluates the probability of a human error occurring throughout the completion of a specific task.
The Technique for human error-rate prediction (THERP) is a technique that is used in the field of Human Reliability Assessment (HRA) to evaluate the probability of human error occurring throughout the completion of a task. From such an analysis, some corrective measures could be taken to reduce the likelihood of errors occurring within a system. The overall goal of THERP is to apply and document probabilistic methodological analyses to increase safety during a given process. THERP is used in fields such as error identification, error quantification and error reduction.
Human error assessment and reduction technique (HEART) is a technique used in the field of human reliability assessment (HRA) for the purposes of evaluating the probability of a human error occurring throughout the completion of a specific task.
Success Likelihood Index Method (SLIM) is a technique used in the field of human reliability assessment (HRA) for the purposes of evaluating the probability of a human error occurring throughout the completion of a specific task.
A Technique for Human Event Analysis (ATHEANA) is a technique used in the field of human reliability assessment (HRA). The purpose of ATHEANA is to evaluate the probability of human error while performing a specific task. From such analyses, preventative measures can then be taken to reduce human errors within a system and therefore lead to improvements in the overall level of safety.
In simple terms, risk is the possibility of something bad happening. Risk involves uncertainty about the effects/implications of an activity with respect to something that humans value, often focusing on negative, undesirable consequences. Many different definitions have been proposed. One international standard definition of risk is the "effect of uncertainty on objectives".
Risk management tools help address uncertainty by identifying risks, generating metrics, setting parameters, prioritizing issues, developing responses, and tracking risks. Without the use of these tools, techniques, documentation, and information systems, it can be challenging to effectively monitor these activities.
Human factors are the physical or cognitive properties of individuals, or social behavior which is specific to humans, and which influence functioning of technological systems as well as human-environment equilibria. The safety of underwater diving operations can be improved by reducing the frequency of human error and the consequences when it does occur. Human error can be defined as an individual's deviation from acceptable or desirable practice which culminates in undesirable or unexpected results. Human factors include both the non-technical skills that enhance safety and the non-technical factors that contribute to undesirable incidents that put the diver at risk.
[Safety is] An active, adaptive process which involves making sense of the task in the context of the environment to successfully achieve explicit and implied goals, with the expectation that no harm or damage will occur. – G. Lock, 2022
Dive safety is primarily a function of four factors: the environment, equipment, individual diver performance and dive team performance. The water is a harsh and alien environment which can impose severe physical and psychological stress on a diver. The remaining factors must be controlled and coordinated so the diver can overcome the stresses imposed by the underwater environment and work safely. Diving equipment is crucial because it provides life support to the diver, but the majority of dive accidents are caused by individual diver panic and an associated degradation of the individual diver's performance. – M.A. Blumenberg, 1996
ISO/IEC 31010 is a standard concerning risk management codified by the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC). The full name of the standard is ISO/IEC 31010:2019 – Risk management – Risk assessment techniques.
Automation bias is the propensity for humans to favor suggestions from automated decision-making systems and to ignore contradictory information produced without automation, even if that information is correct. Automation bias stems from the social psychology literature, which found a bias in human-human interaction showing that people assign more positive evaluations to decisions made by humans than to a neutral object. The same type of positivity bias has been found for human-automation interaction, where automated decisions are rated more positively than neutral ones. This has become a growing problem for decision making as intensive care units, nuclear power plants, and aircraft cockpits have increasingly integrated computerized system monitors and decision aids, largely to factor out possible human error. Errors of automation bias tend to occur when decision-making is dependent on computers or other automated aids and the human is in an observational role but able to make decisions. Examples of automation bias range from urgent matters like flying a plane on automatic pilot to mundane matters such as the use of spell-checking programs.
[2] Embrey, D.E., et al. (1985). Appendix D: A Socio-Technical Approach to Assessing Human Reliability (STAHR), in Pressurized Thermal Shock Evaluation of the Calvert Cliffs Unit 1 Nuclear Power Plant. Research Report on DOE Contract 105840R21400, Selby, D. (Ed.). Oak Ridge National Laboratory, Oak Ridge, TN.
[3] Humphreys, P. (1995). Human Reliability Assessor's Guide. Human Factors in Reliability Group.
[4] Ainsworth, L.K., & Kirwan, B. (1992). A Guide to Task Analysis. Taylor & Francis.