Automation bias

Automation bias is the propensity for humans to favor suggestions from automated decision-making systems and to ignore contradictory information made without automation, even if it is correct. [1] Automation bias stems from the social psychology literature, which found a bias in human-human interaction showing that people assign more positive evaluations to decisions made by humans than to a neutral object. [2] The same type of positivity bias has been found for human-automation interaction, [3] where automated decisions are rated more positively than neutral ones. [4] This has become a growing concern for decision making as intensive care units, nuclear power plants, and aircraft cockpits have increasingly integrated computerized system monitors and decision aids, largely in order to reduce human error. Errors of automation bias tend to occur when decision-making depends on computers or other automated aids and the human serves in a monitoring role but retains the authority to make decisions. Examples of automation bias range from urgent matters such as flying a plane on automatic pilot to matters as mundane as the use of spell-checking programs. [5]

Disuse and misuse

An operator's trust in the system can also lead to different interactions with the system, including system use, misuse, disuse, and abuse. [6]

The tendency toward overreliance on automated aids is known as "automation misuse". [6] [7] Misuse of automation occurs when a user fails to properly monitor an automated system, or when the automated system is used when it should not be. This is in contrast to disuse, where the user does not properly utilize the automation, either by turning it off or ignoring it. Both misuse and disuse can be problematic, but automation bias is directly related to misuse of the automation, whether through too much trust in the abilities of the system or through defaulting to heuristics. Misuse can lead to a lack of monitoring of the automated system or to blind agreement with an automation suggestion; these are categorized as two types of error, errors of omission and errors of commission, respectively. [8] [9] [6]

Automation use and disuse can also influence stages of information processing: information acquisition, information analysis, decision making and action selection, and action implementation. [10]

For example, information acquisition, the first step in information processing, is the process by which a user registers input via the senses. [10] An automated engine gauge might assist the user with information acquisition through simple interface features, such as highlighting changes in the engine's performance, thereby directing the user's selective attention. When problems arise with an aircraft, pilots may tend to overtrust its engine gauges, losing sight of other possible malfunctions not related to the engine. This attitude is a form of automation complacency and misuse. If, however, the pilot devotes time to interpreting the engine gauge and manipulating the aircraft accordingly, only to discover that the flight turbulence has not changed, the pilot may be inclined to ignore future error recommendations conveyed by an engine gauge, a form of automation complacency leading to disuse.

Errors of commission and omission

Automation bias can take the form of commission errors, which occur when users follow an automated directive without taking into account other sources of information. Conversely, omission errors occur when automated devices fail to detect or indicate problems and the user does not notice because they are not properly monitoring the system. [11]

Errors of omission have been shown to result from cognitive vigilance decrements, while errors of commission result from a combination of a failure to take information into account and an excessive trust in the reliability of automated aids. [5] Errors of commission occur for three reasons: (1) overt redirection of attention away from the automated aid; (2) diminished attention to the aid; (3) active discounting of information that counters the aid's recommendations. [12] Omission errors occur when the human decision-maker fails to notice an automation failure, either due to low vigilance or overtrust in the system. [5] For example, a spell-checking program incorrectly marking a word as misspelled and suggesting an alternative would be an error of commission, while a spell-checking program failing to notice a misspelled word would be an error of omission. In these cases, automation bias could be observed in a user accepting the suggested alternative without consulting a dictionary, or in a user failing to notice the misspelled word and assuming all the words are correct without reviewing them.
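The spell-checker scenario can be made concrete with a short illustrative sketch; the function and its inputs below are hypothetical and are not drawn from the cited studies.

```python
# Illustrative toy classifier for automation-bias error types (hypothetical example,
# not from the cited studies). A commission error: the user follows an incorrect
# automated directive. An omission error: the automation misses a real problem
# and the non-monitoring user misses it too.

def classify_outcome(aid_flagged: bool, problem_present: bool, user_acted: bool) -> str:
    if aid_flagged and not problem_present and user_acted:
        return "commission error"   # false alarm followed without cross-checking
    if not aid_flagged and problem_present and not user_acted:
        return "omission error"     # missed failure goes unnoticed by the user
    if user_acted == problem_present:
        return "correct response"   # user's action matches the true state of the world
    return "other error"            # e.g. user ignores a valid alert

# The two spell-checker cases from the text:
print(classify_outcome(aid_flagged=True,  problem_present=False, user_acted=True))   # commission error
print(classify_outcome(aid_flagged=False, problem_present=True,  user_acted=False))  # omission error
```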

Training that focused on the reduction of automation bias and related problems has been shown to lower the rate of commission errors, but not of omission errors. [5]

Factors

The presence of automatic aids, as one source puts it, "diminishes the likelihood that decision makers will either make the cognitive effort to seek other diagnostic information or process all available information in cognitively complex ways." It also renders users more likely to conclude their assessment of a situation too hastily after being prompted by an automatic aid to take a specific course of action. [7]

According to one source, there are three main factors that lead to automation bias. First, the human tendency to choose the least cognitive approach to decision-making, which is called the cognitive miser hypothesis. Second, the tendency of humans to view automated aids as having an analytical ability superior to their own. Third, the tendency of humans to reduce their own effort when sharing tasks, either with another person or with an automated aid. [12]

Other factors leading to an over-reliance on automation, and thus to automation bias, include inexperience in a task (though inexperienced users tend to benefit most from automated decision support systems), lack of confidence in one's own abilities, a lack of readily available alternative information, and a desire to save time and effort on complex tasks or under high workloads. [13] [14] [15] [8] It has been shown that people who have greater confidence in their own decision-making abilities tend to be less reliant on external automated support, while those with more trust in decision support systems (DSS) are more dependent upon them. [13]

Screen design

One study, published in the Journal of the American Medical Informatics Association, found that the position and prominence of advice on a screen can affect the likelihood of automation bias, with prominently displayed advice, whether correct or not, being more likely to be followed; another study, however, seemed to discount the importance of this factor. [13] According to another study, a greater amount of on-screen detail can make users less "conservative" and thus increase the likelihood of automation bias. [13] One study showed that making individuals accountable for their performance or the accuracy of their decisions reduced automation bias. [5]

Availability

"The availability of automated decision aids," states one study by Linda Skitka, "can sometimes feed into the general human tendency to travel the road of least cognitive effort." [5]

Awareness of process

One study also found that when users are made aware of the reasoning process employed by a decision support system, they are likely to adjust their reliance accordingly, thus reducing automation bias. [13]

Team vs. individual

The performance of jobs by crews instead of individuals acting alone does not necessarily eliminate automation bias. [16] [11] One study has shown that when automated devices failed to detect system irregularities, teams were no more successful than solo performers at responding to those irregularities. [5]

Training

Training that focuses on automation bias in aviation has succeeded in reducing omission errors by student pilots. [16] [11]

Automation failure and "learned carelessness"

It has been shown that automation failure is followed by a drop in operator trust, which in turn is succeeded by a slow recovery of trust. The decline in trust after an initial automation failure has been described as the first-failure effect. [12] By the same token, if automated aids prove to be highly reliable over time, the result is likely to be a heightened level of automation bias. This is called "learned carelessness." [12]

Provision of system confidence information

In cases where system confidence information is provided to users, that information itself can become a factor in automation bias. [12]

External pressures

Studies have shown that the more external pressures are exerted on an individual's cognitive capacity, the more he or she may rely on external support. [13]

Definitional problems

Although automation bias has been the subject of many studies, there continue to be complaints that automation bias remains ill-defined and that reporting of incidents involving automation bias is unsystematic. [13] [8]

A review of various automation bias studies categorized the types of tasks in which automated aids were used, as well as the functions the automated aids served. Tasks in which automated aids were used were categorized as monitoring tasks, diagnosis tasks, or treatment tasks. Types of automated assistance were listed as alerting automation, which tracks important changes and alerts the user; decision support automation, which may provide a diagnosis or recommendation; and implementation automation, in which the automated aid performs a specified task. [8]
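As a rough sketch of that coding scheme, the two dimensions could be represented as simple enumerations; the names below paraphrase the categories described in the review rather than quoting its exact labels.

```python
# Hypothetical encoding of the review's two coding dimensions; member names
# paraphrase the categories described in the text above.
from enum import Enum

class TaskType(Enum):
    MONITORING = "monitoring task"
    DIAGNOSIS = "diagnosis task"
    TREATMENT = "treatment task"

class AutomationType(Enum):
    ALERTING = "alerts the user to important changes"
    DECISION_SUPPORT = "provides a diagnosis or recommendation"
    IMPLEMENTATION = "performs a specified task itself"

# Example: a clinical decision support system offering a diagnosis would be coded as:
print(TaskType.DIAGNOSIS, AutomationType.DECISION_SUPPORT)
```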

Automation-induced complacency

The concept of automation bias is viewed as overlapping with automation-induced complacency, also known more simply as automation complacency. Like automation bias, it is a consequence of the misuse of automation and involves problems of attention. While automation bias involves a tendency to trust decision-support systems, automation complacency involves insufficient attention to and monitoring of automation output, usually because that output is viewed as reliable. [13] "Although the concepts of complacency and automation bias have been discussed separately as if they were independent," writes one expert, "they share several commonalities, suggesting they reflect different aspects of the same kind of automation misuse." It has been proposed, indeed, that the concepts of complacency and automation bias be combined into a single "integrative concept" because these two concepts "might represent different manifestations of overlapping automation-induced phenomena" and because "automation-induced complacency and automation bias represent closely linked theoretical concepts that show considerable overlap with respect to the underlying processes." [12]

Automation complacency has been defined as "poorer detection of system malfunctions under automation compared with under manual control." NASA's Aviation Safety Reporting System (ASRS) defines complacency as "self-satisfaction that may result in non-vigilance based on an unjustified assumption of satisfactory system state." Several studies have indicated that it occurs most often when operators are engaged in both manual and automated tasks at the same time. In turn, the operators' perceptions of the automated system's reliability can influence the way in which the operator interacts with the system. Endsley (2017) describes how high system reliability can lead users to disengage from monitoring systems, thereby increasing monitoring errors, decreasing situational awareness, and interfering with an operator's ability to re-assume control of the system in the event performance limitations have been exceeded. [17] This complacency can be sharply reduced when automation reliability varies over time instead of remaining constant, but is not reduced by experience and practice. Both expert and inexpert participants can exhibit automation bias as well as automation complacency. Neither of these problems can be easily overcome by training. [12]
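The effect described here, in which constant high reliability breeds complacency while variable reliability sustains monitoring, can be illustrated with a small Monte Carlo sketch. It is a toy model with invented parameters (the sampling rule, trust-update rate, and reliability schedules are assumptions), not a model from the cited studies.

```python
# Toy Monte Carlo sketch (hypothetical parameters, not from the cited studies):
# an operator checks the raw data less often as perceived reliability rises,
# so failures of constantly reliable automation are missed more often than
# failures of automation whose reliability visibly varies.
import random

def missed_failure_rate(reliability_schedule, trials=100_000, seed=1):
    rng = random.Random(seed)
    perceived = 0.5            # operator's running estimate of automation reliability
    missed = failures = 0
    for t in range(trials):
        true_reliability = reliability_schedule(t)
        automation_ok = rng.random() < true_reliability
        checks_raw_data = rng.random() < (1.0 - perceived)   # complacency: check less when trust is high
        if not automation_ok:
            failures += 1
            if not checks_raw_data:
                missed += 1
        # trust drifts toward observed performance (simple exponential average)
        perceived += 0.05 * ((1.0 if automation_ok else 0.0) - perceived)
    return missed / failures

constant = missed_failure_rate(lambda t: 0.95)
variable = missed_failure_rate(lambda t: 0.95 if (t // 500) % 2 == 0 else 0.70)
print(f"missed failures, constant reliability: {constant:.0%}")
print(f"missed failures, variable reliability: {variable:.0%}")
```

With these invented parameters, the missed-failure rate is markedly higher under constant reliability, mirroring the pattern described above.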

The term "automation complacency" was first used in connection with aviation accidents or incidents in which pilots, air-traffic controllers, or other workers failed to check systems sufficiently, assuming that everything was fine when, in reality, an accident was about to occur. Operator complacency, whether or not automation-related, has long been recognized as a leading factor in air accidents. [12]

Perceptions of reliability can thus produce a form of automation irony: more automation can decrease cognitive workload but increase the opportunity for monitoring errors, while less automation can increase workload but decrease the opportunity for monitoring errors. [18] Take, for example, a pilot flying through inclement weather, in which continuous thunder interferes with the pilot's ability to understand information transmitted by an air traffic controller (ATC). Regardless of how much effort is devoted to understanding the information transmitted by ATC, the pilot's performance is limited by the quality of the information available for the task. The pilot therefore has to rely on automated gauges in the cockpit to understand flight-path information. If the pilot perceives the automated gauges to be highly reliable, the amount of effort needed to attend to both ATC and the gauges may decrease, and the pilot may even ignore the gauges in order to devote mental resources to deciphering the ATC transmissions. In so doing, the pilot becomes a complacent monitor and runs the risk of missing critical information conveyed by the automated gauges. If, however, the pilot perceives the automated gauges to be unreliable, the pilot will have to interpret information from ATC and the gauges simultaneously. This creates scenarios in which the operator may expend unnecessary cognitive resources when the automation is in fact reliable, but it also increases the odds of identifying potential errors in the gauges should they occur. To calibrate the pilot's perception of reliability, automation should be designed to maintain workload at appropriate levels while also ensuring that the operator remains engaged with monitoring tasks. The operator should be less likely to disengage from monitoring when the system's reliability can change than when it remains constant (Parasuraman, 1993). [19]

To some degree, user complacency offsets the benefits of automation, and when an automated system's reliability falls below a certain level, automation is no longer a net asset. One 2007 study suggested that this crossover occurs when the reliability level drops to approximately 70%. Other studies have found that automation with a reliability level below 70% can still be of use to persons with access to the raw information sources, which can be combined with the automation output to improve performance. [12]
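One intuition for why such a crossover can exist is sketched below. It is a deliberately simplified illustration, not the model used in the 2007 study: it assumes a fully complacent operator who adopts every recommendation and an unaided accuracy of 70%, so the aid helps only while its reliability exceeds that figure.

```python
# Illustrative toy calculation (not the 2007 study's model): if a complacent
# operator simply accepts the aid's output, combined accuracy equals the aid's
# reliability, so the aid is a net asset only while its reliability exceeds the
# operator's unaided accuracy (assumed here to be 0.70).
unaided_accuracy = 0.70   # assumed accuracy of the operator working alone

for reliability in (0.60, 0.70, 0.80, 0.90):
    aided_accuracy = reliability      # full deference: every recommendation is adopted
    verdict = "net benefit" if aided_accuracy > unaided_accuracy else "no net benefit"
    print(f"aid reliability {reliability:.0%}: aided accuracy {aided_accuracy:.0%} -> {verdict}")
```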

Death by GPS, in which the deaths of individuals are caused in part by following inaccurate GPS directions, is another example of automation complacency.

Sectors

Automation bias has been examined across many research fields. [13] It can be a particularly major concern in aviation, medicine, process control, and military command-and-control operations. [12]

Aviation

At first, discussion of automation bias focused largely on aviation. Automated aids have played an increasing role in cockpits, taking a growing role in the control of such flight tasks as determining the most fuel-efficient routes, navigating, and detecting and diagnosing system malfunctions. The use of these aids, however, can lead to less attentive and less vigilant information seeking and processing on the part of human beings. In some cases, human beings may place more confidence in the misinformation provided by flight computers than in their own skills. [7]

An important factor in aviation-related automation bias is the degree to which pilots perceive themselves as responsible for the tasks being carried out by automated aids. One study of pilots showed that the presence of a second crewmember in the cockpit did not affect automation bias. A 1994 study compared the impact of low and high levels of automation (LOA) on pilot performance, and concluded that pilots working with a high level spent less time reflecting independently on flight decisions. [12]

In another study, all of the pilots given false automated alerts that instructed them to shut off an engine did so, even though those same pilots insisted in an interview that they would not respond to such an alert by shutting down an engine, and would instead have reduced the power to idle. One 1998 study found that pilots with approximately 440 hours of flight experience detected more automation failures than did nonpilots, although both groups showed complacency effects. A 2001 study of pilots using a cockpit automation system, the Engine-indicating and crew-alerting system (EICAS), showed evidence of complacency. The pilots detected fewer engine malfunctions when using the system than when performing the task manually. [12]

In a 2005 study, experienced air-traffic controllers used high-fidelity simulation of an ATC (Free Flight) scenario that involved the detection of conflicts among "self-separating" aircraft. They had access to an automated device that identified potential conflicts several minutes ahead of time. When the device failed near the end of the simulation process, considerably fewer controllers detected the conflict than when the situation was handled manually. Other studies have produced similar findings. [12]

Two studies of automation bias in aviation found a higher rate of commission errors than of omission errors, while another aviation study found omission rates of 55% and commission rates of 0%. [13] Automation-related omission errors are especially common during the cruise phase. When a China Airlines flight lost power in one engine, the autopilot attempted to correct for the problem by lowering the left wing, an action that hid the problem from the crew. When the autopilot was disengaged, the airplane rolled to the right and descended steeply, causing extensive damage. The 1983 shooting down of a Korean Airlines 747 over Soviet airspace occurred because the Korean crew "relied on automation that had been inappropriately set up, and they never checked their progress manually." [7]

Health care

Clinical decision support systems (CDSS) are designed to aid clinical decision-making. They have the potential to effect a great improvement in this regard, and to result in improved patient outcomes. Yet while CDSS, when used properly, bring about an overall improvement in performance, they also cause errors that may not be recognized owing to automation bias. One danger is that the incorrect advice given by these systems may cause users to change a correct decision that they have made on their own. Given the highly serious nature of some of the potential consequences of automation bias in the health-care field, it is especially important to be aware of this problem when it occurs in clinical settings. [13]

Sometimes automation bias in clinical settings is a major problem that renders CDSS, on balance, counterproductive; sometimes it is a minor problem, with the benefits outweighing the damage done. One study found more automation bias among older users, but it was noted that this could be a result not of age but of experience. Studies suggest, indeed, that familiarity with CDSS often leads to desensitization and habituation effects. Although automation bias occurs more often among persons who are inexperienced in a given task, inexperienced users show the greatest performance improvement when they use CDSS. In one study, the use of CDSS improved clinicians' answers by 21 percentage points, from 29% to 50% correct, with 7% of the correct non-CDSS answers being changed incorrectly. [13]
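Interpreted on a hypothetical set of 100 questions, and assuming the 7% figure applies to the answers that were correct before the CDSS advice was shown, those numbers work out roughly as follows:

```python
# Back-of-the-envelope arithmetic for the reported CDSS figures (assumes 100
# questions and that the 7% applies to answers that were correct before the
# CDSS advice was shown).
questions = 100
correct_before = 29                                 # 29% correct without CDSS
correct_after = 50                                  # 50% correct with CDSS
switched_to_wrong = round(0.07 * correct_before)    # ~2 correct answers changed incorrectly
switched_to_right = (correct_after - correct_before) + switched_to_wrong

print(f"net improvement: {correct_after - correct_before} answers")   # 21
print(f"wrong -> right with CDSS advice: ~{switched_to_right}")       # ~23
print(f"right -> wrong (automation bias): ~{switched_to_wrong}")      # ~2
```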

A 2005 study found that when primary-care physicians used electronic sources such as PubMed, Medline, and Google, there was a "small to medium" increase in correct answers, while in an equally small percentage of instances the physicians were misled by their use of those sources, and changed correct to incorrect answers. [12]

Studies in 2004 and 2008 examining the effect of automated aids on the diagnosis of breast cancer found clear evidence of automation bias involving omission errors: cancers that were detected in 46% of cases when no automated aid was used were detected in only 21% of cases in which an automated aid was used but failed to identify the cancer. [12]

Military

Automation bias can be a crucial factor in the use of intelligent decision support systems for military command-and-control operations. One 2004 study found that automation bias effects have contributed to a number of fatal military decisions, including friendly-fire killings during the Iraq War. Researchers have sought to determine the proper level of automation for decision support systems in this field. [12]

Automotive

Automation complacency is also a challenge for automated driving systems in which the human only has to monitor the system or act as a fallback driver. This is discussed, for example, in the National Transportation Safety Board report on the fatal collision between an Uber test vehicle and the pedestrian Elaine Herzberg. [20]

Correction

Automation bias can be mitigated by redesigning automated systems to reduce display prominence, decrease information complexity, or couch assistance as supportive rather than directive information. [13] Training users on automated systems that introduce deliberate errors reduces automation bias more effectively than merely telling them that errors can occur. [21] Excessive checking and questioning of automated assistance can increase time pressure and task complexity, reducing the benefit of automation, so some automated decision support systems balance positive and negative effects rather than attempting to eliminate negative effects. [14]

See also

Automation
Status quo bias
Test automation
Neville A. Stanton
Task analysis
Human reliability
Pilot error
Situational awareness
Ecological interface design
Cognitive ergonomics
Human error
Neuroergonomics
Linda Skitka
Cognitive bias mitigation
Human performance modeling
Pilot decision making
Stress in the aviation industry
NOTECHS
SHELL model
Out-of-the-loop performance problem

References

1. Cummings, Mary (2004). "Automation Bias in Intelligent Time Critical Decision Support Systems" (PDF). AIAA 1st Intelligent Systems Technical Conference. doi:10.2514/6.2004-6313. ISBN 978-1-62410-080-2.
2. Bruner, J. S.; Tagiuri, R. (1954). "The perception of people". In G. Lindzey (ed.), Handbook of Social Psychology (vol. 2), pp. 634–654. Reading, MA: Addison-Wesley.
3. Madhavan, P.; Wiegmann, D. A. (2007). "Similarities and differences between human–human and human–automation trust: an integrative review". Theoretical Issues in Ergonomics Science. 8 (4): 277–301. doi:10.1080/14639220500337708.
4. Dzindolet, Mary T.; Peterson, Scott A.; Pomranky, Regina A.; Pierce, Linda G.; Beck, Hall P. (2003). "The role of trust in automation reliance". International Journal of Human-Computer Studies. 58 (6): 697–718. doi:10.1016/S1071-5819(03)00038-7.
5. Skitka, Linda. "Automation". University of Illinois at Chicago. Retrieved 16 January 2017.
6. Parasuraman, Raja; Riley, Victor (1997). "Humans and Automation: Use, Misuse, Disuse, Abuse". Human Factors. 39 (2): 230–253. doi:10.1518/001872097778543886.
7. Mosier, Kathleen; Skitka, Linda; Heers, Susan; Burdick, Mark (1997). "Automation Bias: Decision Making and Performance in High-Tech Cockpits". International Journal of Aviation Psychology. 8 (1): 47–63. doi:10.1207/s15327108ijap0801_3. PMID 11540946.
8. Lyell, David; Coiera, Enrico (2016). "Automation bias and verification complexity: a systematic review". Journal of the American Medical Informatics Association. 24 (2): 424–431. doi:10.1093/jamia/ocw105. PMC 7651899. PMID 27516495.
9. Tversky, A.; Kahneman, D. (1974). "Judgment under Uncertainty: Heuristics and Biases". Science. 185 (4157): 1124–1131. doi:10.1126/science.185.4157.1124. PMID 17835457.
10. Wickens, Christopher D.; Hollands, Justin G.; Banbury, Simon; Parasuraman, Raja (2015). Engineering Psychology and Human Performance (4th ed.). Psychology Press. pp. 335–338. ISBN 9781317351320.
11. Mosier, Kathleen L.; Dunbar, Melisa; McDonnell, Lori; Skitka, Linda J.; Burdick, Mark; Rosenblatt, Bonnie (1998). "Automation Bias and Errors: Are Teams Better than Individuals?". Proceedings of the Human Factors and Ergonomics Society Annual Meeting. 42 (3): 201–205. doi:10.1177/154193129804200304.
12. Parasuraman, Raja; Manzey, Dietrich (2010). "Complacency and Bias in Human Use of Automation: An Attentional Integration". Human Factors. 52 (3): 381–410. doi:10.1177/0018720810376055. PMID 21077562.
13. Goddard, K.; Roudsari, A.; Wyatt, J. C. (2012). "Automation bias: a systematic review of frequency, effect mediators, and mitigators". Journal of the American Medical Informatics Association. 19 (1): 121–127. doi:10.1136/amiajnl-2011-000089. PMC 3240751. PMID 21685142.
14. Alberdi, Eugenio; Strigini, Lorenzo; Povyakalo, Andrey A.; Ayton, Peter (2009). "Why Are People's Decisions Sometimes Worse with Computer Support?". In Computer Safety, Reliability, and Security. Lecture Notes in Computer Science, vol. 5775. Springer. pp. 18–31. doi:10.1007/978-3-642-04468-7_3. ISBN 978-3-642-04467-0.
15. Goddard, Kate; Roudsari, Abdul; Wyatt, Jeremy C. (2014). "Automation bias: Empirical results assessing influencing factors". International Journal of Medical Informatics. 83 (5): 368–375. doi:10.1016/j.ijmedinf.2014.01.001. PMID 24581700.
16. Mosier, Kathleen; Skitka, Linda; Dunbar, Melisa; McDonnell, Lori (2001). "Aircrews and Automation Bias: The Advantages of Teamwork?". International Journal of Aviation Psychology. 11 (1): 1–14. doi:10.1207/S15327108IJAP1101_1.
17. Endsley, Mica (2017). "From Here to Autonomy: Lessons Learned from Human–Automation Research". Human Factors. 59 (1): 5–27. doi:10.1177/0018720816681350. PMID 28146676.
18. Bainbridge, Lisanne (1983). "Ironies of Automation". Automatica. 19 (6): 775–779. doi:10.1016/0005-1098(83)90046-8.
19. Parasuraman, Raja; Molloy, Robert; Singh, Indramani L. (1993). "Performance Consequences of Automation-Induced 'Complacency'". International Journal of Aviation Psychology. 3 (1): 1–23. doi:10.1207/s15327108ijap0301_1.
20. "Collision Between Vehicle Controlled by Developmental Automated Driving System and Pedestrian". National Transportation Safety Board (www.ntsb.gov). Retrieved December 19, 2019.
21. Bahner, J. Elin; Hüper, Anke-Dorothea; Manzey, Dietrich (2008). "Misuse of automated decision aids: Complacency, automation bias and the impact of training experience". International Journal of Human-Computer Studies. 66 (9): 688–699. doi:10.1016/j.ijhcs.2008.06.001.

Further reading