A description error, or selection error, is a human error that occurs when a person performs the correct action on the wrong object because the action that would have led to the desired result was insufficiently specified. This commonly happens when similar actions lead to different results. A typical example is a panel with rows of identical switches, where it is easy to carry out a correct action (flipping a switch) on the wrong switch because the switches are insufficiently differentiated.[1]
Such an error can be very disorienting: if noticed right away, it usually causes a brief loss of situation awareness or automation surprise, and if it goes unnoticed, it can lead to more serious problems. Interaction design should therefore make allowances, such as clearly highlighting the currently selected item.
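As one illustration of such an allowance, the sketch below renders a list of near-identical items and visually marks the current selection; the function name and marker style are hypothetical, chosen only to show the idea of differentiating otherwise identical choices.

```python
def render_menu(items, selected_index):
    """Render a list of similar items, clearly marking the current selection.

    Hypothetical sketch: highlighting the selected row lets the user verify
    they are acting on the intended object, reducing the chance of a
    description error among near-identical choices.
    """
    lines = []
    for i, item in enumerate(items):
        if i == selected_index:
            lines.append(f"> [{item}] <  (selected)")  # visually distinct marker
        else:
            lines.append(f"   {item}")
    return "\n".join(lines)

# Example: three identical-looking switches, only one is the target.
print(render_menu(["Switch A", "Switch B", "Switch C"], selected_index=1))
```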
Donald Norman discusses the subject in his book The Design of Everyday Things, where he explains how user-centered design can help account for the human limitations that lead to errors such as description errors. James Reason also covers the subject in his book Human Error.
A software bug is an error, flaw or fault in the design, development, or operation of computer software that causes it to produce an incorrect or unexpected result, or to behave in unintended ways. The process of finding and correcting bugs is termed "debugging" and often uses formal techniques or tools to pinpoint them. Since the 1950s, some computer systems have been designed to detect or auto-correct various software errors during operations.
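To make the definition concrete, here is a minimal, hypothetical Python example: a one-character flaw produces an incorrect result, and the corrected version includes a runtime assertion of the kind systems use to detect one class of error during operation.

```python
def average(values):
    """Intended to return the arithmetic mean of a non-empty list."""
    total = 0
    for v in values:
        total += v
    return total / (len(values) - 1)   # BUG: off-by-one divisor

def average_fixed(values):
    """Corrected version, with a runtime check that detects misuse."""
    assert len(values) > 0, "average of an empty list is undefined"
    return sum(values) / len(values)

print(average([2, 4, 6]))        # 6.0 -- incorrect result caused by the bug
print(average_fixed([2, 4, 6]))  # 4.0 -- the expected result
```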
An error is an inaccurate or incorrect action, thought, or judgement.
Usability engineering is a professional discipline that focuses on improving the usability of interactive systems. It draws on theories from computer science and psychology to define problems that occur during the use of such a system. Usability engineering involves the testing of designs at various stages of the development process, with users or with usability experts. The history of usability engineering in this context dates back to the 1980s. In 1988, authors John Whiteside and John Bennett, of Digital Equipment Corporation and IBM respectively, published material on the subject, identifying the early setting of goals, iterative evaluation, and prototyping as key activities. The usability expert Jakob Nielsen is a leader in the field of usability engineering. In his 1993 book Usability Engineering, Nielsen describes methods to use throughout a product development process so that designers can take into account the most important barriers to learnability, efficiency, memorability, error-free use, and subjective satisfaction before implementing the product. Nielsen's work describes how to perform usability tests and how to use usability heuristics in the usability engineering lifecycle. Ensuring good usability via this process prevents problems in product adoption after release. Rather than focusing on finding solutions for usability problems, which is the focus of a UX or interaction designer, a usability engineer mainly concentrates on the research phase. In this sense it is not strictly a design role, and many usability engineers therefore have a background in computer science. Its connection to the design trade is nonetheless crucial, since it provides the framework within which designers can ensure that their products connect properly with their target users.
Statistical bias, in the mathematical field of statistics, is a systematic tendency in which the methods used to gather data and generate statistics present an inaccurate, skewed or biased depiction of reality. Statistical bias can arise at numerous stages of the data collection and analysis process, including the source of the data, the methods used to collect the data, the estimator chosen, and the methods used to analyze the data. Data analysts can take various measures at each stage of the process to reduce the impact of statistical bias in their work. Understanding the source of statistical bias can help to assess whether the observed results are close to actuality. Issues of statistical bias have been argued to be closely linked to issues of statistical validity.
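To make the role of the estimator concrete: the bias of an estimator is the difference between its expected value and the true parameter. A standard textbook example, stated here for illustration rather than drawn from the passage above, is the uncorrected sample variance, which systematically underestimates the population variance:

```latex
\operatorname{Bias}(\hat{\theta}) = \mathbb{E}[\hat{\theta}] - \theta
% Example: the uncorrected sample variance
\hat{\sigma}^2 = \frac{1}{n}\sum_{i=1}^{n}(x_i - \bar{x})^2,
\qquad
\mathbb{E}[\hat{\sigma}^2] = \frac{n-1}{n}\,\sigma^2,
\qquad
\operatorname{Bias}(\hat{\sigma}^2) = -\frac{\sigma^2}{n}
```

Dividing by n - 1 instead of n (Bessel's correction) removes this particular bias.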
In psychoanalysis, a Freudian slip, also called parapraxis, is an error in speech, memory, or physical action that occurs due to the interference of an unconscious subdued wish or internal train of thought. Classical examples involve slips of the tongue, but psychoanalytic theory also embraces misreadings, mishearings, mistypings, temporary forgettings, and the mislaying and losing of objects.
Usability can be described as the capacity of a system to allow its users to perform their tasks safely, effectively, and efficiently while enjoying the experience. In software engineering, usability is the degree to which software can be used by specified consumers to achieve quantified objectives with effectiveness, efficiency, and satisfaction in a quantified context of use.
A glitch is a short-lived fault in a system, such as a transient fault that corrects itself, making it difficult to troubleshoot. The term is particularly common in the computing and electronics industries, in circuit bending, as well as among players of video games. More generally, all types of systems, including human organizations and nature, experience glitches.
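One common engineering response to a transient fault is simply to retry the operation after a short delay; the sketch below is an illustrative Python pattern (the function name and parameters are hypothetical), not a remedy prescribed by the text above.

```python
import time

def retry(operation, attempts=3, delay=0.1):
    """Retry an operation that may fail due to a short-lived (glitch-like) fault.

    If the fault is transient, a later attempt often succeeds; a persistent
    failure is re-raised so it can be investigated rather than hidden.
    """
    for attempt in range(1, attempts + 1):
        try:
            return operation()
        except OSError:                      # e.g. a momentary I/O hiccup
            if attempt == attempts:
                raise                        # persistent fault: surface it
            time.sleep(delay * attempt)      # brief backoff before retrying
```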
User-centered design (UCD) or user-driven development (UDD) is a framework of processes in which usability goals, user characteristics, environment, tasks, and the workflow of a product, service or process are given extensive attention at each stage of the design process. Testing is conducted, with or without actual users, during each stage of the process, from requirements through pre-production models to post-production, completing a circle of proof and ensuring that "development proceeds with the user as the center of focus." Such testing is necessary because it is often very difficult for the designers of a product to understand intuitively the experiences of first-time users of their design, or what each user's learning curve may look like. User-centered design is based on an understanding of the user, their demands, priorities and experiences, and, when used, is known to lead to increased product usefulness and usability because it delivers satisfaction to the user.
On Aggression is a 1963 book by the ethologist Konrad Lorenz; it was translated into English in 1966. As he writes in the prologue, "the subject of this book is aggression, that is to say the fighting instinct in beast and man which is directed against members of the same species."
In human-computer interaction, the gulf of execution is the gap between a user's goal for action and the means to execute that goal. One of the primary goals of usability is to reduce this gap by removing roadblocks and steps that cause extra thinking and actions, which distract the user's attention from the intended task, interrupt the flow of his or her work, and decrease the chance of successfully completing the task. Similarly, there is a gulf of evaluation: the gap between an external stimulus and the moment a person understands what it means. Both phrases were first mentioned in Donald Norman's 1986 book User Centered System Design: New Perspectives on Human-Computer Interaction.
A checklist is a type of job aid used in repetitive tasks to reduce failure by compensating for potential limits of human memory and attention. Checklists are used both to ensure that safety-critical system preparations are carried out completely and in the correct order, and in less critical applications to ensure that no step of a procedure is left out; they help to ensure consistency and completeness in carrying out a task. A basic example is the "to do" list. A more advanced checklist would be a schedule, which lays out tasks to be done according to time of day or other factors, or a pre-flight checklist for an airliner, which should ensure a safe take-off.
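As a sketch of how a checklist can enforce completeness and ordering, the hypothetical Python class below refuses out-of-order steps and reports whether every step has been completed; the class name and step labels are illustrative only.

```python
class Checklist:
    """Enforce that steps are completed in order and that none is skipped."""

    def __init__(self, steps):
        self.steps = list(steps)
        self.done = 0   # index of the next step to complete

    def complete(self, step):
        if self.is_finished():
            raise ValueError("checklist already finished")
        if step != self.steps[self.done]:
            raise ValueError(f"expected {self.steps[self.done]!r}, got {step!r}")
        self.done += 1

    def is_finished(self):
        return self.done == len(self.steps)

preflight = Checklist(["flaps set", "trim set", "controls free"])
preflight.complete("flaps set")
preflight.complete("trim set")
print(preflight.is_finished())   # False: one step remains
```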
The Seven Sins of Memory: How the Mind Forgets and Remembers is a book by Daniel Schacter, former chair of Harvard University's Psychology Department and a leading memory researcher.
Internal validity is the extent to which a piece of evidence supports a claim about cause and effect, within the context of a particular study. It is one of the most important properties of scientific studies and is an important concept in reasoning about evidence more generally. Internal validity is determined by how well a study can rule out alternative explanations for its findings. It contrasts with external validity, the extent to which results can justify conclusions about other contexts. Both internal and external validity can be described using qualitative or quantitative forms of causal notation.
In user interface design, a mode is a distinct setting within a computer program or any physical machine interface, in which the same user input will produce results perceived as different from those it would produce in other settings. Modal interface components include the Caps Lock and Insert keys on the standard computer keyboard, both of which typically put the user's typing into a different mode after being pressed and return it to the regular mode after being pressed again.
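A minimal sketch of mode-dependent behavior, loosely modeled on a vi-style editor (the mode names and key bindings here are illustrative assumptions): the same keystroke yields different results depending on the current mode.

```python
def handle_key(key, mode, text):
    """Return (new_mode, new_text): the same key acts differently per mode."""
    if mode == "command":
        if key == "i":
            return "insert", text          # 'i' switches to insert mode
        if key == "x":
            return mode, text[:-1]         # 'x' deletes the last character
        return mode, text                  # other keys ignored in command mode
    else:  # insert mode
        if key == "\x1b":                  # Escape returns to command mode
            return "command", text
        return mode, text + key            # any other key is typed literally

mode, text = "command", "helloo"
mode, text = handle_key("x", mode, text)   # command mode: deletes -> "hello"
mode, text = handle_key("i", mode, text)   # switch to insert mode
mode, text = handle_key("x", mode, text)   # insert mode: types -> "hellox"
print(mode, text)                          # insert hellox
```

The hazard of modes follows directly from the sketch: pressing "x" in the wrong mode silently produces the opposite effect of the one intended.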
Pilot error generally refers to an accident in which an action or decision made by the pilot was the cause or a contributing factor, but it also includes the pilot's failure to make a correct decision or take proper action. Errors are intentional actions that fail to achieve their intended outcomes. The Chicago Convention defines the term "accident" as "an occurrence associated with the operation of an aircraft [...] in which [...] a person is fatally or seriously injured [...] except when the injuries are [...] inflicted by other persons." Hence the definition of "pilot error" does not include deliberate crashing.
Human error is an action that has been done but that was "not intended by the actor; not desired by a set of rules or an external observer; or that led the task or system outside its acceptable limits". Human error has been cited as a primary cause or contributing factor in disasters and accidents in industries as diverse as nuclear power, aviation, space exploration, and medicine. Prevention of human error is generally seen as a major contributor to the reliability and safety of (complex) systems. Human error is one of the many contributing causes of risk events.
Seven stages of action is a term coined by the usability consultant Donald Norman. The phrase appears in chapter two of his book The Design of Everyday Things, describing the psychology of a person performing a task.
Human factors are the physical or cognitive properties of individuals, or social behavior specific to humans, that influence the functioning of technological systems as well as human-environment equilibria. The safety of underwater diving operations can be improved by reducing the frequency of human error and the consequences when it does occur. Human error can be defined as an individual's deviation from acceptable or desirable practice which culminates in undesirable or unexpected results.
Dive safety is primarily a function of four factors: the environment, equipment, individual diver performance and dive team performance. The water is a harsh and alien environment which can impose severe physical and psychological stress on a diver. The remaining factors must be controlled and coordinated so the diver can overcome the stresses imposed by the underwater environment and work safely. Diving equipment is crucial because it provides life support to the diver, but the majority of dive accidents are caused by individual diver panic and an associated degradation of the individual diver's performance. - M.A. Blumenberg, 1996
Automation bias is the propensity for humans to favor suggestions from automated decision-making systems and to ignore contradictory information produced without automation, even if it is correct. Automation bias stems from the social psychology literature, which found a bias in human-human interaction showing that people assign more positive evaluations to decisions made by humans than to those of a neutral object. The same type of positivity bias has been found for human-automation interaction, where automated decisions are rated more positively than neutral ones. This has become a growing problem for decision-making as intensive care units, nuclear power plants, and aircraft cockpits have increasingly integrated computerized system monitors and decision aids, largely to factor out possible human error. Errors of automation bias tend to occur when decision-making is dependent on computers or other automated aids and the human is in an observing role but able to make decisions. Examples of automation bias range from urgent matters like flying a plane on automatic pilot to such mundane matters as the use of spell-checking programs.
A guess is a swift conclusion drawn from data directly at hand, and held as probable or tentative, while the person making the guess admittedly lacks material for a greater degree of certainty. A guess is also an unstable answer, as it is "always putative, fallible, open to further revision and interpretation, and validated against the horizon of possible meanings by showing that one interpretation is more probable than another in light of what we already know". In many of its uses, "the meaning of guessing is assumed as implicitly understood", and the term is therefore often used without being meticulously defined. Guessing may combine elements of deduction, induction, abduction, and the purely random selection of one choice from a set of given options. Guessing may also involve the intuition of the guesser, who may have a "gut feeling" about which answer is correct without necessarily being able to articulate a reason for having this feeling.