Analytic confidence is a rating employed by intelligence analysts to convey doubt to decision makers about a statement of estimative probability. The need for analytic confidence ratings arises from analysts' imperfect knowledge of a conceptual model. An analytic confidence rating pairs with a statement using a word of estimative probability to form a complete analytic statement. Scientific methods for determining analytic confidence remain in their infancy.
In an effort to apply more rigorous standards to National Intelligence Estimates, the National Intelligence Council includes explanations of the three levels of analytic confidence made in estimative statements. [1]
The beginnings of analytic confidence coincide with the cognitive psychology movement, especially psychological decision theory. [2] This branch of psychology did not set out to study analytic confidence as it pertains to intelligence reporting. Rather, advances in cognitive psychology established a groundwork for understanding well-calibrated confidence levels in decision making. [2]
Early attempts to explain analytic confidence focused on certainty forecasts, as opposed to the overall confidence the analyst had in the analysis itself. This highlights the degree of confusion among scholars about the difference between psychological and analytic confidence. [2] Analysts often lowered certainty statements when confronted with challenging analysis, instead of ascribing a level of analytic confidence to explain those concerns. By lowering certainty levels due to a lack of confidence, analysts risked misrepresenting the target.
The Intelligence Reform and Terrorism Prevention Act of 2004 establishes some guidelines for conveying analytic confidence in an intelligence product. The summary document states that each review should include, among other things, whether the products concerned were based on all sources of available intelligence, properly described the quality and reliability of underlying sources, properly caveated and expressed uncertainties or confidence in analytic judgments, and properly distinguished between underlying intelligence and the assumptions and judgments of analysts. [3]
Mercyhurst University students use the Peterson Table of Analytic Confidence Assessment to determine the level of analytic confidence in their estimative statements. The table outlines certain areas in the intelligence cycle important to determining analytic confidence. The key areas of the table include the use of a structured method, overall source reliability, source corroboration and agreement, level of expertise on the subject or topic, amount of peer collaboration, task complexity, and time pressure. [2]
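The factors in the Peterson table can be pictured as a simple scoring rubric. The sketch below is an illustrative assumption only: the factor names come from the table, but the 1-3 scale, equal weighting, and cutoffs are invented for illustration and are not the published assessment method.

```python
# Hypothetical rubric inspired by the Peterson Table of Analytic Confidence
# Assessment. The 1-3 scale, equal weights, and thresholds are illustrative
# assumptions, not the published scoring scheme.

FACTORS = [
    "structured_method",
    "source_reliability",
    "source_corroboration",
    "subject_expertise",
    "peer_collaboration",
    "task_complexity",   # would be scored inversely: higher complexity lowers it
    "time_pressure",     # likewise: more time pressure lowers the score
]

def confidence_level(scores: dict) -> str:
    """Map per-factor scores (1 = weakest, 3 = strongest) to a rating."""
    mean = sum(scores[f] for f in FACTORS) / len(FACTORS)
    if mean >= 2.5:
        return "high"
    if mean >= 1.5:
        return "moderate"
    return "low"

print(confidence_level({f: 2 for f in FACTORS}))  # "moderate"
```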
Psychometrics is a field of study within psychology concerned with the theory and technique of measurement. Psychometrics generally covers specialized fields within psychology and education devoted to testing, measurement, assessment, and related activities. Psychometrics is concerned with the objective measurement of latent constructs that cannot be directly observed. Examples of latent constructs include intelligence, introversion, mental disorders, and educational achievement. The levels of individuals on nonobservable latent variables are inferred through mathematical modeling based on what is observed from individuals' responses to items on tests and scales.
A cognitive bias is a systematic pattern of deviation from norm or rationality in judgment. Individuals create their own "subjective reality" from their perception of the input. An individual's construction of reality, not the objective input, may dictate their behavior in the world. Thus, cognitive biases may sometimes lead to perceptual distortion, inaccurate judgment, illogical interpretation, and irrationality.
Concept testing is the process of using surveys to evaluate consumer acceptance of a new product idea prior to the introduction of a product to the market. It is important not to confuse concept testing with advertising testing, brand testing and packaging testing, as is sometimes done. Concept testing focuses on the basic product idea, without the embellishments and puffery inherent in advertising.
Reliability engineering is a sub-discipline of systems engineering that emphasizes the ability of equipment to function without failure. Reliability is defined as the probability that a product, system, or service will perform its intended function adequately for a specified period of time, or will operate in a defined environment without failure. Reliability is closely related to availability, which is typically described as the ability of a component or system to function at a specified moment or interval of time.
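The "probability of performing for a specified period" can be made concrete with the exponential reliability model, a standard textbook case that assumes a constant failure rate λ; the numbers below are illustrative:

```python
import math

def reliability(failure_rate: float, t: float) -> float:
    """Survival probability to time t under a constant failure rate
    (exponential model): R(t) = exp(-lambda * t)."""
    return math.exp(-failure_rate * t)

# A component with lambda = 0.001 failures/hour over a 1000-hour mission:
print(round(reliability(0.001, 1000), 3))  # 0.368 (= e^-1)
```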
The overconfidence effect is a well-established bias in which a person's subjective confidence in their judgments is reliably greater than the objective accuracy of those judgments, especially when confidence is relatively high. Overconfidence is one example of a miscalibration of subjective probabilities. Throughout the research literature, overconfidence has been defined in three distinct ways: (1) overestimation of one's actual performance; (2) overplacement of one's performance relative to others; and (3) overprecision in expressing unwarranted certainty in the accuracy of one's beliefs.
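The first sense, overestimation, can be quantified as mean stated confidence minus objective accuracy over a set of judgments. The data below are invented for illustration:

```python
# Overestimation: mean subjective confidence minus actual hit rate.
# A positive value indicates overconfidence. Data are invented examples.

def overestimation(confidences, correct):
    """confidences: stated probabilities in [0, 1]; correct: 1 if right."""
    mean_conf = sum(confidences) / len(confidences)
    accuracy = sum(correct) / len(correct)
    return mean_conf - accuracy

conf = [0.9, 0.8, 0.95, 0.85]   # analyst's stated probabilities
hits = [1, 0, 1, 0]             # whether each judgment proved correct
print(overestimation(conf, hits))  # 0.375 -> overconfident
```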
National Intelligence Estimates (NIEs) are United States federal government documents that are the authoritative assessment of the Director of National Intelligence (DNI) on intelligence related to a particular national security issue. NIEs are produced by the National Intelligence Council and express the coordinated judgments of the United States Intelligence Community, the group of 18 U.S. intelligence agencies. NIEs are classified documents prepared for policymakers.
Intelligence analysis is the application of individual and collective cognitive methods to weigh data and test hypotheses within a secret socio-cultural context. The descriptions are drawn from what may only be available in the form of deliberately deceptive information; the analyst must correlate the similarities among deceptions and extract a common truth. Although its practice is found in its purest form inside national intelligence agencies, its methods are also applicable in fields such as business intelligence or competitive intelligence.
The analysis of competing hypotheses (ACH) is a methodology for evaluating multiple competing hypotheses for observed data. It was developed by Richards (Dick) J. Heuer, Jr., a 45-year veteran of the Central Intelligence Agency, in the 1970s for use by the Agency. ACH is used by analysts in various fields who make judgments that entail a high risk of error in reasoning. ACH aims to help an analyst overcome, or at least minimize, some of the cognitive limitations that make prescient intelligence analysis so difficult to achieve.
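The core of ACH is a matrix rating each piece of evidence against each hypothesis, with hypotheses then ranked by fewest inconsistencies, as Heuer's method emphasizes. The sketch below is minimal and uses invented placeholder hypotheses and ratings:

```python
# Minimal ACH-style scoring sketch. Each evidence item is rated against each
# hypothesis as consistent ("C"), inconsistent ("I"), or not applicable ("N").
# Hypotheses and ratings here are invented placeholders.

matrix = {
    "H1": ["C", "C", "I", "N"],
    "H2": ["C", "I", "I", "C"],
    "H3": ["C", "C", "C", "N"],
}

def rank_hypotheses(m):
    """Order hypotheses by inconsistency count, fewest first."""
    return sorted(m, key=lambda h: m[h].count("I"))

print(rank_hypotheses(matrix))  # ['H3', 'H1', 'H2']
```

Ranking by inconsistencies rather than confirmations is the key design point: evidence consistent with many hypotheses has little diagnostic value.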
Audit evidence is evidence obtained by auditors during a financial audit and recorded in the audit working papers.
In statistics, inter-rater reliability is the degree of agreement among independent observers who rate, code, or assess the same phenomenon.
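One standard statistic for this is Cohen's kappa, which corrects observed agreement between two raters for the agreement expected by chance, kappa = (p_o - p_e) / (1 - p_e). The ratings below are invented examples:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters over the same items."""
    n = len(rater_a)
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n  # observed agreement
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    labels = set(rater_a) | set(rater_b)
    p_e = sum((freq_a[l] / n) * (freq_b[l] / n) for l in labels)  # chance agreement
    return (p_o - p_e) / (1 - p_e)

a = ["yes", "yes", "no", "yes", "no", "no"]
b = ["yes", "no", "no", "yes", "no", "yes"]
print(round(cohens_kappa(a, b), 3))  # 0.333
```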
The wisdom of the crowd is the collective opinion of a diverse and independent group of individuals rather than that of a single expert. This process, while not new to the Information Age, has been pushed into the mainstream spotlight by social information sites such as Quora, Reddit, Stack Exchange, Wikipedia, Yahoo! Answers, and other web resources which rely on collective human knowledge. An explanation for this phenomenon is that there is idiosyncratic noise associated with each individual judgment, and taking the average over a large number of responses will go some way toward canceling the effect of this noise.
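The noise-cancellation explanation can be illustrated with a toy simulation: each guess is the true value plus idiosyncratic noise, and the crowd mean ends up closer to the truth than a typical individual. The numbers are simulated, not empirical:

```python
import random

random.seed(42)
true_value = 100.0
# 1000 individual guesses: true value plus independent Gaussian noise.
guesses = [true_value + random.gauss(0, 20) for _ in range(1000)]

crowd_error = abs(sum(guesses) / len(guesses) - true_value)
mean_individual_error = sum(abs(g - true_value) for g in guesses) / len(guesses)
print(crowd_error < mean_individual_error)  # averaging cancels the noise
```

With independent noise of standard deviation s, the crowd mean's error shrinks roughly as s divided by the square root of the crowd size, which is why size and independence both matter.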
Intelligence collection management is the process of managing and organizing the collection of intelligence from various sources. The collection department of an intelligence organization may attempt basic validation of what it collects, but it is not supposed to analyze its significance. There is debate in the U.S. intelligence community over the difference between validation and analysis; for example, the National Security Agency may try to interpret information when such interpretation is the job of another agency.
Standard-setting study is an official research study conducted by an organization that sponsors tests to determine a cutscore for the test. To be legally defensible in the US, in particular for high-stakes assessments, and to meet the Standards for Educational and Psychological Testing, a cutscore cannot be arbitrarily determined; it must be empirically justified. For example, the organization cannot merely decide that the cutscore will be 70% correct. Instead, a study is conducted to determine what score best differentiates the classifications of examinees, such as competent vs. incompetent. Such studies require substantial resources, involving a number of professionals, particularly those with a psychometric background. Standard-setting studies are for that reason impractical for regular classroom situations, yet standard setting is performed at every level of education, and multiple methods exist.
Words of estimative probability are terms used by intelligence analysts in the production of analytic reports to convey the likelihood of a future event occurring. A well-chosen WEP gives a decision maker a clear and unambiguous estimate upon which to base a decision. Ineffective WEPs are vague or misleading about the likelihood of an event. An ineffective WEP places the decision maker in the role of the analyst, increasing the likelihood of poor or snap decision making. Some intelligence and policy failures appear to be related to the imprecise use of estimative words.
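One way to make a WEP unambiguous is to tie each word to a nominal probability. The mapping below loosely follows Sherman Kent's classic scheme; the exact values and cutoffs vary between organizations, so treat them as illustrative:

```python
# Nominal probabilities loosely adapted from Sherman Kent's classic WEP
# scheme; actual organizational standards differ. Illustrative only.

KENT_WORDS = {
    "almost certain": 0.93,
    "probable": 0.75,
    "chances about even": 0.50,
    "probably not": 0.30,
    "almost certainly not": 0.07,
}

def wep(p: float) -> str:
    """Return the word whose nominal probability is closest to p."""
    return min(KENT_WORDS, key=lambda w: abs(KENT_WORDS[w] - p))

print(wep(0.8))   # "probable"
print(wep(0.15))  # "almost certainly not"
```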
The target-centric approach to intelligence is a method of intelligence analysis that Robert M. Clark introduced in his book "Intelligence Analysis: A Target-Centric Approach" in 2003 to offer an alternative methodology to the traditional intelligence cycle. Its goal is to redefine the intelligence process in such a way that all of the parts of the intelligence cycle come together as a network. It is a collaborative process where collectors, analysts and customers are integral, and information does not always flow linearly.
The Technique for human error-rate prediction (THERP) is a technique that is used in the field of Human Reliability Assessment (HRA) to evaluate the probability of human error occurring throughout the completion of a task. From such an analysis, some corrective measures could be taken to reduce the likelihood of errors occurring within a system. The overall goal of THERP is to apply and document probabilistic methodological analyses to increase safety during a given process. THERP is used in fields such as error identification, error quantification and error reduction.
Richards "Dick" J. Heuer, Jr. was a CIA veteran of 45 years, best known for his work on the analysis of competing hypotheses and his book Psychology of Intelligence Analysis. The former provides a methodology for overcoming intelligence biases, while the latter outlines how mental models and natural biases impede clear thinking and analysis. Throughout his career, he worked in collection operations, counterintelligence, intelligence analysis, and personnel security. In 2010 he co-authored a book with Randolph (Randy) H. Pherson titled Structured Analytic Techniques for Intelligence Analysis.
The heuristic-systematic model of information processing (HSM) is a widely recognized model by Shelly Chaiken that attempts to explain how people receive and process persuasive messages.
A superforecaster is a person who makes forecasts that can be shown by statistical means to have been consistently more accurate than the general public or experts. Superforecasters sometimes use modern analytical and statistical methodologies to augment estimates of base rates of events; research finds that such forecasters are typically more accurate than experts in the field who do not use analytical and statistical techniques, though this has been overstated in some sources. The term "superforecaster" is a trademark of Good Judgment Inc.
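One statistical means by which forecast accuracy is commonly assessed is the Brier score: the mean squared difference between forecast probability and outcome, where lower is better and 0 is perfect. The forecasts below are invented examples:

```python
# Brier score: mean squared error of probabilistic forecasts against binary
# outcomes (1 = event occurred). Lower is better; 0 is a perfect forecaster.

def brier_score(forecasts, outcomes):
    """forecasts: probabilities in [0, 1]; outcomes: 0 or 1."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

cautious = brier_score([0.6, 0.6, 0.6], [1, 1, 0])  # hedged forecasts
sharp    = brier_score([0.9, 0.9, 0.1], [1, 1, 0])  # confident and correct
print(sharp < cautious)  # True: sharper, well-calibrated forecasts score lower
```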