Analytic confidence

Cover of a National Intelligence Estimate

Analytic confidence is a rating employed by intelligence analysts to convey to decision makers their degree of doubt about a statement of estimative probability. The need for analytic confidence ratings arises from analysts' imperfect knowledge of a conceptual model. An analytic confidence rating pairs with a statement using a word of estimative probability to form a complete analytic statement. Scientific methods for determining analytic confidence remain in their infancy.
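As a minimal illustration of this pairing, the sketch below combines a judgment, a word of estimative probability, and an analytic confidence rating into one complete analytic statement. It is an assumption for illustration only; the WEP vocabulary and confidence levels shown are placeholders, not an official intelligence-community format.

```python
# Minimal sketch: pairing a word of estimative probability (WEP) with an
# analytic confidence rating to form a complete analytic statement.
# The WEP terms and confidence levels below are illustrative placeholders,
# not an official standard.
from dataclasses import dataclass

WEPS = {"remote", "unlikely", "even chance", "likely", "almost certain"}
CONFIDENCE_LEVELS = {"low", "moderate", "high"}

@dataclass
class AnalyticStatement:
    judgment: str     # the substantive estimate
    wep: str          # likelihood of the event itself
    confidence: str   # the analyst's confidence in the analysis behind it

    def __post_init__(self):
        if self.wep not in WEPS or self.confidence not in CONFIDENCE_LEVELS:
            raise ValueError("unrecognized WEP or confidence level")

    def __str__(self) -> str:
        return f"It is {self.wep} that {self.judgment} ({self.confidence} confidence)."

print(AnalyticStatement(
    judgment="the program will resume within a year",
    wep="likely",
    confidence="moderate",
))
# -> It is likely that the program will resume within a year (moderate confidence).
```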

Levels of analytic confidence in national security reports

In an effort to apply more rigorous standards to National Intelligence Estimates, the National Intelligence Council includes explanations of the three levels of analytic confidence (high, moderate, and low) that accompany its estimative statements. [1]

Origins and early history

The beginnings of analytic confidence coincide with the rise of the cognitive psychology movement, especially psychological decision theory. [2] This branch of psychology did not set out to study analytic confidence as it pertains to intelligence reporting. Rather, advances in cognitive psychology laid the groundwork for understanding well-calibrated confidence levels in decision making. [2]

Early attempts to explain analytic confidence focused on certainty forecasts rather than on the analyst's overall confidence in the analysis itself, which highlights the degree of confusion among scholars about the difference between psychological and analytic confidence. [2] Analysts often softened certainty statements when confronted with challenging analysis instead of assigning a level of analytic confidence to explain those concerns. Lowering certainty levels because of a lack of confidence created a dangerous possibility of misrepresenting the target.

Intelligence Reform and Terrorism Prevention Act of 2004

The Intelligence Reform and Terrorism Prevention Act of 2004 establishes guidelines for conveying analytic confidence in an intelligence product. The summary document states that each review should assess, among other things, whether the products concerned were based on all sources of available intelligence, properly described the quality and reliability of underlying sources, properly caveated and expressed uncertainties or confidence in analytic judgments, and properly distinguished between the underlying intelligence and the assumptions and judgments of analysts. [3]

Mercyhurst College

A visual representation of the Peterson Table

Mercyhurst College students use the Peterson Table of Analytic Confidence Assessment to determine the level of analytic confidence in their estimative statements. The table outlines certain areas in the intelligence cycle important to determining analytic confidence. The key areas of the table include the use of a structured method, overall source reliability, source corroboration and agreement, level of expertise on the subject or topic, amount of peer collaboration, task complexity, and time pressure. [2]
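The sketch below is only a rough illustration of how such a table of factors might be turned into a rating. The 1-to-3 scale, equal weighting, and cutoff values are assumptions for the sketch, not the scales defined in the Peterson Table itself.

```python
# Rough illustration only: the actual Peterson Table defines its own scales and
# descriptors. The 1-to-3 scale, equal weighting, and cutoffs here are assumptions.

FACTORS = [
    "structured_method",      # use of a structured analytic method
    "source_reliability",     # overall reliability of the sources used
    "source_corroboration",   # corroboration and agreement among sources
    "subject_expertise",      # analyst expertise on the subject or topic
    "peer_collaboration",     # amount of collaboration with other analysts
    "task_complexity",        # score 3 = simple task (favourable to confidence)
    "time_pressure",          # score 3 = ample time (favourable to confidence)
]

def assess_confidence(scores: dict[str, int]) -> str:
    """Map factor scores (1 = weak support, 3 = strong support) to a rating."""
    missing = [f for f in FACTORS if f not in scores]
    if missing:
        raise ValueError(f"missing factor scores: {missing}")
    average = sum(scores[f] for f in FACTORS) / len(FACTORS)
    if average >= 2.5:
        return "high"
    if average >= 1.75:
        return "moderate"
    return "low"

print(assess_confidence({
    "structured_method": 3, "source_reliability": 2, "source_corroboration": 2,
    "subject_expertise": 3, "peer_collaboration": 2, "task_complexity": 2,
    "time_pressure": 1,
}))  # -> moderate
```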

Related Research Articles

Analysis is the process of breaking a complex topic or substance into smaller parts in order to gain a better understanding of it. The technique has been applied in the study of mathematics and logic since before Aristotle, though analysis as a formal concept is a relatively recent development.

A cognitive bias is a systematic pattern of deviation from norm or rationality in judgment. Individuals create their own "subjective reality" from their perception of the input. An individual's construction of reality, not the objective input, may dictate their behavior in the world. Thus, cognitive biases may sometimes lead to perceptual distortion, inaccurate judgment, illogical interpretation, or what is broadly called irrationality.

Concept testing is the process of using surveys to evaluate consumer acceptance of a new product idea prior to the introduction of a product to the market. It is important not to confuse concept testing with advertising testing, brand testing and packaging testing, as is sometimes done. Concept testing focuses on the basic product idea, without the embellishments and puffery inherent in advertising.

Failure mode and effects analysis is the process of reviewing as many components, assemblies, and subsystems as possible to identify potential failure modes in a system and their causes and effects. For each component, the failure modes and their resulting effects on the rest of the system are recorded in a specific FMEA worksheet. There are numerous variations of such worksheets. An FMEA can be a qualitative analysis, but may be put on a quantitative basis when mathematical failure rate models are combined with a statistical failure mode ratio database. It was one of the first highly structured, systematic techniques for failure analysis. It was developed by reliability engineers in the late 1950s to study problems that might arise from malfunctions of military systems. An FMEA is often the first step of a system reliability study.
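As one hedged example of putting an FMEA on a quantitative basis, the sketch below computes a failure mode criticality number in the style of FMECA criticality analysis, combining a part failure rate with a failure mode ratio. The choice of formulation and all numbers are assumptions for illustration, not taken from the article.

```python
# Sketch of one quantitative FMEA/FMECA-style calculation: a failure mode
# criticality number combining a part failure rate with a failure mode ratio.
# The formulation (C_m = beta * alpha * lambda_p * t) and the example values
# are assumptions for illustration.

def mode_criticality(part_failure_rate: float, mode_ratio: float,
                     effect_probability: float, operating_time: float) -> float:
    """Criticality of one failure mode over the stated operating time."""
    return effect_probability * mode_ratio * part_failure_rate * operating_time

# e.g. a pump with failure rate 2e-5 per hour, where 40% of its failures are
# "seal leak" and a leak causes loss of function half the time, over 1000 hours:
print(round(mode_criticality(part_failure_rate=2e-5, mode_ratio=0.4,
                             effect_probability=0.5, operating_time=1000.0), 6))
# -> 0.004
```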

Reliability engineering is a sub-discipline of systems engineering that emphasizes the ability of equipment to function without failure. Reliability describes the ability of a system or component to function under stated conditions for a specified period of time. Reliability is closely related to availability, which is typically described as the ability of a component or system to function at a specified moment or interval of time.
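As a small illustration of these two notions, the sketch below uses the standard constant-failure-rate model for reliability and the steady-state MTBF/MTTR formula for availability; the example numbers are made up.

```python
# Illustrative sketch: reliability under a constant failure rate, and
# steady-state availability from MTBF and MTTR. Example numbers are made up.
import math

def reliability(failure_rate: float, t: float) -> float:
    """R(t) = exp(-lambda * t): probability of surviving to time t
    when time-to-failure is exponentially distributed."""
    return math.exp(-failure_rate * t)

def availability(mtbf: float, mttr: float) -> float:
    """Steady-state availability = MTBF / (MTBF + MTTR)."""
    return mtbf / (mtbf + mttr)

print(round(reliability(failure_rate=1e-4, t=1000.0), 4))  # 0.9048
print(availability(mtbf=900.0, mttr=100.0))                # 0.9
```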

The overconfidence effect is a well-established bias in which a person's subjective confidence in his or her judgments is reliably greater than the objective accuracy of those judgments, especially when confidence is relatively high. Overconfidence is one example of a miscalibration of subjective probabilities. Throughout the research literature, overconfidence has been defined in three distinct ways: (1) overestimation of one's actual performance; (2) overplacement of one's performance relative to others; and (3) overprecision in expressing unwarranted certainty in the accuracy of one's beliefs.
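A simple way to see the miscalibration the effect describes is to bin judgments by stated confidence and compare against observed accuracy, as the sketch below does on invented data.

```python
# Sketch: comparing stated confidence with observed accuracy. Accuracy that
# sits persistently below stated confidence at high confidence levels is the
# pattern the overconfidence effect describes. The data below are invented.

judgments = [  # (stated confidence, was the judgment correct?)
    (0.9, True), (0.9, False), (0.9, True), (0.9, False), (0.9, True),
    (0.7, True), (0.7, True), (0.7, False), (0.7, False), (0.7, True),
]

by_level: dict[float, list[bool]] = {}
for confidence, correct in judgments:
    by_level.setdefault(confidence, []).append(correct)

for confidence, results in sorted(by_level.items()):
    accuracy = sum(results) / len(results)
    print(f"stated {confidence:.0%} -> observed {accuracy:.0%} "
          f"(gap {confidence - accuracy:+.0%})")
# stated 70% -> observed 60% (gap +10%)
# stated 90% -> observed 60% (gap +30%)
```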

National Intelligence Estimates (NIEs) are United States federal government documents that are the authoritative assessment of the Director of National Intelligence (DNI) on intelligence related to a particular national security issue. NIEs are produced by the National Intelligence Council and express the coordinated judgments of the United States Intelligence Community, the group of 18 U.S. intelligence agencies. NIEs are classified documents prepared for policymakers.

Intelligence analysis is the application of individual and collective cognitive methods to weigh data and test hypotheses within a secret socio-cultural context. The descriptions are drawn from what may only be available in the form of deliberately deceptive information; the analyst must correlate the similarities among deceptions and extract a common truth. Although its practice is found in its purest form inside national intelligence agencies, its methods are also applicable in fields such as business intelligence or competitive intelligence.

The analysis of competing hypotheses (ACH) is a methodology for evaluating multiple competing hypotheses for observed data. It was developed by Richards (Dick) J. Heuer, Jr., a 45-year veteran of the Central Intelligence Agency, in the 1970s for use by the Agency. ACH is used by analysts in various fields who make judgments that entail a high risk of error in reasoning. ACH aims to help an analyst overcome, or at least minimize, some of the cognitive limitations that make prescient intelligence analysis so difficult to achieve.
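The sketch below is a much-simplified illustration of the idea at the core of ACH: rate each piece of evidence against each hypothesis and favor the hypothesis with the least inconsistent evidence. The hypotheses, evidence, and ratings are invented, and the full method adds steps (evidence weighting, diagnosticity, sensitivity analysis) not shown here.

```python
# Simplified ACH-style matrix: rate each piece of evidence against each
# hypothesis, then tally inconsistencies. The hypothesis with the LEAST
# inconsistent evidence is the least refuted. All entries below are invented.

# Ratings: "C" consistent, "I" inconsistent, "N" neutral / not applicable.
matrix = {
    "E1: signal intercepted":        {"H1": "C", "H2": "I"},
    "E2: no movement observed":      {"H1": "I", "H2": "C"},
    "E3: procurement activity seen": {"H1": "C", "H2": "I"},
}

def inconsistency_scores(matrix: dict) -> dict:
    scores: dict[str, int] = {}
    for ratings in matrix.values():
        for hypothesis, rating in ratings.items():
            scores[hypothesis] = scores.get(hypothesis, 0) + (rating == "I")
    return scores

scores = inconsistency_scores(matrix)
print(scores)                       # {'H1': 1, 'H2': 2}
print(min(scores, key=scores.get))  # H1 is the least refuted hypothesis
```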

In statistics, inter-rater reliability is the degree of agreement among independent observers who rate, code, or assess the same phenomenon.
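For two raters coding the same items, the sketch below computes raw percent agreement and Cohen's kappa, which corrects for chance agreement; the ratings are invented.

```python
# Sketch: two common agreement measures for two raters coding the same items.
from collections import Counter

def percent_agreement(a: list, b: list) -> float:
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a: list, b: list) -> float:
    """Chance-corrected agreement: (p_o - p_e) / (1 - p_e)."""
    n = len(a)
    p_o = percent_agreement(a, b)                   # observed agreement
    counts_a, counts_b = Counter(a), Counter(b)
    p_e = sum(counts_a[k] * counts_b[k] for k in set(a) | set(b)) / (n * n)
    return (p_o - p_e) / (1 - p_e)

rater1 = ["yes", "yes", "no", "yes", "no", "no",  "yes", "no"]
rater2 = ["yes", "no",  "no", "yes", "no", "yes", "yes", "no"]
print(percent_agreement(rater1, rater2))       # 0.75
print(round(cohens_kappa(rater1, rater2), 2))  # 0.5
```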

Credit analysis is the method by which one calculates the creditworthiness of a business or organization; in other words, it is the evaluation of a company's ability to honor its financial obligations. The audited financial statements of a large company might be analyzed when it issues or has issued bonds, or a bank may analyze the financial statements of a small business before making or renewing a commercial loan. The term refers to either case, whether the business is large or small. A credit analyst is the finance professional undertaking this role.

Intelligence collection management is the process of managing and organizing the collection of intelligence from various sources. The collection department of an intelligence organization may attempt basic validation of what it collects but is not supposed to analyze its significance. There is debate in the U.S. intelligence community on the difference between validation and analysis, where the National Security Agency may try to interpret information when such interpretation is the job of another agency.

Words of estimative probability are terms used by intelligence analysts in the production of analytic reports to convey the likelihood of a future event occurring. A well-chosen WEP gives a decision maker a clear and unambiguous estimate upon which to base a decision. Ineffective WEPs are vague or misleading about the likelihood of an event. An ineffective WEP places the decision maker in the role of the analyst, increasing the likelihood of poor or snap decision making. Some intelligence and policy failures appear to be related to the imprecise use of estimative words.
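The sketch below maps WEPs to numeric likelihood bands to show why term choice matters. The specific terms and ranges are assumptions for illustration, since published scales (for example Sherman Kent's chart or ICD 203) differ in their exact values.

```python
# Illustrative only: WEPs mapped to probability bands. Published scales differ
# in their exact terms and ranges, so these bands are assumptions for the sketch.

WEP_RANGES = {            # (low, high) probability bounds
    "remote":         (0.01, 0.10),
    "unlikely":       (0.10, 0.35),
    "even chance":    (0.40, 0.60),
    "likely":         (0.60, 0.85),
    "almost certain": (0.90, 0.99),
}

def wep_for(probability: float) -> str:
    """Return the first WEP whose band contains the given probability."""
    for wep, (low, high) in WEP_RANGES.items():
        if low <= probability <= high:
            return wep
    return "no listed term (falls outside the illustrative bands)"

print(wep_for(0.75))  # likely
print(wep_for(0.38))  # no listed term (falls outside the illustrative bands)
```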

The target-centric approach to intelligence is a method of intelligence analysis that Robert M. Clark introduced in his book "Intelligence Analysis: A Target-Centric Approach" in 2003 to offer an alternative methodology to the traditional intelligence cycle. Its goal is to redefine the intelligence process in such a way that all of the parts of the intelligence cycle come together as a network. It is a collaborative process where collectors, analysts and customers are integral, and information does not always flow linearly.

Human Cognitive Reliability Correlation (HCR) is a technique used in the field of Human Reliability Assessment (HRA) to evaluate the probability of a human error occurring throughout the completion of a specific task. From such analyses, measures can then be taken to reduce the likelihood of errors occurring within a system and therefore improve the overall level of safety. There are three primary reasons for conducting an HRA: error identification, error quantification and error reduction. As there are a number of techniques used for such purposes, they can be split into one of two classifications: first generation techniques and second generation techniques. First generation techniques work on the basis of the simple dichotomy of ‘fits/doesn’t fit’ in matching an error situation in context with related error identification and quantification, while second generation techniques are more theory-based in their assessment and quantification of errors. HRA techniques have been utilised in a range of industries, including the healthcare, engineering, nuclear, transportation and business sectors; each technique has varying uses within different disciplines.

Richards "Dick" J. Heuer, Jr. was a CIA veteran of 45 years, best known for his work on the analysis of competing hypotheses and his book Psychology of Intelligence Analysis. The former provides a methodology for overcoming intelligence biases, while the latter outlines how mental models and natural biases impede clear thinking and analysis. Throughout his career, he worked in collection operations, counterintelligence, intelligence analysis and personnel security. In 2010 he co-authored a book with Randolph (Randy) H. Pherson titled Structured Analytic Techniques for Intelligence Analysis.

The heuristic-systematic model of information processing (HSM) is a widely recognized model by Shelly Chaiken that attempts to explain how people receive and process persuasive messages. The model states that individuals can process messages in one of two ways: heuristically or systematically. Whereas systematic processing entails careful and deliberative processing of a message, heuristic processing entails the use of simplifying decision rules or ‘heuristics’ to quickly assess the message content. The guiding belief with this model is that individuals are more apt to minimize their use of cognitive resources, thus affecting the intake and processing of messages. HSM predicts that processing type will influence the extent to which a person is persuaded or exhibits lasting attitude change. HSM is quite similar to the elaboration likelihood model, or ELM. Both models were predominantly developed in the early to mid-1980s and share many of the same concepts and ideas.

Analytical quality control, commonly shortened to AQC, refers to all those processes and procedures designed to ensure that the results of laboratory analysis are consistent, comparable, accurate and within specified limits of precision. Constituents submitted to the analytical laboratory must be accurately described to avoid faulty interpretations, approximations, or incorrect results. The qualitative and quantitative data generated from the laboratory can then be used for decision making. In the chemical sense, quantitative analysis refers to the measurement of the amount or concentration of an element or chemical compound in a matrix that differs from the element or compound. Fields such as industry, medicine, and law enforcement can make use of AQC.

Cognitive bias mitigation is the prevention and reduction of the negative effects of cognitive biases – unconscious, automatic influences on human judgment and decision making that reliably produce reasoning errors.

A superforecaster is a person who makes forecasts that can be shown by statistical means to be consistently more accurate than the general public or experts. Superforecasters sometimes use modern analytical and statistical methodologies to augment estimates of base rates of events; research finds that such forecasters are typically more accurate than experts in the field who do not use analytical and statistical techniques.
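One simple way to "augment a base rate", as described above, is a Bayesian update that combines the base rate with a likelihood ratio summarizing case-specific evidence. The sketch below is an illustration under that assumption, not a description of any particular forecaster's method.

```python
# Sketch (assumption, not a specific forecasting methodology): updating a base
# rate with a likelihood ratio for case-specific evidence via Bayes' rule.

def bayes_update(base_rate: float, likelihood_ratio: float) -> float:
    """Convert the base rate to odds, apply the likelihood ratio, convert back."""
    prior_odds = base_rate / (1.0 - base_rate)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1.0 + posterior_odds)

# A 5% base rate combined with evidence four times as likely under the event:
print(round(bayes_update(base_rate=0.05, likelihood_ratio=4.0), 3))  # 0.174
```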

References