Process tracing

Process tracing is a qualitative research method used to develop and test theories. [1] [2] [3] Collier (2011) defines it as the systematic examination of diagnostic evidence selected and analyzed in light of research questions and hypotheses posed by the investigator. Process tracing thus focuses on (complex) causal relationships between the independent variable(s) and the outcome of the dependent variable(s); it evaluates pre-existing hypotheses and discovers new ones. It is generally understood as a "within-case" method for drawing inferences on the basis of causal mechanisms, but it can also be used for idiographic research or small-N case studies. [4] [5] It has been used in the social sciences (such as in psychology [2] ), as well as in the natural sciences. [5]

Scholars who use process tracing evaluate the weight of evidence on the basis of the strength of tests (notably straw-in-the-wind tests, hoop tests, smoking-gun tests, and doubly decisive tests). [5] What matters is therefore not solely the quantity of observations, but their quality and character. [5] [6] By using Bayesian probability, it may be possible to make strong causal inferences from a small sliver of data through process tracing. [5] [7] As a result, process tracing is a prominent case study method. [8] Process tracing can be used to study one or a few cases in order to determine what changes have occurred over time within these cases and which causal mechanisms are responsible for those changes. [1]

Process tracing can be used both for inductive (theory-generating) and deductive (theory-testing) purposes, [5] and it can be divided into three variants. Although all three trace causal mechanisms, distinguishing among them is necessary in order to align research practice with the inferences each variant can support. [9] The three variants are "theory-testing process tracing," "theory-building process tracing," and "explaining-outcome process tracing." They differ in whether they are theory-centric or case-centric designs, whether they test or build theoretical causal mechanisms, how they understand the generality of causal mechanisms, and what inferences they make. [9] In theory-testing process tracing, the goal is to test existing theories and the causal mechanisms assumed therein. [9] [10] By contrast, theory-building process tracing involves constructing a theory about a causal mechanism that can be applied to a broader population of a particular phenomenon: [9] from empirical evidence, a theoretical explanation of the causal mechanism is developed. [10] Explaining-outcome process tracing aims neither to test nor to build a theoretical mechanism, but to find a satisfactory explanation for a given outcome; [9] this variant constructs a detailed narrative of the process through which a specific outcome or series of events came about. [10]

Process tracing differs from other qualitative analysis methods in its focus on how causal mechanisms work; other qualitative analysis methods focus on the correlation between the dependent and independent variables (Beach & Pedersen, 2012). Process tracing thus looks beyond the correlation of two variables.

In terms of theory testing, the process-tracing method works by presenting the observable implications (hypotheses) of a theory, as well as alternative explanations that are inconsistent with the theory. These observable implications and alternative explanations are based on theory-derived hypotheses and key events. [11] Once presented, the observable implications are tested empirically to see which can be observed and which cannot. [1] [12] It is also important to test whether alternative explanations are present. [11] Process tracing emphasizes the temporal sequence of events, and requires fine-grained case knowledge. [1]

For testing hypothesized theories, there are different types of requirements within a causal mechanism. A necessary requirement is one whose absence implies the absence of the rest of the mechanism: without it, the mechanism cannot produce the effect on the dependent variable. [1] A sufficient requirement is one whose presence confirms the existence of a possible mechanism. [1] Stephen Van Evera's influential typology of process-tracing tests distinguishes tests by whether passing them is necessary and/or sufficient for accepting a hypothesis: [5] [13] straw-in-the-wind tests are neither necessary nor sufficient (passing mildly supports a hypothesis, failing mildly weakens it); hoop tests are necessary but not sufficient (failing eliminates a hypothesis, but passing does not confirm it); smoking-gun tests are sufficient but not necessary (passing confirms a hypothesis, but failing does not eliminate it); and doubly decisive tests are both necessary and sufficient (passing confirms a hypothesis, failing eliminates it).
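The inferential logic of Van Evera's four tests can be sketched by encoding whether passing each test is necessary and/or sufficient for accepting a hypothesis. This is an illustrative sketch, not part of the cited literature; the function and test names are hypothetical:

```python
# Van Evera's test typology: each test is classified by whether
# passing it is (necessary, sufficient) for accepting a hypothesis.
TESTS = {
    "straw-in-the-wind": (False, False),  # weak evidence either way
    "hoop":              (True,  False),  # failing eliminates the hypothesis
    "smoking-gun":       (False, True),   # passing confirms the hypothesis
    "doubly-decisive":   (True,  True),   # passing confirms, failing eliminates
}

def interpret(test: str, passed: bool) -> str:
    """Return the inference licensed by a test result."""
    necessary, sufficient = TESTS[test]
    if passed and sufficient:
        return "hypothesis strongly confirmed"
    if not passed and necessary:
        return "hypothesis eliminated"
    # Straw-in-the-wind results, passed hoop tests, and failed
    # smoking-gun tests only shift the weight of evidence slightly.
    return "hypothesis weakly " + ("supported" if passed else "weakened")

print(interpret("hoop", False))        # → hypothesis eliminated
print(interpret("smoking-gun", True))  # → hypothesis strongly confirmed
```

The asymmetry is the point: a failed hoop test is decisive while a passed one is not, and vice versa for a smoking-gun test.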

Process tracing is often used to complement comparative case study methods. By tracing the causal process from the independent variable of interest to the dependent variable, it may be possible to rule out potentially intervening variables in imperfectly matched cases. This can create a stronger basis for attributing causal significance to the remaining independent variables. [15]

A limitation of process tracing is the problem of infinite regress. [16] [17] While some influential works by methods scholars have argued that the ability of process tracing to make causal claims is limited by low degrees of freedom, [18] methodologists widely reject the idea that the "degrees of freedom" problem applies to research that uses process tracing, given that qualitative research entails different logics than quantitative research (where scholars do need to be wary of degrees of freedom). [16] [5]

One advantage of process tracing over quantitative methods is the inferential leverage it provides. [1] In addition to aiding the uncovering and testing of causal mechanisms, process tracing contributes descriptive richness [1] and can reveal the contextual conditions within which processes take place. [19] Another important advantage is that process tracing can accommodate theoretical pluralism, meaning hypotheses or conceptual models with multiple independent and dependent variables and causal relationships; the method is therefore suitable for understanding inherent complexity (Kay & Baker, 2015). Its defining focus on how causal mechanisms work, rather than on mere correlation, is itself an advantage over other qualitative research methods.

By assigning probabilities to outcomes under specific conditions, scholars can use Bayesian rules in their process tracing to draw robust conclusions about the causes of outcomes. [20] [21] [5] [8] [22] [23] [7] For example, if a scholar's theory implies that a number of observable implications will occur under certain conditions, then the repeated occurrence of those outcomes under the theorized conditions lends strong support to the theory, because the observed outcomes would be improbable if the theory were false. [20] By using Bayesian probability, it may be possible to make strong causal inferences from a small sliver of data. [5] For example, a video recording of a person committing a bank robbery can be very strong evidence that a particular person committed the robbery while also ruling out other potential suspects, even though it is only a single piece of evidence. [5]
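The Bayesian logic behind a single highly diagnostic piece of evidence can be sketched with a minimal Bayes'-rule update. The numbers below are illustrative assumptions, not drawn from the cited works:

```python
def posterior(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """Bayes' rule: P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)]."""
    numerator = p_e_given_h * prior
    marginal = numerator + p_e_given_not_h * (1 - prior)
    return numerator / marginal

# One highly diagnostic observation (like the video recording in the
# bank-robbery example): very likely if the hypothesis is true,
# very unlikely otherwise.
p = posterior(prior=0.5, p_e_given_h=0.95, p_e_given_not_h=0.01)
print(round(p, 3))  # → 0.99
```

A single observation moves the posterior from 0.5 to about 0.99 here, which is the sense in which a "small sliver" of evidence can license a strong inference: what matters is how improbable the evidence would be if the hypothesis were false.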

Scholars can also use set theory in their process tracing. [24]
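One way to picture the set-theoretic view of causal conditions (an illustrative sketch, not the cited method itself): a condition is necessary for an outcome if the set of cases with the outcome is a subset of the cases with the condition, and sufficient if the converse holds. The case names are hypothetical:

```python
# Hypothetical cases exhibiting a condition C and an outcome Y.
condition = {"case1", "case2", "case3"}  # cases where C is present
outcome = {"case1", "case2"}             # cases where Y occurred

# Necessary: every case with the outcome also has the condition.
necessary = outcome <= condition
# Sufficient: every case with the condition also has the outcome.
sufficient = condition <= outcome

print(necessary, sufficient)  # → True False
```

Here C is necessary but not sufficient for Y: case3 has the condition without the outcome, so the condition set is not contained in the outcome set.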


References

  1. Collier, David (2011). "Understanding Process Tracing". PS: Political Science & Politics. 44 (4): 823–830. doi:10.1017/s1049096511001429. ISSN 1049-0965.
  2. Schulte-Mecklenbeck, M.; Kühberger, A.; Ranyard, R., eds. (2011). A Handbook of Process Tracing Methods for Decision Research: A Critical Review and User's Guide. New York: Taylor & Francis.
  3. Ford, J. Kevin; Schmitt, Neal; Schechtman, Susan L.; Hults, Brian M.; Doherty, Mary L. (1989). "Process Tracing Methods: Contributions, Problems, and Neglected Research Questions". Organizational Behavior and Human Decision Processes. 43 (1): 75–117. doi:10.1016/0749-5978(89)90059-9.
  4. Jacobs, Alan M.; Büthe, Tim; Arjona, Ana; Arriola, Leonardo R.; Bellin, Eva; Bennett, Andrew; Björkman, Lisa; Bleich, Erik; Elkins, Zachary; Fairfield, Tasha; Gaikwad, Nikhar (2021). "The Qualitative Transparency Deliberations: Insights and Implications". Perspectives on Politics. Supplementary materials, Pt 2: 171–208. doi:10.1017/S1537592720001164. ISSN 1537-5927. S2CID 232050726.
  5. Bennett, Andrew (2008). "Process Tracing: A Bayesian Perspective". In Box-Steffensmeier, Janet M.; Brady, Henry E.; Collier, David (eds.). The Oxford Handbook of Political Methodology. doi:10.1093/oxfordhb/9780199286546.001.0001. ISBN 978-0-19-928654-6.
  6. Gerring, John (2007). Case Study Research: Principles and Practices. Cambridge University Press. pp. 173, 180. ISBN 978-0-521-85928-8.
  7. Fairfield, Tasha; Charman, Andrew E. (2022). Social Inquiry and Bayesian Inference. Cambridge University Press. ISBN 978-1-108-42164-5.
  8. Mahoney, James (2016). "Mechanisms, Bayesianism, and process tracing". New Political Economy. 21 (5): 493–499. doi:10.1080/13563467.2016.1201803. ISSN 1356-3467. S2CID 156167903.
  9. Beach, Derek; Pedersen, Rasmus Brun (2013). Process-Tracing Methods: Foundations and Guidelines. Ann Arbor: The University of Michigan Press. ISBN 978-0-472-05189-2.
  10. Kay, Adrian; Baker, Phillip (2015). "What Can Causal Process Tracing Offer to Policy Studies? A Review of the Literature". Policy Studies Journal. 43 (1): 1–21. doi:10.1111/psj.12092.
  11. Ricks, Jacob I.; Liu, Amy H. (2018). "Process-Tracing Research Designs: A Practical Guide". PS: Political Science & Politics. 51 (4): 842–846. doi:10.1017/S1049096518000975. ISSN 1049-0965.
  12. King, Gary; Keohane, Robert O.; Verba, Sidney. Designing Social Inquiry.
  13. Van Evera, Stephen (1997). Guide to Methods for Students of Political Science. Cornell University Press. ISBN 978-0-8014-5444-8. JSTOR 10.7591/j.ctvrf8bm7.
  14. Mahoney, James (2012). "The Logic of Process Tracing Tests in the Social Sciences". Sociological Methods & Research. 41 (4): 570–597. doi:10.1177/0049124112437709. ISSN 0049-1241.
  15. George, Alexander L.; Bennett, Andrew (2005). Case Studies and Theory Development in the Social Sciences. London: MIT Press. pp. 214–215. ISBN 0-262-57222-2.
  16. Bennett, Andrew (2010). "Process Tracing and Causal Inference". In Rethinking Social Inquiry: Diverse Tools, Shared Standards. Rowman & Littlefield Publishers. ISBN 978-1-4422-0343-3. OCLC 787870333.
  17. Verghese, Ajay (2020). Who's Afraid of Infinite Regress? A Process-Tracing Exercise. Rochester, NY. SSRN 3484930.
  18. King, Gary; Keohane, Robert O.; Verba, Sidney (1994). Designing Social Inquiry. Princeton, New Jersey: Princeton University Press. p. 86. doi:10.1515/9781400821211. ISBN 978-1-4008-2121-1.
  19. Beach, Derek (2017). "Process Tracing Methods in the Social Sciences". Oxford Research Encyclopedia of Politics. doi:10.1093/acrefore/9780190228637.013.176. ISBN 978-0-19-022863-7.
  20. Humphreys, Macartan; Jacobs, Alan M. (2015). "Mixing Methods: A Bayesian Approach". American Political Science Review. 109 (4): 654. doi:10.1017/s0003055415000453. ISSN 0003-0554. S2CID 1846974.
  21. Fairfield, Tasha; Charman, Andrew E. (2017). "Explicit Bayesian Analysis for Process Tracing: Guidelines, Opportunities, and Caveats". Political Analysis. 25 (3): 363–380. doi:10.1017/pan.2017.14. ISSN 1047-1987. S2CID 8862619.
  22. Bennett, Andrew (2014). "Disciplining Our Conjectures". In Bennett, Andrew; Checkel, Jeffrey T. (eds.). Process Tracing: From Metaphor to Analytic Tool. Strategies for Social Inquiry. Cambridge University Press. pp. 276–298. ISBN 978-1-107-04452-4.
  23. Bennett, Andrew; Charman, Andrew E.; Fairfield, Tasha (2021). "Understanding Bayesianism: Fundamentals for Process Tracers". Political Analysis. 30 (2): 298–305. doi:10.1017/pan.2021.23. ISSN 1047-1987.
  24. Barrenechea, Rodrigo; Mahoney, James (2019). "A Set-Theoretic Approach to Bayesian Process Tracing". Sociological Methods & Research. 48 (3): 451–484. doi:10.1177/0049124117701489. ISSN 0049-1241. S2CID 126255778.
