Collider (statistics)

In statistics and causal graphs, a variable is a collider when it is causally influenced by two or more variables. The name "collider" reflects the fact that, in graphical models, the arrowheads from the variables leading into the collider appear to "collide" at its node.[1] Colliders are sometimes also referred to as inverted forks.[2]

[Figure: SEM model of a collider]

The causal variables influencing the collider are themselves not necessarily associated. If they are non-adjacent (not directly connected by an edge), the collider is unshielded; otherwise, the collider is shielded and part of a triangle.[3]

A collider on a path blocks the association between the variables that influence it.[4][5][6] Thus, a collider does not by itself generate an unconditional association between the variables that determine it.

Conditioning on the collider, via regression analysis, stratification, experimental design, or sample selection based on values of the collider, creates a non-causal association between its causes X and Y (Berkson's paradox). In the terminology of causal graphs, conditioning on the collider opens the path between X and Y. This introduces bias when estimating the causal association between X and Y, potentially producing associations where there are none. Colliders can therefore undermine attempts to test causal theories.

Colliders are sometimes confused with confounders. Unlike colliders, confounders should be controlled for when estimating causal associations.

To detect and manage collider bias, scholars have made use of directed acyclic graphs.[7]

Randomization and quasi-experimental research designs do not overcome collider bias.[7]

Related Research Articles

Simpson's paradox: Error in statistical reasoning with groups

Simpson's paradox is a phenomenon in probability and statistics in which a trend appears in several groups of data but disappears or reverses when the groups are combined. This result is often encountered in social-science and medical-science statistics, and is particularly problematic when frequency data are unduly given causal interpretations. The paradox can be resolved when confounding variables and causal relations are appropriately addressed in the statistical modeling.
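
As an illustration with hypothetical counts (chosen only so that the reversal appears), a treatment can succeed more often within every stratum yet less often overall, because the two treatments are applied to strata of very different sizes.

```python
# (successes, trials) for two hypothetical treatments in two strata
data = {
    "easy cases": {"A": (9, 10),   "B": (80, 100)},
    "hard cases": {"A": (30, 100), "B": (2, 10)},
}

for stratum, arms in data.items():
    ra = arms["A"][0] / arms["A"][1]
    rb = arms["B"][0] / arms["B"][1]
    print(f"{stratum}: A={ra:.0%} B={rb:.0%}")        # A wins in each stratum

totals = {arm: (sum(data[s][arm][0] for s in data),
                sum(data[s][arm][1] for s in data)) for arm in ("A", "B")}
ra_all = totals["A"][0] / totals["A"][1]
rb_all = totals["B"][0] / totals["B"][1]
print(f"overall: A={ra_all:.0%} B={rb_all:.0%}")      # yet B wins overall
```

Here A is better in each stratum (90% vs 80%, 30% vs 20%) but worse overall, because A was mostly given the hard cases.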

A Bayesian network is a probabilistic graphical model that represents a set of variables and their conditional dependencies via a directed acyclic graph (DAG). While it is one of several forms of causal notation, causal networks are special cases of Bayesian networks. Bayesian networks are ideal for taking an event that occurred and predicting the likelihood that any one of several possible known causes was the contributing factor. For example, a Bayesian network could represent the probabilistic relationships between diseases and symptoms. Given symptoms, the network can be used to compute the probabilities of the presence of various diseases.
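
A minimal sketch of such an inference (all probabilities below are hypothetical) uses a two-node network Disease -> Symptom and computes the posterior probability of the disease given the symptom by Bayes' rule:

```python
# Hypothetical conditional probability tables for Disease -> Symptom
p_disease = 0.01          # prior P(D)
p_sym_given_d = 0.90      # P(S | D)
p_sym_given_not_d = 0.05  # P(S | not D)

# Bayes' rule: P(D | S) = P(S | D) * P(D) / P(S)
p_sym = p_sym_given_d * p_disease + p_sym_given_not_d * (1 - p_disease)
posterior = p_sym_given_d * p_disease / p_sym
print(round(posterior, 3))  # 0.154: still unlikely, given the low prior
```

In larger networks the same computation is carried out by message-passing algorithms rather than by hand.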

Directed acyclic graph: Directed graph with no directed cycles

In mathematics, particularly graph theory, and computer science, a directed acyclic graph (DAG) is a directed graph with no directed cycles. That is, it consists of vertices and edges, with each edge directed from one vertex to another, such that following those directions will never form a closed loop. A directed graph is a DAG if and only if it can be topologically ordered, by arranging the vertices as a linear ordering that is consistent with all edge directions. DAGs have numerous scientific and computational applications, ranging from biology to information science to computation (scheduling).
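
The link between acyclicity and topological orderability can be sketched with Kahn's algorithm (the first example below is this article's collider graph X -> Z <- Y; the function name is illustrative):

```python
from collections import deque

def topo_order(nodes, edges):
    """Kahn's algorithm: return a topological order, or None if a cycle exists."""
    indeg = {v: 0 for v in nodes}
    adj = {v: [] for v in nodes}
    for u, v in edges:
        adj[u].append(v)
        indeg[v] += 1
    queue = deque(v for v in nodes if indeg[v] == 0)
    order = []
    while queue:
        u = queue.popleft()
        order.append(u)
        for w in adj[u]:
            indeg[w] -= 1
            if indeg[w] == 0:
                queue.append(w)
    # Every node is emitted iff the graph has no directed cycle.
    return order if len(order) == len(nodes) else None

print(topo_order(["X", "Y", "Z"], [("X", "Z"), ("Y", "Z")]))  # ['X', 'Y', 'Z']
print(topo_order(["A", "B"], [("A", "B"), ("B", "A")]))       # None: cycle
```

A graph admits such an ordering exactly when it is a DAG, which is why the algorithm doubles as a cycle check.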

A graphical model or probabilistic graphical model (PGM) or structured probabilistic model is a probabilistic model for which a graph expresses the conditional dependence structure between random variables. They are commonly used in probability theory, statistics—particularly Bayesian statistics—and machine learning.

Spurious relationship: Apparent, but false, correlation between causally-independent variables

In statistics, a spurious relationship or spurious correlation is a mathematical relationship in which two or more events or variables are associated but not causally related, due to either coincidence or the presence of a certain third, unseen factor.

Belief propagation, also known as sum–product message passing, is a message-passing algorithm for performing inference on graphical models, such as Bayesian networks and Markov random fields. It calculates the marginal distribution for each unobserved node, conditional on any observed nodes. Belief propagation is commonly used in artificial intelligence and information theory, and has demonstrated empirical success in numerous applications, including low-density parity-check codes, turbo codes, free energy approximation, and satisfiability.

Trygve Haavelmo: Norwegian economist and econometrician

Trygve Magnus Haavelmo, born in Skedsmo, Norway, was an economist whose research interests centered on econometrics. He received the Nobel Memorial Prize in Economic Sciences in 1989.

Polytree

In mathematics, and more specifically in graph theory, a polytree is a directed acyclic graph whose underlying undirected graph is a tree. In other words, if we replace its directed edges with undirected edges, we obtain an undirected graph that is both connected and acyclic.

External validity is the validity of applying the conclusions of a scientific study outside the context of that study. In other words, it is the extent to which the results of a study can generalize or transport to other situations, people, stimuli, and times. Generalizability refers to the applicability of a predefined sample to a broader population while transportability refers to the applicability of one sample to another target population. In contrast, internal validity is the validity of conclusions drawn within the context of a particular study.

Confounding: Variable or factor in causal inference

In causal inference, a confounder is a variable that influences both the dependent variable and independent variable, causing a spurious association. Confounding is a causal concept, and as such, cannot be described in terms of correlations or associations. The existence of confounders is an important quantitative explanation why correlation does not imply causation. Some notations are explicitly designed to identify the existence, possible existence, or non-existence of confounders in causal relationships between elements of a system.

In statistics, ignorability is a feature of an experiment design whereby the method of data collection does not depend on the missing data. A missing data mechanism such as a treatment assignment or survey sampling strategy is "ignorable" if the missing data matrix, which indicates which variables are observed or missing, is independent of the missing data conditional on the observed data.

Causal model: Conceptual model in philosophy of science

In the philosophy of science, a causal model is a conceptual model that describes the causal mechanisms of a system. Several types of causal notation may be used in the development of a causal model. Causal models can improve study designs by providing clear rules for deciding which independent variables need to be included/controlled for.

Mediation (statistics): Statistical model

In statistics, a mediation model seeks to identify and explain the mechanism or process that underlies an observed relationship between an independent variable and a dependent variable via the inclusion of a third hypothetical variable, known as a mediator variable. Rather than a direct causal relationship between the independent variable and the dependent variable, which is often false, a mediation model proposes that the independent variable influences the mediator variable, which in turn influences the dependent variable. Thus, the mediator variable serves to clarify the nature of the relationship between the independent and dependent variables.

In causal models, controlling for a variable means binning data according to measured values of the variable. This is typically done so that the variable can no longer act as a confounder in, for example, an observational study or experiment.

James Robins

James M. Robins is an epidemiologist and biostatistician best known for advancing methods for drawing causal inferences from complex observational studies and randomized trials, particularly those in which the treatment varies with time. He is the 2013 recipient of the Nathan Mantel Award for lifetime achievement in statistics and epidemiology, and a recipient of the 2022 Rousseeuw Prize in Statistics, jointly with Miguel Hernán, Eric Tchetgen-Tchetgen, Andrea Rotnitzky and Thomas Richardson.

In the statistical analysis of observational data, propensity score matching (PSM) is a statistical matching technique that attempts to estimate the effect of a treatment, policy, or other intervention by accounting for the covariates that predict receiving the treatment. PSM attempts to reduce the bias due to confounding variables that could be found in an estimate of the treatment effect obtained from simply comparing outcomes among units that received the treatment versus those that did not. Paul R. Rosenbaum and Donald Rubin introduced the technique in 1983.

Causal inference is the process of determining the independent, actual effect of a particular phenomenon that is a component of a larger system. The main difference between causal inference and inference of association is that causal inference analyzes the response of an effect variable when a cause of the effect variable is changed. The study of why things occur is called etiology, and can be described using the language of scientific causal notation. Causal inference is said to provide the evidence of causality theorized by causal reasoning.

A graphoid is a set of statements of the form, "X is irrelevant to Y given that we know Z" where X, Y and Z are sets of variables. The notion of "irrelevance" and "given that we know" may obtain different interpretations, including probabilistic, relational and correlational, depending on the application. These interpretations share common properties that can be captured by paths in graphs. The theory of graphoids characterizes these properties in a finite set of axioms that are common to informational irrelevance and its graphical representations.

In statistics, econometrics, epidemiology, genetics and related disciplines, causal graphs are probabilistic graphical models used to encode assumptions about the data-generating process.

Causal analysis is the field of experimental design and statistical analysis pertaining to establishing cause and effect. Exploratory causal analysis (ECA), also known as data causality or causal discovery, is the use of statistical algorithms to infer associations in observed data sets that are potentially causal under strict assumptions. ECA is a type of causal inference distinct from causal modeling and treatment effects in randomized controlled trials. It is exploratory research that usually precedes more formal causal research, in the same way exploratory data analysis often precedes statistical hypothesis testing in data analysis.

References

  1. Hernán, Miguel A.; Robins, James M. (2010). Causal Inference. Chapman & Hall/CRC Monographs on Statistics & Applied Probability. CRC. p. 70. ISBN 978-1-4200-7616-5.
  2. Rohrer, Julia M. (2018-07-02). "Thinking Clearly About Correlations and Causation: Graphical Causal Models for Observational Data". PsyArXiv. doi:10.31234/osf.io/t3qub. hdl:21.11116/0000-0006-5734-E.
  3. Ali, R. Ayesha; Richardson, Thomas S.; Spirtes, Peter; Zhang, Jiji (2012). "Towards characterizing Markov equivalence classes for directed acyclic graphs with latent variables". Proceedings of the Twenty-First Conference on Uncertainty in Artificial Intelligence (UAI2006): 10–17. arXiv:1207.1365.
  4. Greenland, Sander; Pearl, Judea; Robins, James M. (January 1999). "Causal Diagrams for Epidemiologic Research" (PDF). Epidemiology. 10 (1): 37–48. doi:10.1097/00001648-199901000-00008. ISSN 1044-3983. OCLC 484244020. PMID 9888278.
  5. Pearl, Judea (1986). "Fusion, Propagation and Structuring in Belief Networks". Artificial Intelligence. 29 (3): 241–288. CiteSeerX 10.1.1.84.8016. doi:10.1016/0004-3702(86)90072-x.
  6. Pearl, Judea (1988). Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan Kaufmann.
  7. Schneider, Eric B. (2020). "Collider bias in economic history research" (PDF). Explorations in Economic History. 78: 101356. doi:10.1016/j.eeh.2020.101356. ISSN 0014-4983. Archived from the original on April 11, 2024.