Selection bias

Selection bias is the bias introduced by the selection of individuals, groups, or data for analysis in such a way that proper randomization is not achieved, thereby failing to ensure that the sample obtained is representative of the population intended to be analyzed. [1] It is sometimes referred to as the selection effect. The phrase "selection bias" most often refers to the distortion of a statistical analysis, resulting from the method of collecting samples. If the selection bias is not taken into account, then some conclusions of the study may be false.

Types of bias

Sampling bias

Sampling bias is systematic error due to a non-random sample of a population, [2] causing some members of the population to be less likely to be included than others and resulting in a biased sample: a statistical sample of a population (or non-human factors) in which not all members are equally likely to be represented. [3] It is mostly classified as a subtype of selection bias, [4] sometimes specifically termed sample selection bias, [5] [6] [7] but some classify it as a separate type of bias. [8]

A distinction of sampling bias (albeit not a universally accepted one) is that it undermines the external validity of a test (the ability of its results to be generalized to the rest of the population), while selection bias mainly addresses internal validity for differences or similarities found in the sample at hand. In this sense, errors occurring in the process of gathering the sample or cohort cause sampling bias, while errors in any process thereafter cause selection bias.

Examples of sampling bias include self-selection; pre-screening of trial participants; discounting trial subjects or tests that did not run to completion; migration bias, in which subjects who have recently moved into or out of the study area are excluded; length-time bias, in which slowly developing disease with a better prognosis is disproportionately detected; and lead-time bias, in which disease is diagnosed earlier in study participants than in comparison populations, although the average course of the disease is the same.
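The effect of self-selection on an estimate can be illustrated with a small simulation (not from the source; the scenario and all numbers are hypothetical). A subgroup with a higher value of the measured trait also volunteers more often, so the sample mean drifts away from the population mean:

```python
import random

random.seed(0)

# Hypothetical population: a trait (say, weekly exercise hours) is higher
# for a small "keen" subgroup, which also volunteers far more readily.
population = [(random.gauss(2.0, 1.0), False) for _ in range(9000)] + \
             [(random.gauss(6.0, 1.0), True) for _ in range(1000)]

true_mean = sum(v for v, _ in population) / len(population)

# Self-selection: the keen subgroup enters the sample 5x as often.
sample = [v for v, keen in population
          if random.random() < (0.50 if keen else 0.10)]
biased_mean = sum(sample) / len(sample)

print(f"population mean: {true_mean:.2f}")   # about 2.4
print(f"sample mean:     {biased_mean:.2f}") # noticeably higher
```

Because the volunteers are over-represented roughly fivefold, no amount of re-running the survey with the same recruitment mechanism removes the gap; only changing how the sample is drawn does.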

Time interval

Exposure

Data

Studies

Attrition

Attrition bias is a kind of selection bias caused by attrition (loss of participants), [13] discounting trial subjects/tests that did not run to completion. It is closely related to survivorship bias, where only the subjects that "survived" a process are included in the analysis, and to failure bias, where only the subjects that "failed" a process are included. It encompasses dropout, nonresponse (a lower response rate), withdrawal, and protocol deviation. It gives biased results where attrition is unequal with regard to exposure and/or outcome. For example, in a test of a dieting program, the researcher may simply reject everyone who drops out of the trial, even though most of those who drop out are those for whom the program was not working. Differential loss of subjects in the intervention and comparison groups may change the characteristics of these groups and their outcomes irrespective of the studied intervention. [13]
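The dieting example above can be sketched numerically (an illustrative simulation, not from the source; the effect sizes and dropout rates are hypothetical). If participants for whom the program fails drop out more often, averaging only over completers inflates the apparent effect:

```python
import random

random.seed(1)

# Hypothetical trial: the true average effect is 2 kg lost, with wide
# individual variation, including some participants who gain weight.
effects = [random.gauss(2.0, 3.0) for _ in range(2000)]
true_effect = sum(effects) / len(effects)

# Attrition: those for whom the program works (positive effect) usually
# stay; those for whom it fails usually drop out and are discarded.
completers = [e for e in effects
              if random.random() < (0.9 if e > 0 else 0.3)]
completers_effect = sum(completers) / len(completers)

print(f"true average effect:      {true_effect:.2f} kg")
print(f"completers-only estimate: {completers_effect:.2f} kg")  # inflated
```

The completers-only average overstates the effect even though no individual measurement is wrong; the bias comes entirely from which measurements survive to be averaged.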

Loss to follow-up is another form of attrition bias, mainly occurring in medical studies conducted over a lengthy time period. Non-response or retention bias can be influenced by a number of both tangible and intangible factors, such as wealth, education, altruism, and initial understanding of the study and its requirements. [14] Researchers may also be unable to conduct follow-up contact because of inadequate identifying information and contact details collected during the initial recruitment and research phase. [15]

Observer selection

Philosopher Nick Bostrom has argued that data are filtered not only by study design and measurement, but by the necessary precondition that there has to be someone doing a study. In situations where the existence of the observer or the study is correlated with the data, observation selection effects occur, and anthropic reasoning is required. [16]

An example is the past impact event record of Earth: if large impacts cause mass extinctions and ecological disruptions precluding the evolution of intelligent observers for long periods, no one will observe any evidence of large impacts in the recent past (since they would have prevented intelligent observers from evolving). Hence there is a potential bias in the impact record of Earth. [17] Astronomical existential risks might similarly be underestimated due to selection bias, and an anthropic correction has to be introduced. [18]

Volunteer bias

Self-selection bias, or volunteer bias, in studies poses a further threat to validity, as volunteer participants may have intrinsically different characteristics from the target population of the study. [19] Studies have shown that volunteers tend to come from a higher socio-economic background than non-volunteers. [20] Furthermore, another study shows that women are more likely than men to volunteer for studies. Volunteer bias is evident throughout the study life-cycle, from recruitment to follow-ups. More generally, volunteer response can be attributed to individual altruism, a desire for approval, a personal relation to the study topic, and other reasons. [20] [14] As with most instances of bias, a common mitigation for volunteer bias is an increased sample size.

Mitigation

In the general case, selection biases cannot be overcome with statistical analysis of existing data alone, though Heckman correction may be used in special cases. An assessment of the degree of selection bias can be made by examining correlations between exogenous (background) variables and a treatment indicator. However, in regression models it is the correlation between unobserved determinants of the outcome and unobserved determinants of selection into the sample that biases estimates, and this correlation between unobservables cannot be directly assessed by the observed determinants of treatment. [21]

When data are selected for fitting or forecasting purposes, a coalitional game can be set up so that a fitting or forecast accuracy function is defined on every subset of the data variables.

Selection bias is closely related to:

See also


References

  1. Dictionary of Cancer Terms → selection bias. Retrieved on September 23, 2009.
  2. Medical Dictionary → 'Sampling Bias'. Retrieved on September 23, 2009.
  3. TheFreeDictionary → biased sample. Retrieved on September 23, 2009. Site in turn cites: Mosby's Medical Dictionary, 8th edition.
  4. Dictionary of Cancer Terms → Selection Bias. Retrieved on September 23, 2009.
  5. Ards, Sheila; Chung, Chanjin; Myers, Samuel L. (1998). "The effects of sample selection bias on racial differences in child abuse reporting". Child Abuse & Neglect. 22 (2): 103–115. doi: 10.1016/S0145-2134(97)00131-2 . PMID   9504213.
  6. Cortes, Corinna; Mohri, Mehryar; Riley, Michael; Rostamizadeh, Afshin (2008). "Sample Selection Bias Correction Theory". Algorithmic Learning Theory (PDF). Lecture Notes in Computer Science. Vol. 5254. pp. 38–53. arXiv: 0805.2775 . CiteSeerX   10.1.1.144.4478 . doi:10.1007/978-3-540-87987-9_8. ISBN   978-3-540-87986-2. S2CID   842488.
  7. Cortes, Corinna; Mohri, Mehryar (2014). "Domain adaptation and sample bias correction theory and algorithm for regression" (PDF). Theoretical Computer Science. 519: 103–126. CiteSeerX   10.1.1.367.6899 . doi:10.1016/j.tcs.2013.09.027.
  8. Fadem, Barbara (2009). Behavioral Science. Lippincott Williams & Wilkins. p. 262. ISBN   978-0-7817-8257-9.
  9. Feinstein AR; Horwitz RI (November 1978). "A critique of the statistical evidence associating estrogens with endometrial cancer". Cancer Res. 38 (11 Pt 2): 4001–5. PMID 698947.
  10. Tamim H; Monfared AA; LeLorier J (March 2007). "Application of lag-time into exposure definitions to control for protopathic bias". Pharmacoepidemiol Drug Saf. 16 (3): 250–8. doi:10.1002/pds.1360. PMID   17245804. S2CID   25648490.
  11. Matthew R. Weir (2005). Hypertension (Key Diseases) (Acp Key Diseases Series). Philadelphia, Pa: American College of Physicians. p. 159. ISBN   978-1-930513-58-7.
  12. Kruskal, William H. (1960). "Some Remarks on Wild Observations". Technometrics. 2 (1): 1–3. doi:10.1080/00401706.1960.10489875.
  13. Jüni, P.; Egger, Matthias (2005). "Empirical evidence of attrition bias in clinical trials". International Journal of Epidemiology. 34 (1): 87–88. doi: 10.1093/ije/dyh406 . PMID 15649954.
  14. Jordan, Sue; Watkins, Alan; Storey, Mel; Allen, Steven J.; Brooks, Caroline J.; Garaiova, Iveta; Heaven, Martin L.; Jones, Ruth; Plummer, Sue F.; Russell, Ian T.; Thornton, Catherine A. (2013-07-09). "Volunteer Bias in Recruitment, Retention, and Blood Sample Donation in a Randomised Controlled Trial Involving Mothers and Their Children at Six Months and Two Years: A Longitudinal Analysis". PLOS ONE. 8 (7): e67912. Bibcode:2013PLoSO...867912J. doi: 10.1371/journal.pone.0067912 . ISSN 1932-6203. PMC 3706448. PMID 23874465.
  15. Small, W. P. (1967-05-06). "Lost to Follow-Up". The Lancet. Originally published as Volume 1, Issue 7497. 289 (7497): 997–999. doi:10.1016/S0140-6736(67)92377-X. ISSN   0140-6736. PMID   4164620. S2CID   27683727.
  16. Bostrom, Nick (2002). Anthropic Bias: Observation Selection Effects in Science and Philosophy. New York: Routledge. ISBN   978-0-415-93858-7.
  17. Ćirković, M. M.; Sandberg, A.; Bostrom, N. (2010). "Anthropic Shadow: Observation Selection Effects and Human Extinction Risks". Risk Analysis. 30 (10): 1495–506. Bibcode:2010RiskA..30.1495C. doi:10.1111/j.1539-6924.2010.01460.x. PMID   20626690. S2CID   6485564.
  18. Tegmark, M.; Bostrom, N. (2005). "Astrophysics: Is a doomsday catastrophe likely?". Nature. 438 (7069): 754. Bibcode:2005Natur.438..754T. doi: 10.1038/438754a . PMID   16341005. S2CID   4390013.
  19. Tripepi, Giovanni; Jager, Kitty J.; Dekker, Friedo W.; Zoccali, Carmine (2010). "Selection Bias and Information Bias in Clinical Research". Nephron Clinical Practice. 115 (2): c94–c99. doi: 10.1159/000312871 . ISSN   1660-2110. PMID   20407272.
  20. "Volunteer bias". Catalog of Bias. 2017-11-17. Retrieved 2020-10-29.
  21. Heckman, J. J. (1979). "Sample Selection Bias as a Specification Error". Econometrica. 47 (1): 153–161. doi:10.2307/1912352. JSTOR   1912352.