Randomized experiment

Flowchart of four phases (enrollment, intervention allocation, follow-up, and data analysis) of a parallel randomized trial of two groups, modified from the CONSORT 2010 Statement

In science, randomized experiments are the experiments that provide the most reliable and valid statistical estimates of treatment effects. Randomization-based inference is especially important in experimental design and in survey sampling.



In the statistical theory of design of experiments, randomization involves randomly allocating the experimental units across the treatment groups. For example, if an experiment compares a new drug against a standard drug, then the patients should be allocated to either the new drug or to the standard drug control using randomization.
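The allocation step described above can be sketched in a few lines of Python. This is an illustrative sketch, not a clinical randomization protocol; the function name and the drug labels are hypothetical.

```python
import random

def randomize(units, seed=None):
    """Randomly allocate experimental units to two treatment arms.

    Shuffles the units and splits the shuffled list in half, so each
    unit has an equal chance of receiving either treatment.
    """
    rng = random.Random(seed)
    shuffled = units[:]
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return {"new_drug": shuffled[:half], "standard_drug": shuffled[half:]}

patients = [f"patient_{i}" for i in range(20)]
groups = randomize(patients, seed=42)
```

With a fixed seed the allocation is reproducible, which is useful for auditing; in practice the seed would be concealed from investigators to preserve allocation concealment.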

Randomized experimentation is not haphazard. Randomization reduces bias by equalising other factors that have not been explicitly accounted for in the experimental design (according to the law of large numbers). Randomization also produces ignorable designs, which are valuable in model-based statistical inference, especially Bayesian or likelihood-based. In the design of experiments, the simplest design for comparing treatments is the "completely randomized design". Some "restriction on randomization" can occur with blocking and experiments that have hard-to-change factors; additional restrictions on randomization can occur when a full randomization is infeasible or when it is desirable to reduce the variance of estimators of selected effects.
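One common "restriction on randomization" mentioned above is blocking: units are grouped into homogeneous blocks and randomization happens separately within each block, balancing treatment counts inside every block. A minimal sketch, with hypothetical site names and labels:

```python
import random

def blocked_randomization(units_by_block, treatments, seed=None):
    """Randomize within blocks: each block gets its own shuffled
    sequence of treatment labels, so treatments are balanced per block."""
    rng = random.Random(seed)
    assignment = {}
    for block, units in units_by_block.items():
        # Repeat the treatment list enough times to cover the block,
        # then shuffle that restricted sequence.
        reps = (len(units) + len(treatments) - 1) // len(treatments)
        labels = (treatments * reps)[:len(units)]
        rng.shuffle(labels)
        for unit, label in zip(units, labels):
            assignment[unit] = (block, label)
    return assignment

blocks = {"site_A": ["a1", "a2", "a3", "a4"],
          "site_B": ["b1", "b2", "b3", "b4"]}
assign = blocked_randomization(blocks, ["treatment", "control"], seed=7)
```

Unlike the completely randomized design, this guarantees that each site contributes equally many treated and control units, which reduces the variance of the estimated treatment effect when blocks differ systematically.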

Randomization of treatment in clinical trials poses ethical problems. In some cases, randomization reduces the therapeutic options for both physician and patient, and so randomization requires clinical equipoise regarding the treatments.

Online randomized controlled experiments

Web sites can run randomized controlled experiments[2] to create a feedback loop.[3] There are key differences between offline experimentation and experiments run on the web.[3][4]


History

A controlled experiment appears to have been suggested in the Old Testament's Book of Daniel. King Nebuchadnezzar proposed that some Israelites eat "a daily amount of food and wine from the king's table." Daniel preferred a vegetarian diet, but the official was concerned that the king would "see you looking worse than the other young men your age" and would "then have my head because of you." Daniel then proposed the following controlled experiment: "Test your servants for ten days. Give us nothing but vegetables to eat and water to drink. Then compare our appearance with that of the young men who eat the royal food, and treat your servants in accordance with what you see." (Daniel 1:12–13)[8][9]

Randomized experiments were institutionalized in psychology and education in the late 1800s, following the invention of randomized experiments by C. S. Peirce.[10][11][12][13] Outside of psychology and education, randomized experiments were popularized by R. A. Fisher in his book Statistical Methods for Research Workers, which also introduced additional principles of experimental design.

Statistical interpretation

The Rubin Causal Model provides a common way to describe a randomized experiment. While the Rubin Causal Model provides a framework for defining the causal parameters (i.e., the effects of a randomized treatment on an outcome), the analysis of experiments can take a number of forms. Most commonly, randomized experiments are analyzed using ANOVA, Student's t-test, regression analysis, or a similar statistical test.
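As an illustration of the simplest such analysis, a two-sample t statistic for the difference in mean outcomes between arms can be computed with the Python standard library. The data below are made up for demonstration; this sketch uses Welch's form of the statistic, which does not assume equal variances.

```python
import math
import statistics

def welch_t(sample_a, sample_b):
    """Welch's two-sample t statistic and its approximate degrees of
    freedom for the difference in means of two independent groups."""
    ma, mb = statistics.mean(sample_a), statistics.mean(sample_b)
    va, vb = statistics.variance(sample_a), statistics.variance(sample_b)
    na, nb = len(sample_a), len(sample_b)
    se2 = va / na + vb / nb                      # squared standard error
    t = (ma - mb) / math.sqrt(se2)
    # Welch-Satterthwaite approximation for the degrees of freedom
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

treated = [5.1, 4.9, 6.2, 5.8, 5.5, 6.0]   # hypothetical outcomes
control = [4.2, 4.0, 4.8, 4.4, 4.6, 4.1]
t, df = welch_t(treated, control)
```

The resulting t would then be compared against the t distribution with df degrees of freedom; in practice a statistics package such as SciPy or R reports the p-value directly.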

Empirical evidence that randomization makes a difference

Empirically, differences between randomized and non-randomized studies,[14] and between adequately and inadequately randomized trials,[15][16] have been difficult to detect.


Related Research Articles

Biostatistics is the development and application of statistical methods to a wide range of topics in biology. It encompasses the design of biological experiments, the collection and analysis of data from those experiments, and the interpretation of the results.

Design of experiments

The design of experiments is the design of any task that aims to describe and explain the variation of information under conditions that are hypothesized to reflect the variation. The term is generally associated with experiments in which the design introduces conditions that directly affect the variation, but may also refer to the design of quasi-experiments, in which natural conditions that influence the variation are selected for observation.

Statistical inference

The theory of statistics provides a basis for the whole range of techniques, in both study design and data analysis, that are used within applications of statistics. The theory covers approaches to statistical-decision problems and to statistical inference, and the actions and deductions that satisfy the basic principles stated for these different approaches. Within a given approach, statistical theory gives ways of comparing statistical procedures; it can find a best possible procedure within a given context for given statistical problems, or can provide guidance on the choice between alternative procedures.

Randomization is the process of making something random; what this involves varies with context.

Randomized controlled trial

A randomized controlled trial is a type of scientific experiment or intervention study that aims to reduce certain sources of bias when testing the effectiveness of new treatments; this is accomplished by randomly allocating subjects to two or more groups, treating them differently, and then comparing them with respect to a measured response. One group—the experimental group—receives the intervention being assessed, while the other—usually called the control group—receives an alternative treatment, such as a placebo or no intervention. The groups are monitored under conditions of the trial design to determine the effectiveness of the experimental intervention, and efficacy is assessed in comparison to the control. There may be more than one treatment group or more than one control group.

Experimental psychology refers to work done by those who apply experimental methods to psychological study and the processes that underlie it. Experimental psychologists employ human participants and animal subjects to study a great many topics, including sensation and perception, memory, cognition, learning, motivation, emotion, developmental processes, social psychology, and the neural substrates of all of these.

Psychophysics quantitatively investigates the relationship between physical stimuli and the sensations and perceptions they produce. Psychophysics has been described as "the scientific study of the relation between stimulus and sensation" or, more completely, as "the analysis of perceptual processes by studying the effect on a subject's experience or behaviour of systematically varying the properties of a stimulus along one or more physical dimensions".

In the design of experiments, treatments are applied to experimental units in a treatment group. In comparative experiments, members of a control group receive a standard treatment, a placebo, or no treatment at all. There may be more than one treatment group, more than one control group, or both.

Random assignment or random placement is an experimental technique for assigning human participants or animal subjects to different groups in an experiment using randomization, such as by a chance procedure or a random number generator. This ensures that each participant or subject has an equal chance of being placed in any group. Random assignment of participants helps to ensure that any differences between and within the groups are not systematic at the outset of the experiment. Thus, any differences between groups recorded at the end of the experiment can be more confidently attributed to the experimental procedures or treatment.

Confounding

In statistics, a confounder is a variable that influences both the dependent variable and independent variable, causing a spurious association. Confounding is a causal concept, and as such, cannot be described in terms of correlations or associations.

This timeline of the history of the scientific method shows an overview of the development of the scientific method up to the present time. For a detailed account, see History of the scientific method.

In fields such as epidemiology, social sciences, psychology and statistics, an observational study draws inferences from a sample to a population where the independent variable is not under the control of the researcher because of ethical concerns or logistical constraints. One common observational study is about the possible effect of a treatment on subjects, where the assignment of subjects into a treated group versus a control group is outside the control of the investigator. This is in contrast with experiments, such as randomized controlled trials, where each subject is randomly assigned to a treated group or a control group.

A/B testing

A/B testing is a user experience research methodology. A/B tests consist of a randomized experiment with two variants, A and B. It includes application of statistical hypothesis testing or "two-sample hypothesis testing" as used in the field of statistics. A/B testing is a way to compare two versions of a single variable, typically by testing a subject's response to variant A against variant B, and determining which of the two variants is more effective.
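The "two-sample hypothesis testing" used in A/B testing often reduces to comparing two conversion rates. A minimal sketch of the pooled two-proportion z statistic, using invented traffic numbers:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Pooled two-proportion z statistic comparing the conversion
    rates of variants A and B in an A/B test."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)      # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical 50/50 split: 200/2000 conversions on A, 260/2000 on B.
z = two_proportion_z(200, 2000, 260, 2000)
```

A |z| above roughly 1.96 corresponds to significance at the 5% level for a two-sided test; production experimentation platforms add corrections for multiple testing and sequential peeking on top of this basic comparison.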


A quasi-experiment is an empirical interventional study used to estimate the causal impact of an intervention on target population without random assignment. Quasi-experimental research shares similarities with the traditional experimental design or randomized controlled trial, but it specifically lacks the element of random assignment to treatment or control. Instead, quasi-experimental designs typically allow the researcher to control the assignment to the treatment condition, but using some criterion other than random assignment.

Repeated measures design is a research design that involves multiple measures of the same variable taken on the same or matched subjects either under different conditions or over two or more time periods. For instance, repeated measurements are collected in a longitudinal study in which change over time is assessed.

An N of 1 trial is a clinical trial in which a single patient is the entire trial, a single case study. A trial in which random allocation can be used to determine the order in which an experimental and a control intervention are given to a patient is an N of 1 randomized controlled trial. The order of experimental and control interventions can also be fixed by the researcher.

Oscar Kempthorne was a British statistician and geneticist known for his research on randomization-analysis and the design of experiments, which had wide influence on research in agriculture, genetics, and other areas of science.

Causal analysis is the field of experimental design and statistics pertaining to establishing cause and effect. Typically it involves establishing four elements: correlation, sequence in time, a plausible physical or information-theoretical mechanism for an observed effect to follow from a possible cause, and eliminating the possibility of common and alternative ("special") causes. Such analysis usually involves one or more artificial or natural experiments.


  1. Schulz KF, Altman DG, Moher D; for the CONSORT Group (2010). "CONSORT 2010 Statement: updated guidelines for reporting parallel group randomised trials". BMJ. 340: c332. doi:10.1136/bmj.c332. PMC 2844940. PMID 20332509.
  2. Kohavi, Ron; Longbotham, Roger (2015). "Online Controlled Experiments and A/B Tests". In Sammut, Claude; Webb, Geoff (eds.). Encyclopedia of Machine Learning and Data Mining. Springer. pp. to appear.
  3. Kohavi, Ron; Longbotham, Roger; Sommerfield, Dan; Henne, Randal M. (2009). "Controlled experiments on the web: survey and practical guide". Data Mining and Knowledge Discovery. 18 (1): 140–181. doi:10.1007/s10618-008-0114-1. ISSN 1384-5810.
  4. Kohavi, Ron; Deng, Alex; Frasca, Brian; Longbotham, Roger; Walker, Toby; Xu, Ya (2012). "Trustworthy Online Controlled Experiments: Five Puzzling Outcomes Explained". Proceedings of the 18th ACM SIGKDD Conference on Knowledge Discovery and Data Mining.
  5. Kohavi, Ron; Deng, Alex; Frasca, Brian; Walker, Toby; Xu, Ya; Pohlmann, Nils (2013). "Online Controlled Experiments at Large Scale". Proceedings of the 19th ACM SIGKDD Conference on Knowledge Discovery and Data Mining. Chicago, Illinois, USA: ACM. pp. 1168–1176. doi:10.1145/2487575.2488217.
  6. Kohavi, Ron; Deng, Alex; Longbotham, Roger; Xu, Ya (2014). "Seven Rules of Thumb for Web Site Experimenters". Proceedings of the 20th ACM SIGKDD Conference on Knowledge Discovery and Data Mining. New York, New York, USA: ACM. pp. 1857–1866. doi:10.1145/2623330.2623341.
  7. Deng, Alex; Xu, Ya; Kohavi, Ron; Walker, Toby (2013). "Improving the Sensitivity of Online Controlled Experiments by Utilizing Pre-Experiment Data". WSDM 2013: Sixth ACM International Conference on Web Search and Data Mining.
  8. Neuhauser, D; Diaz, M (2004). "Daniel: using the Bible to teach quality improvement methods". Quality and Safety in Health Care. 13 (2): 153–155. doi:10.1136/qshc.2003.009480. PMC 1743807. PMID 15069225.
  9. Angrist, Joshua; Pischke, Jörn-Steffen (2014). Mastering 'Metrics: The Path from Cause to Effect. Princeton University Press. p. 31.
  10. Peirce, Charles Sanders; Jastrow, Joseph (1885). "On Small Differences in Sensation". Memoirs of the National Academy of Sciences. 3: 73–83. http://psychclassics.yorku.ca/Peirce/small-diffs.htm
  11. Hacking, Ian (September 1988). "Telepathy: Origins of Randomization in Experimental Design". Isis. 79 (3): 427–451. doi:10.1086/354775. JSTOR 234674. MR 1013489.
  12. Stigler, Stephen M. (November 1992). "A Historical View of Statistical Concepts in Psychology and Educational Research". American Journal of Education. 101 (1): 60–70. doi:10.1086/444032.
  13. Dehue, Trudy (December 1997). "Deception, Efficiency, and Random Groups: Psychology and the Gradual Origination of the Random Group Design". Isis. 88 (4): 653–673. doi:10.1086/383850. PMID 9519574.
  14. Anglemyer A, Horvath HT, Bero L (April 2014). "Healthcare outcomes assessed with observational study designs compared with those assessed in randomized trials". Cochrane Database Syst Rev. 4 (4): MR000034. doi:10.1002/14651858.MR000034.pub2. PMID 24782322.
  15. Odgaard-Jensen J, Vist G, et al. (April 2011). "Randomisation to protect against selection bias in healthcare trials". Cochrane Database Syst Rev (4): MR000012. doi:10.1002/14651858.MR000012.pub3. PMC 7150228. PMID 21491415.
  16. Howick J, Mebius A (2014). "In search of justification for the unpredictability paradox". Trials. 15: 480. doi:10.1186/1745-6215-15-480. PMC 4295227. PMID 25490908.