External validity

External validity is the validity of applying the conclusions of a scientific study outside the context of that study. [1] In other words, it is the extent to which the results of a study can be generalized to and across other situations, people, stimuli, and times. [2] In contrast, internal validity is the validity of conclusions drawn within the context of a particular study. Because general conclusions are almost always a goal in research, external validity is an important property of any study. Mathematical analysis of external validity concerns determining whether generalization across heterogeneous populations is feasible, and devising statistical and computational methods that produce valid generalizations. [3]

Threats

"A threat to external validity is an explanation of how you might be wrong in making a generalization from the findings of a particular study." [4] In most cases, generalizability is limited when the effect of one factor (i.e. the independent variable) depends on other factors. Therefore, all threats to external validity can be described as statistical interactions. [5] Some examples include:

Note that a study's external validity is limited by its internal validity. If a causal inference made within a study is invalid, then generalizations of that inference to other contexts will also be invalid.

Cook and Campbell [6] made the crucial distinction between generalizing to some population and generalizing across subpopulations defined by different levels of some background factor. Lynch has argued that it is almost never possible to generalize to meaningful populations except as a snapshot of history, but it is possible to test the degree to which the effect of some cause on some dependent variable generalizes across subpopulations that vary in some background factor. That requires a test of whether the treatment effect being investigated is moderated by interactions with one or more background factors. [5] [7]

Disarming threats

Whereas enumerating threats to validity may help researchers avoid unwarranted generalizations, many of those threats can be disarmed, or neutralized in a systematic way, so as to enable a valid generalization. Specifically, experimental findings from one population can be "re-processed", or "re-calibrated", so as to circumvent population differences and produce valid generalizations in a second population, where experiments cannot be performed. Pearl and Bareinboim [3] classified generalization problems into two categories: (1) those that lend themselves to valid re-calibration, and (2) those where external validity is theoretically impossible. Using graph-based calculus, [8] they derived a necessary and sufficient condition for a problem instance to enable a valid generalization, and devised algorithms that automatically produce the needed re-calibration whenever one exists. [9] This reduces the external validity problem to an exercise in graph theory, and has led some philosophers to conclude that the problem is now solved. [10]
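
In the simplest instance of such re-calibration (a sketch, assuming the two populations differ only in the distribution of a single pre-treatment factor Z that accounts for all variation in the effect), the transport formula of Pearl and Bareinboim [3] takes the form:

```latex
% P   : distribution in the source population, where the experiment was run
% P^* : distribution in the target population
% Z is assumed to capture all differences relevant to the effect of X on Y.
P^{*}(y \mid \mathrm{do}(x)) = \sum_{z} P(y \mid \mathrm{do}(x), z)\, P^{*}(z)
```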

An important variant of the external validity problem deals with selection bias, also known as sampling bias—that is, bias created when studies are conducted on non-representative samples of the intended population. For example, if a clinical trial is conducted on college students, an investigator may wish to know whether the results generalize to the entire population, where attributes such as age, education, and income differ substantially from those of a typical student. The graph-based method of Bareinboim and Pearl identifies conditions under which sample selection bias can be circumvented and, when these conditions are met, the method constructs an unbiased estimator of the average causal effect in the entire population. The main difference between generalization from improperly sampled studies and generalization across disparate populations lies in the fact that disparities among populations are usually caused by preexisting factors, such as age or ethnicity, whereas selection bias is often caused by post-treatment conditions, for example, patients dropping out of the study, or patients selected by severity of injury. When selection is governed by post-treatment factors, unconventional re-calibration methods are required to ensure bias-free estimation, and these methods are readily obtained from the problem's graph. [11] [12]

Examples

If age is judged to be a major factor causing the treatment effect to vary from individual to individual, then age differences between the sampled students and the general population would lead to a biased estimate of the average treatment effect in that population. Such bias can, however, be corrected by a simple re-weighting procedure: we take the age-specific effect in the student subpopulation and compute its average using the age distribution in the general population. This gives an unbiased estimate of the average treatment effect in the population. If, on the other hand, the relevant factor that distinguishes the study sample from the general population is itself affected by the treatment, then a different re-weighting scheme must be invoked. Calling this factor Z, we again average the z-specific effect of X on Y in the experimental sample, but now we weight it by the "causal effect" of X on Z. In other words, the new weight is the proportion of units attaining level Z = z had treatment X = x been administered to the entire population. This interventional probability, often written P(Z = z | do(X = x)), [13] can sometimes be estimated from observational studies in the general population.
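
A minimal sketch of the first re-weighting in code (all numbers are hypothetical; in practice the age-specific effects would come from the experimental study on students, and the age distribution from data on the target population, such as a census):

```python
# Age-specific average treatment effects estimated in the student sample
# (hypothetical values).
age_specific_effect = {"18-25": 4.0, "26-40": 2.5, "41-65": 1.0}

# Age distribution in the general (target) population; weights sum to 1.
target_age_distribution = {"18-25": 0.15, "26-40": 0.35, "41-65": 0.50}

# Transport step for a pre-treatment factor: average the age-specific
# effects using the target population's age distribution as weights.
transported_effect = sum(age_specific_effect[z] * target_age_distribution[z]
                         for z in age_specific_effect)
print(f"estimated average treatment effect in target population: "
      f"{transported_effect:.3f}")  # 4.0*0.15 + 2.5*0.35 + 1.0*0.50 = 1.975
```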

A typical example of this nature occurs when Z is a mediator between the treatment and outcome. For instance, the treatment may be a cholesterol-reducing drug, Z may be cholesterol level, and Y life expectancy. Here, Z is both affected by the treatment and a major factor in determining the outcome, Y. Suppose that subjects selected for the experimental study tend to have higher cholesterol levels than is typical in the general population. To estimate the average effect of the drug on survival in the entire population, we first compute the z-specific treatment effect in the experimental study, and then average it using P(Z = z | do(X = x)) as a weighting function. The estimate obtained will be bias-free even when Z and Y are confounded, that is, when there is an unmeasured common factor that affects both Z and Y. [14]
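
Schematically, the weighting scheme just described can be written as follows (a sketch of the estimand only; the conditions under which it is valid are spelled out in [14]):

```latex
% z-specific effects, estimated in the experimental sample, are weighted
% by the interventional distribution of the post-treatment variable Z in
% the general population (starred quantities refer to that population).
E^{*}[Y \mid \mathrm{do}(x)] = \sum_{z} E[Y \mid \mathrm{do}(x), z]\; P^{*}(z \mid \mathrm{do}(x))
```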

The precise conditions ensuring the validity of this and other weighting schemes are formulated in Bareinboim and Pearl, 2016 [14] and Bareinboim et al., 2014. [12]

External, internal, and ecological validity

In many studies and research designs, there may be a trade-off between internal validity and external validity: [15] [16] [17] attempts to increase internal validity may limit the generalizability of the findings, and vice versa. This situation has led many researchers to call for "ecologically valid" experiments, meaning that experimental procedures should resemble "real-world" conditions. They criticize the lack of ecological validity in many laboratory-based studies, with their focus on artificially controlled and constricted environments. Some researchers think external validity and ecological validity are closely related, in the sense that causal inferences based on ecologically valid research designs often allow for higher degrees of generalizability than those obtained in an artificially produced lab environment. However, this again relates to the distinction between generalizing to some population (closely related to concerns about ecological validity) and generalizing across subpopulations that differ on some background factor. Some findings produced in ecologically valid research settings may hardly be generalizable, and some findings produced in highly controlled settings may claim near-universal external validity. Thus, external and ecological validity are independent: a study may possess external validity but not ecological validity, and vice versa.

Qualitative research

Within the qualitative research paradigm, external validity is replaced by the concept of transferability. Transferability is the ability of research results to transfer to situations with similar parameters, populations and characteristics. [18]

In experiments

It is common for researchers to claim that experiments are by their nature low in external validity. Some claim that many drawbacks can occur when following the experimental method: by virtue of gaining enough control over the situation to randomly assign people to conditions and rule out the effects of extraneous variables, the situation can become somewhat artificial and distant from real life.

There are two kinds of generalizability at issue:

  1. The extent to which we can generalize from the situation constructed by an experimenter to real-life situations (generalizability across situations), [2] and
  2. The extent to which we can generalize from the people who participated in the experiment to people in general (generalizability across people) [2]

However, both of these considerations pertain to Cook and Campbell's concept of generalizing to some target population, rather than to the arguably more central task of assessing the generalizability of an experiment's findings across subpopulations: situations that differ from the one studied, and people who differ in some meaningful way from the respondents studied. [6]

Critics of experiments suggest that external validity could be improved by the use of field settings (or, at a minimum, realistic laboratory settings) and by the use of true probability samples of respondents. However, if one's goal is to understand generalizability across subpopulations that differ in situational or personal background factors, these remedies are not as effective at increasing external validity as is commonly assumed. If background-factor × treatment interactions exist of which the researcher is unaware (as seems likely), these research practices can mask a substantial lack of external validity. Dipboye and Flanagan, writing about industrial and organizational psychology, note that the evidence is that findings from one field setting and from one lab setting are equally unlikely to generalize to a second field setting. [19] Thus, field studies are not by their nature high in external validity, and laboratory studies are not by their nature low in external validity. In both cases, it depends on whether the particular treatment effect studied would change with changes in background factors that are held constant in that study. If a study is "unrealistic" on the level of some background factor that does not interact with the treatments, the unrealism has no effect on external validity. External validity is threatened only when an experiment holds some background factor constant at an unrealistic level, and varying that background factor would have revealed a strong treatment × background-factor interaction. [5]

Generalizability across situations

Psychology experiments conducted at universities are often criticized for taking place in artificial situations that cannot be generalized to real life. [20] [21] To address this problem, social psychologists attempt to increase the generalizability of their results by making their studies as realistic as possible. As noted above, this is done in the hope of generalizing to some specific population. Realism per se, however, does not help one make statements about whether the results would change if the setting were somehow more realistic, or if study participants were placed in a different realistic setting. If only one setting is tested, it is not possible to make statements about generalizability across settings. [5] [7]

However, many authors conflate external validity and realism, and there is more than one way that an experiment can be realistic. One criterion is the similarity of the experimental situation to events that occur frequently in everyday life. By this criterion, many experiments are decidedly unreal: in many experiments, people are placed in situations they would rarely encounter in everyday life. The extent to which an experiment is similar to real-life situations is referred to as the experiment's mundane realism. [20]

It is more important to ensure that a study is high in psychological realism—how similar the psychological processes triggered in an experiment are to psychological processes that occur in everyday life. [22]

Psychological realism is heightened if people find themselves engrossed in a real event. To accomplish this, researchers sometimes tell participants a cover story: a false description of the study's purpose. If, however, the experimenters were to tell participants the true purpose of the experiment, for example that a staged emergency was about to occur, the procedure would be low in psychological realism. In everyday life, no one knows when emergencies are going to occur, and people do not have time to plan responses to them. Forewarned participants would therefore undergo psychological processes that differ widely from those triggered by a real emergency, reducing the psychological realism of the study. [2]

People do not always know why they do what they do, or even what they will do, until it happens. Therefore, describing an experimental situation to participants and then asking them to respond normally will produce responses that may not match the behavior of people who are actually in the same situation. We cannot depend on people's predictions about what they would do in a hypothetical situation; we can only find out what people will really do when we construct a situation that triggers the same psychological processes as occur in the real world.

Generalizability across people

Social psychologists study the way in which people, in general, are susceptible to social influence. Several experiments have documented an interesting, unexpected example of social influence, whereby the mere knowledge that others were present reduced the likelihood that people helped.

The only way to be certain that the results of an experiment represent the behaviour of a particular population is to ensure that participants are randomly selected from that population. Samples in experiments cannot be randomly selected in the way they are in surveys, because it is impractical and expensive to select random samples for social psychology experiments. It is difficult enough to convince a random sample of people to agree to answer a few questions over the telephone as part of a political poll, and such polls can cost thousands of dollars to conduct. Moreover, even if one somehow were able to recruit a truly random sample, there can be unobserved heterogeneity in the effects of the experimental treatments: a treatment can have a positive effect on some subgroups but a negative effect on others, so the effects shown in the treatment averages may not generalize to any subgroup. [5] [23]
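
A toy numerical illustration of this last point (all numbers hypothetical):

```python
# Unobserved heterogeneity: a treatment that helps one subgroup and harms
# another can show a misleading average effect.
subgroup_effects = {"A": +2.0, "B": -2.0}   # treatment effect per subgroup
subgroup_shares = {"A": 0.5, "B": 0.5}      # subgroup shares of the sample

# The sample-average effect mixes the two opposing subgroup effects.
average_effect = sum(subgroup_effects[g] * subgroup_shares[g]
                     for g in subgroup_effects)
print(f"average treatment effect: {average_effect:+.1f}")  # prints +0.0
# The average (0.0) describes neither subgroup, so it generalizes to no one.
```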

Many researchers address this problem by studying basic psychological processes that make people susceptible to social influence, assuming that these processes are so fundamental that they are universally shared. Some social psychological processes do vary across cultures, and in those cases diverse samples of people have to be studied. [24]

Replications

The ultimate test of an experiment's external validity is replication: conducting the study over again, generally with different subject populations or in different settings. Researchers often use different methods to see if they still get the same results.

When many studies of one problem are conducted, the results can vary: several studies might find an effect of the number of bystanders on helping behaviour, whereas a few do not. To make sense of this, researchers use a statistical technique called meta-analysis, which averages the results of two or more studies to see if the effect of an independent variable is reliable. A meta-analysis essentially tells us whether the findings across many studies are attributable to chance or to the independent variable. If an independent variable is found to have an effect in only one of 20 studies, the meta-analysis will indicate that that one study was an exception and that, on average, the independent variable does not influence the dependent variable. If an independent variable is having an effect in most of the studies, the meta-analysis is likely to tell us that, on average, it does influence the dependent variable.
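
As a rough illustration, the sketch below pools hypothetical study results using inverse-variance (fixed-effect) weighting, one common way of carrying out the averaging described above; both the numbers and the choice of weighting scheme are illustrative assumptions, not taken from the cited studies:

```python
import math

# Hypothetical effect sizes (e.g., standardized effects of bystander number
# on helping) and their standard errors; all numbers are illustrative.
effects = [-0.45, -0.30, -0.52, 0.05, -0.38]
std_errors = [0.10, 0.12, 0.15, 0.20, 0.09]

# Fixed-effect meta-analysis: weight each study by the inverse of its variance.
weights = [1.0 / se ** 2 for se in std_errors]
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))

# A pooled effect far from zero (relative to its standard error) suggests the
# independent variable, rather than chance, drives the findings across studies.
z = pooled / pooled_se
print(f"pooled effect = {pooled:.3f}, SE = {pooled_se:.3f}, z = {z:.2f}")
```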

There can be reliable phenomena that are not limited to the laboratory. For example, increasing the number of bystanders has been found to inhibit helping behaviour with many kinds of people, including children, university students, and future ministers; [24] in Israel; [25] in small towns and large cities in the U.S.; [26] in a variety of settings, such as psychology laboratories, city streets, and subway trains; [27] and with a variety of types of emergencies, such as seizures, potential fires, fights, and accidents, [28] as well as with less serious events, such as having a flat tire. [29] Many of these replications have been conducted in real-life settings where people could not possibly have known that an experiment was being conducted.

Basic dilemma of the social psychologist

When conducting experiments in psychology, some believe that there is always a trade-off between internal and external validity:

  1. having enough control over the situation to ensure that no extraneous variables are influencing the results and to randomly assign people to conditions, and
  2. ensuring that the results can be generalized to everyday life.

Some researchers believe that a good way to increase external validity is by conducting field experiments. In a field experiment, people's behavior is studied outside the laboratory, in its natural setting. A field experiment is identical in design to a laboratory experiment, except that it is conducted in a real-life setting. The participants in a field experiment are unaware that the events they experience are in fact an experiment. Some claim that the external validity of such an experiment is high because it is taking place in the real world, with real people who are more diverse than a typical university student sample. However, as real-world settings differ dramatically, findings in one real-world setting may or may not generalize to another real-world setting. [19]

Neither internal nor external validity can be fully achieved in a single experiment. Social psychologists opt first for internal validity, conducting laboratory experiments in which people are randomly assigned to different conditions and all extraneous variables are controlled. Other social psychologists prefer external validity to control, conducting most of their research in field studies, and many do both. Taken together, both types of studies meet the requirements of the perfect experiment. Through replication, researchers can study a given research question with maximal internal and external validity. [30]

Notes

  1. Mitchell, M. & Jolley, J. (2001). Research Design Explained (4th ed.). New York: Harcourt.
  2. Aronson, E., Wilson, T. D., Akert, R. M., & Fehr, B. (2007). Social Psychology (4th ed.). Toronto, ON: Pearson Education.
  3. Pearl, Judea; Bareinboim, Elias (2014). "External validity: From do-calculus to transportability across populations". Statistical Science. 29 (4): 579–595. arXiv:1503.01603. doi:10.1214/14-sts486.
  4. Trochim, William M. The Research Methods Knowledge Base (2nd ed.).
  5. Lynch, John (1982). "On the External Validity of Experiments in Consumer Research". Journal of Consumer Research. 9 (3): 225–239. doi:10.1086/208919. JSTOR 2488619.
  6. Cook, Thomas D.; Campbell, Donald T. (1979). Quasi-Experimentation: Design & Analysis Issues for Field Settings. Chicago: Rand McNally College Publishing Company. ISBN 978-0395307908.
  7. Lynch, John (1999). "Theory and External Validity". Journal of the Academy of Marketing Science. 27 (3): 367–376. doi:10.1177/0092070399273007.
  8. Pearl, Judea (1995). "Causal diagrams for empirical research". Biometrika. 82 (4): 669–710. doi:10.1093/biomet/82.4.669.
  9. Bareinboim, Elias; Pearl, Judea (2013). "A general algorithm for deciding transportability of experimental results". Journal of Causal Inference. 1 (1): 107–134. arXiv:1312.7485. doi:10.1515/jci-2012-0004.
  10. Marcellesi, Alexandre (December 2015). "External validity: Is there still a problem?". Philosophy of Science. 82 (5): 1308–1317. doi:10.1086/684084.
  11. Pearl, Judea (2015). "Generalizing experimental findings". Journal of Causal Inference. 3 (2): 259–266.
  12. Bareinboim, Elias; Tian, Jin; Pearl, Judea (2014). "Recovering from Selection Bias in Causal and Statistical Inference". In Brodley, Carla E.; Stone, Peter (eds.), Proceedings of the Twenty-Eighth AAAI Conference on Artificial Intelligence: 2410–2416.
  13. Pearl, Judea; Glymour, Madelyn; Jewell, Nicholas P. (2016). Causal Inference in Statistics: A Primer. New York: Wiley.
  14. Bareinboim, Elias; Pearl, Judea (2016). "Causal inference and the data-fusion problem". Proceedings of the National Academy of Sciences. 113 (27): 7345–7352. doi:10.1073/pnas.1510507113. PMID 27382148.
  15. Campbell, Donald T. (1957). "Factors relevant to the validity of experiments in social settings". Psychological Bulletin. 54 (4): 297–312. doi:10.1037/h0040950. PMID 13465924.
  16. Lin, Hause; Werner, Kaitlyn M.; Inzlicht, Michael (2021). "Promises and Perils of Experimentation: The Mutual-Internal-Validity Problem". Perspectives on Psychological Science. 16 (4): 854–863. doi:10.1177/1745691620974773. PMID 33593177.
  17. Schram, Arthur (2005). "Artificiality: The tension between internal and external validity in economic experiments". Journal of Economic Methodology. 12 (2): 225–237. doi:10.1080/13501780500086081.
  18. Lincoln, Y. S.; Guba, E. G. (1986). "But is it rigorous? Trustworthiness and authenticity in naturalistic evaluation". In Williams, D. D. (ed.), Naturalistic Evaluation. New Directions for Program Evaluation, Vol. 30. San Francisco: Jossey-Bass. pp. 73–84. ISBN 0-87589-728-2.
  19. Dipboye, Robert L.; Flanagan, Michael F. (1979). "Research Settings in Industrial and Organizational Psychology: Are Findings in the Field More Generalizable than the Laboratory?". American Psychologist. 34 (2): 141–150. doi:10.1037/0003-066x.34.2.141.
  20. Aronson, E., & Carlsmith, J. M. (1968). "Experimentation in social psychology". In G. Lindzey & E. Aronson (Eds.), The Handbook of Social Psychology (Vol. 2, pp. 1–79). Reading, MA: Addison-Wesley.
  21. Yarkoni, Tal (2020). "The generalizability crisis". Behavioral and Brain Sciences: 1–37. doi:10.1017/S0140525X20001685. PMID 33342451.
  22. Aronson, E., Wilson, T. D., & Brewer, M. (1998). "Experimental methods". In D. Gilbert, S. Fiske, & G. Lindzey (Eds.), The Handbook of Social Psychology (4th ed., Vol. 1, pp. 99–142). New York: Random House.
  23. Hutchinson, J. Wesley; Kamakura, Wagner A.; Lynch, John G. (2000). "Unobserved Heterogeneity as an Alternative Explanation for 'Reversal' Effects in Behavioral Research". Journal of Consumer Research. 27 (3): 324–344. doi:10.1086/317588.
  24. Darley, J. M.; Batson, C. D. (1973). "From Jerusalem to Jericho: A study of situational and dispositional variables in helping behaviour". Journal of Personality and Social Psychology. 27: 100–108. doi:10.1037/h0034449.
  25. Schwartz, S. H.; Gottlieb, A. (1976). "Bystander reactions to a violent theft: Crime in Jerusalem". Journal of Personality and Social Psychology. 34 (6): 1188–1199. doi:10.1037/0022-3514.34.6.1188. PMID 1003323.
  26. Latane, B.; Dabbs, J. M. (1975). "Sex, group size, and helping in three cities". Sociometry. 38 (2): 108–194. doi:10.2307/2786599. JSTOR 2786599.
  27. Harrison, J. A.; Wells, R. B. (1991). "Bystander effects on male helping behaviour: Social comparison and diffusion of responsibility". Representative Research in Social Psychology. 96: 187–192.
  28. Latane, B.; Darley, J. M. (1968). "Group inhibition of bystander intervention". Journal of Personality and Social Psychology. 10 (3): 215–221. doi:10.1037/h0026570. PMID 5704479.
  29. Hurley, D.; Allen, B. P. (1974). "The effect of the number of people present in a nonemergency situation". Journal of Social Psychology. 92: 27–29. doi:10.1080/00224545.1974.9923068.
  30. Latane, B., & Darley, J. M. (1970). The Unresponsive Bystander: Why Doesn't He Help? Englewood Cliffs, NJ: Prentice Hall.
