Single-subject design

In design of experiments, single-subject design or single-case research design is a research design most often used in applied fields of psychology, education, and human behaviour in which the subject serves as their own control, rather than being compared with another individual or group. Researchers use single-subject designs because they are sensitive to individual organism differences, whereas group designs are sensitive to averages across groups. The logic behind single-subject designs rests on three elements: prediction, verification, and replication. Baseline data predict behaviour by affirmation of the consequent; verification refers to demonstrating that baseline responding would have continued had no intervention been implemented; and replication occurs when a previously observed behaviour change is reproduced. [1] A research study using single-subject design can include a large number of subjects; because each subject serves as their own control, it is still a single-subject design. [2] These designs are used primarily to evaluate the effect of a variety of interventions in applied research. [3]

Design standards

Effect size

Although there are no standards on the specific statistics required for effect size calculation, it is best practice to include an effect size estimate. [4]

Reporting standards

When reporting findings obtained through single-subject designs, specific guidelines such as the Single-Case Reporting Guideline In BEhavioural Interventions (SCRIBE) are used for standardization and to ensure completeness and transparency. [5]

Types of single-subject designs

Reversal design

A reversal design involves repeated measurement of behaviour in a given setting during three consecutive phases (ABA): baseline, intervention, and return to baseline. Variations include extending the ABA design with repeated reversals (ABAB) and with multiple treatments (ABCABC). AB designs (baseline followed by intervention, with no return to baseline) are not considered experimental; functional control cannot be determined in AB designs because there is no replication. [1]
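The logic of an ABAB reversal can be sketched numerically: if responding shifts when the intervention is introduced, returns toward baseline levels when it is withdrawn, and shifts again on reintroduction, a functional relation is suggested. The data below are hypothetical:

```python
# Hypothetical ABAB series. If the intervention controls the behaviour,
# the per-phase means should rise in each B phase and recover toward the
# original baseline level in the reversal (A2) phase.
phases = {
    "A1 (baseline)":     [3, 2, 3, 4],
    "B1 (intervention)": [7, 8, 8, 9],
    "A2 (reversal)":     [4, 3, 3, 2],
    "B2 (intervention)": [8, 9, 8, 9],
}
for name, data in phases.items():
    print(f"{name}: mean = {sum(data) / len(data):.1f}")
```

In this toy series the means alternate (3.0, 8.0, 3.0, 8.5), the pattern a researcher would look for when inferring functional control.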

Alternating treatments design

Alternating treatments design (ATD) compares the effects of two or more independent variables on the dependent variable. Variations include a no-treatment control condition and a final best-treatment verification phase. [1]

Multiple baseline design

In a multiple baseline design, baseline measurement begins simultaneously on two or more behaviours, settings, or participants. The independent variable is then implemented for one behaviour, setting, or participant while baseline measurement continues for all others. Variations include the multiple probe design and the delayed multiple baseline design. [1]

Changing criterion design

Changing criterion designs are used to evaluate the effects of an independent variable on the gradual improvement of a behavior already in the participant's repertoire. [1]
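In a changing criterion design, the performance criterion is raised (or lowered) in steps, and experimental control is suggested when responding tracks each successive criterion. A small sketch with hypothetical subphase data:

```python
# Hypothetical changing-criterion data: the criterion is raised stepwise,
# and responding in each subphase is checked against the criterion then in
# force. Behaviour repeatedly matching each new criterion supports a
# functional relation.
subphases = [
    {"criterion": 5,  "sessions": [5, 6, 5, 7]},
    {"criterion": 8,  "sessions": [8, 9, 8, 8]},
    {"criterion": 11, "sessions": [11, 12, 11, 13]},
]
for i, sp in enumerate(subphases, 1):
    met = all(s >= sp["criterion"] for s in sp["sessions"])
    print(f"subphase {i}: criterion {sp['criterion']} met = {met}")
```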

Interpretation of data

To determine the effect of the independent variable on the dependent variable, the researcher graphs the data collected and visually inspects the differences between phases. If there is a clear distinction between baseline and intervention, and the data then return to the same trend and level during reversal, a functional relation between the variables is inferred. [6] Sometimes, visual inspection of the data demonstrates results that statistical tests fail to find. [7] [8] Features assessed during visual analysis include level, trend, variability, immediacy of effect, overlap, and the consistency of data patterns across similar phases. [9]
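Some of these features can be made concrete with simple per-phase summary statistics. The sketch below computes level (mean), trend (least-squares slope across sessions), and variability (standard deviation) for hypothetical phase data; actual visual analysis also weighs immediacy of effect, overlap, and consistency across phases:

```python
# Per-phase summaries often inspected visually: level (mean), trend
# (least-squares slope over session number), and variability (population
# standard deviation). Data are hypothetical session counts.
import statistics

def phase_summary(data):
    n = len(data)
    x_mean = (n - 1) / 2                     # mean of session indices 0..n-1
    y_mean = statistics.mean(data)
    slope = sum((x - x_mean) * (y - y_mean) for x, y in enumerate(data)) / \
            sum((x - x_mean) ** 2 for x in range(n))
    return {"level": y_mean, "trend": slope,
            "variability": statistics.pstdev(data)}

print(phase_summary([2, 3, 2, 4, 3]))   # baseline phase
print(phase_summary([5, 6, 7, 8, 8]))   # intervention phase
```

A clear difference in level between phases, with little overlap and stable trends, is the pattern typically taken to support a functional relation.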

Limitations

Research designs are traditionally preplanned, so that most of the details about to whom and when the intervention will be introduced are decided before the study begins. In single-subject designs, however, these decisions are often made as the data are collected. [10] In addition, there are no widely agreed-upon rules for altering phases, which can lead to conflicting ideas about how a single-subject experiment should be conducted.

The major criticisms of single-subject designs are the limited external validity of findings obtained from a single subject and the potential subjectivity of visual inspection.

History

Historically, single-subject designs have been closely tied to the experimental analysis of behavior and applied behavior analysis. [11]

References

  1. 1 2 3 4 5 Cooper, J. O., Heron, T. E., & Heward, W. L. (2007). Applied behavior analysis (2nd ed.). Columbus, OH: Merrill Prentice Hall.
  2. Cooper, J.O.; Heron, T.E.; Heward, W.L. (2007). Applied Behavior Analysis (2nd ed.). Prentice Hall. ISBN   978-0-13-142113-4.
  3. Kazdin p. 191
  4. Kratochwill, T. R., Hitchcock, J., Horner, R. H., Levin, J. R., Odom, S. L., Rindskopf, D. M & Shadish, W. R. (2010). Single-case designs technical documentation. Retrieved from What Works Clearinghouse website: https://ies.ed.gov/ncee/wwc/Docs/ReferenceResources/wwc_scd.pdf.
  5. Tate, R. L., Perdices, M., Rosenkoetter, U., McDonald, S., Togher, L., Shadish, W., . . . Vohra, S. (2016). The Single-Case Reporting Guideline In BEhavioural Interventions (SCRIBE) 2016: Explanation and elaboration. Archives of Scientific Psychology, 4(1), 10-31. doi : 10.1037/arc0000027
  6. Backman, C.L. & Harris, S.R. (1999). Case Studies, Single-Subject Research, and N of 1 Randomized Trials. Comparisons and Contrasts. American Journal of Physical Medicine & Rehabilitation, 78(2), 170–6.
  7. Bobrovitz, C.D. & Ottenbacher, K.J. (1998). Comparison of Visual Inspection and Statistical Analysis of Single-Subject Data in Rehabilitation Research. Journal of Engineering and Applied Science, 77(2), 94–102.
  8. Nishith, P.; Hearst, D.E.; Mueser, K.T. & Foa, E. (1995). PTSD and Major Depression: Methodological and Treatment Considerations in a Single-Case Design. Behavior Therapy, 26(2), 297–9
  9. Horner, Robert, Carr, Edward, Halle, Jim, Mcgee, Gail, SL, Odom & Wolery, Mark. (2005). The Use of Single-Subject Research to Identify Evidence-Based Practice in Special Education. Exceptional Children. 71. 165-179. 10.1177/001440290507100203.
  10. Kazdin, p. 284
  11. Kazdin, p. 291
