Stepped-wedge trial

In medicine, a stepped-wedge trial (or SWT) is a type of randomised controlled trial (RCT). An RCT is a scientific experiment that is designed to reduce bias when testing a new medical treatment, a social intervention, or another testable hypothesis.

In a traditional RCT, the researcher randomly divides the experiment participants into two groups at the same time: a treatment group, which receives the intervention, and a control group, which does not.

In an SWT, a logistical constraint typically prevents treating some participants at the same time as others; instead, all or most participants receive the treatment in waves or "steps".

For instance, suppose a researcher wants to measure whether teaching college students how to make several meals increases their propensity to cook at home instead of eating out. Because the cooking classes cannot be offered to every student at once, groups of students begin the classes at successive steps, until all students have been taught.

The term "stepped wedge" was coined by The Gambia Hepatitis Intervention Study due to the stepped-wedge shape that is apparent from a schematic illustration of the design. [1] [2] The crossover is in one direction, typically from control to intervention, with the intervention not removed once implemented. The stepped-wedge design can be used for individually randomized trials, [3] [4] i.e., trials where each individual is treated sequentially, but is more commonly used as a cluster randomized trial (CRT). [5]

Experiment design

The stepped-wedge design involves the collection of observations during a baseline period in which no clusters are exposed to the intervention. Following this, at regular intervals, or steps, a cluster (or group of clusters) is randomized to receive the intervention [5] [6] and all participants are once again measured. [7] This process continues until all clusters have received the intervention. Finally, one more measurement is made after all clusters have received the intervention. [8]
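
As a minimal sketch of such a rollout (illustrative only: the function name, arguments, and the one-group-of-clusters-per-step schedule are assumptions, not part of any standard SWT software), the following generates a cluster-by-period treatment indicator matrix:

```python
import numpy as np

def stepped_wedge_schedule(n_clusters: int, n_steps: int, seed: int = 0) -> np.ndarray:
    """Return an (n_clusters x n_periods) 0/1 matrix whose entry [i, t] is 1 once
    cluster i has crossed over to the intervention; periods = 1 baseline + n_steps."""
    rng = np.random.default_rng(seed)
    # Randomly assign each cluster to one of the steps (1 ... n_steps).
    steps = np.repeat(np.arange(1, n_steps + 1),
                      int(np.ceil(n_clusters / n_steps)))[:n_clusters]
    crossover = rng.permutation(steps)
    periods = np.arange(n_steps + 1)          # period 0 is the baseline period
    # A cluster is exposed from its crossover period onward and never switches back.
    return (periods[None, :] >= crossover[:, None]).astype(int)

# Example: 6 clusters randomised to 3 steps, two clusters crossing over per step.
print(stepped_wedge_schedule(n_clusters=6, n_steps=3))
```

Each row of the printed matrix corresponds to one cluster; reading left to right, the block of 1s grows step by step, which produces the "stepped wedge" shape when the rows are sorted by crossover time.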

Appropriateness

Hargreaves and colleagues offer a series of five questions that researchers should answer to decide whether SWT is indeed the optimal design, and how to proceed in every step of the study. [9] Specifically, researchers should be able to identify:

The reasons SWT is the preferred design
If measuring a treatment effect is the primary goal of the research, an SWT may not be the optimal design. SWTs are more appropriate when the focus is on how effective the treatment is when it is actually implemented, rather than simply on whether it has an effect. Overall, if the study is pragmatic (i.e. it seeks primarily to implement a certain policy), logistical and other practical concerns are considered the best reasons to turn to a stepped-wedge design. Also, if the treatment is expected to be beneficial and it would be unethical to deny it to some participants, an SWT allows every participant to receive the treatment while still permitting comparison with a control condition: by the end of the study, all participants will have had the opportunity to receive the treatment. Note that ethical issues may still be raised by delaying access to the treatment for some participants.[citation needed]
Which SWT design is most suitable
SWTs can feature three main designs: a closed cohort, an open cohort, and continuous recruitment with short exposure. [10] In the closed-cohort design, all subjects participate in the experiment from beginning to end, and all outcomes are measured repeatedly at fixed time points, which may or may not coincide with the steps.[citation needed]
In the open-cohort design, outcomes are measured as in the closed cohort, but new subjects can enter the study and some participants who joined early can leave before it is completed. Only some of the subjects are exposed from the start, and more are gradually exposed in subsequent steps; thus, the time of exposure varies across subjects.
In the continuous-recruitment-with-short-exposure design, few or no subjects participate at the beginning of the experiment; instead, subjects become eligible over time and receive a short exposure to the intervention as they are recruited. Each subject is assigned to either the treatment or the control condition, so the risk of carry-over effects, which can be a challenge for closed- and open-cohort designs, is minimal.[citation needed]
Which analysis strategy is appropriate
Linear mixed models (LMM), generalized linear mixed models (GLMM), and generalized estimating equations (GEE) are the principal approaches recommended for analyzing the results. LMM offers higher power than GLMM and GEE, but it can be inefficient if cluster sizes vary or if the response is not continuous and normally distributed; if any of those assumptions are violated, GLMM or GEE is preferred (an illustrative fitting sketch is given after this list of questions).[citation needed]
How big the sample should be
Methods for power analysis and sample size calculation are available. Generally, SWTs require smaller sample sizes to detect effects, since they leverage both between- and within-cluster comparisons. [11] [12]
Best practices for reporting the design and results of the trial
Reporting the design, sample profile, and results can be challenging, since no Consolidated Standards of Reporting Trials (CONSORT) guidelines have been designated for SWTs. However, some studies have provided both formalizations and flow charts that help with reporting results and with sustaining a balanced sample across the waves. [13]
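
As an illustration of the analysis strategies above, the following is a minimal sketch (not any trial's actual analysis) that simulates a small long-format SWT dataset and fits both a linear mixed model and a GEE with statsmodels; the column names (y, cluster, period, treated) and all numeric values are assumptions chosen for the example.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Build a small illustrative long-format dataset: one row per participant-period.
rng = np.random.default_rng(1)
clusters, periods, n = 6, 4, 15                  # illustrative dimensions
step = rng.permutation(np.repeat([1, 2, 3], 2))  # period at which each cluster crosses over
rows = []
for i in range(clusters):
    cluster_effect = rng.normal(0, 1)            # random cluster intercept
    for t in range(periods):
        treated = int(t >= step[i])
        y = 10 + 0.3 * t + 1.5 * treated + cluster_effect + rng.normal(0, 2, n)
        rows += [{"y": v, "cluster": i, "period": t, "treated": treated} for v in y]
df = pd.DataFrame(rows)

# Linear mixed model: random intercept per cluster, fixed period effects;
# the 'treated' coefficient estimates the intervention effect.
lmm = smf.mixedlm("y ~ C(period) + treated", data=df, groups=df["cluster"]).fit()
print(lmm.summary())

# GEE alternative with an exchangeable within-cluster correlation structure
# (Gaussian family here; a Binomial family would be used for a binary outcome).
gee = smf.gee("y ~ C(period) + treated", groups="cluster", data=df,
              family=sm.families.Gaussian(),
              cov_struct=sm.cov_struct.Exchangeable()).fit()
print(gee.summary())
```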

Model

While there are several other potential methods for modeling outcomes in an SWT, [14] the work of Hussey and Hughes [7] "first described methods to determine statistical power available when using a stepped wedge design." [14] What follows is their design.

Suppose there are $N$ samples divided into $I$ clusters. At each time point $t = 1, \ldots, T$, preferably equally spaced in actual time, some number of clusters are treated. Let $X_{it}$ be $1$ if cluster $i$ has been treated at time $t$ and $0$ otherwise. In particular, note that if $X_{it} = 1$ then $X_{i(t+1)} = 1$: once a cluster receives the intervention, it keeps it.

For each participant $j$ in cluster $i$, measure the outcome to be studied, $Y_{ijt}$, at time $t$. Note that the notation allows for clustering by including $i$ in the subscripts of $Y_{ijt}$, $X_{it}$, $\alpha_i$, and $e_{ijt}$. We model these outcomes as

$Y_{ijt} = \mu + \alpha_i + \beta_t + X_{it}\theta + e_{ijt}$

where:

$\mu$ is the overall mean outcome,
$\alpha_i \sim N(0, \tau^2)$ is a random effect for cluster $i$,
$\beta_t$ is a fixed effect for time $t$,
$\theta$ is the treatment effect, and
$e_{ijt} \sim N(0, \sigma^2)$ is the individual-level error term.

This model can be viewed as a hierarchical linear model in which, at the lowest level, $Y_{ijt} \sim N(\mu_{it}, \sigma^2)$, where $\mu_{it}$ is the mean of a given cluster at a given time, and, at the cluster level, each cluster mean is $\mu_{it} = \mu + \beta_t + X_{it}\theta + \alpha_i$ with $\alpha_i \sim N(0, \tau^2)$.
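
To make the notation concrete, here is a minimal simulation of the model above with illustrative parameter values (one cluster crossing over per step after a single baseline period); it is a sketch, not a recommended analysis pipeline.

```python
import numpy as np

rng = np.random.default_rng(7)

I, n = 4, 25                          # clusters and participants per cluster-period
T = I + 1                             # one baseline period plus one period per step
mu, theta = 10.0, 1.5                 # overall mean and treatment effect
tau, sigma = 1.0, 2.0                 # SD of cluster effects and of residual error
beta = 0.2 * np.arange(T)             # fixed (secular) time effects beta_t

crossover = rng.permutation(np.arange(1, I + 1))               # step at which each cluster switches
X = (np.arange(T)[None, :] >= crossover[:, None]).astype(int)  # X_it; never reverts to 0
alpha = rng.normal(0.0, tau, size=I)                           # random cluster effects alpha_i

# Cluster-period means mu_it = mu + alpha_i + beta_t + X_it * theta,
# with individual outcomes Y_ijt = mu_it + e_ijt.
mu_it = mu + alpha[:, None] + beta[None, :] + theta * X
Y = mu_it[:, :, None] + rng.normal(0.0, sigma, size=(I, T, n))

print(np.round(Y.mean(axis=2), 2))    # observed cluster-period means display the wedge pattern
```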

Estimate of variance

The design effect (estimate of unit variance) of a stepped wedge design is given by the formula: [11]

$\mathrm{DE}_{sw} = \dfrac{1+\rho(ktn+bn-1)}{1+\rho\left(\tfrac{1}{2}ktn+bn-1\right)} \cdot \dfrac{3(1-\rho)}{2t\left(k-\tfrac{1}{k}\right)}$

where:

$\rho$ is the intracluster correlation coefficient (ICC),
$k$ is the number of steps,
$b$ is the number of baseline measurement periods,
$t$ is the number of measurement periods after each step, and
$n$ is the number of participants measured per cluster in each period.

To calculate the total sample size for the stepped wedge design, the design effect is applied to the unadjusted sample size: [11]

$n_{SW} = \mathrm{DE}_{sw} \cdot n_u$

where:

$n_{SW}$ is the total sample size required for the SWT, and
$n_u$ is the unadjusted sample size, i.e. the sample size calculated for an individually randomised trial with the same significance level, power, and effect size.

Note that increasing k, t, or b decreases the required sample size for an SWT.

Further, the required number of clusters c is given by: [11]

$c = \dfrac{n_{SW}}{(b + kt)\,n}$

To calculate how many clusters c_s need to switch from the control to the treatment condition at each step, the following formula is available: [11]

$c_s = \dfrac{c}{k}$

If c and c_s are not integers, they need to be rounded up to the next integer, and the clusters should be distributed as evenly as possible among the k steps.
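
Putting the formulas above together, the following sketch computes the design effect, the SWT sample size, the number of clusters, and the clusters switching per step for illustrative inputs; the numbers (including the unadjusted sample size n_u, assumed to come from a standard two-arm power calculation) are made up for the example.

```python
import math

# Illustrative inputs (not from any real trial)
rho = 0.05    # intracluster correlation coefficient (ICC)
k = 4         # number of steps
b = 1         # number of baseline measurement periods
t = 1         # number of measurement periods after each step
n = 20        # participants measured per cluster in each period
n_u = 300     # unadjusted (individually randomised) total sample size

# Design effect of the stepped wedge design (formula above, Woertman et al. 2013)
de_sw = ((1 + rho * (k * t * n + b * n - 1))
         / (1 + rho * (0.5 * k * t * n + b * n - 1))
         * (3 * (1 - rho)) / (2 * t * (k - 1 / k)))

n_sw = de_sw * n_u              # total sample size for the SWT
c = n_sw / ((b + k * t) * n)    # required number of clusters
c_s = c / k                     # clusters switching to treatment at each step

print(f"design effect   = {de_sw:.3f}")
print(f"SWT sample size = {math.ceil(n_sw)}")
print(f"clusters = {math.ceil(c)}, switching per step = {math.ceil(c_s)}")
```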

Advantages

The stepped-wedge design offers several comparative advantages over traditional RCTs.

Disadvantages

SWTs may also suffer from certain drawbacks.

Ongoing work

The number of studies using the design has been increasing. In 2015, a thematic series was published in the journal Trials. [19] In 2016, the first international conference dedicated to the topic was held at the University of York. [20] [21]

References

  1. Wang, Mei; Jin, Yanling; Hu, Zheng Jing; Thabane, Alex; Dennis, Brittany; Gajic-Veljanoski, Olga; Paul, James; Thabane, Lehana (1 December 2017). "The reporting quality of abstracts of stepped wedge randomized trials is suboptimal: A systematic survey of the literature". Contemporary Clinical Trials Communications. 8: 1–10. doi:10.1016/j.conctc.2017.08.009. ISSN   2451-8654. PMC   5898470 . PMID   29696191.
  2. The Gambia Hepatitis Study Group (November 1987). "The Gambia Hepatitis Intervention Study". Cancer Research. 47 (21): 5782–7. PMID   2822233.
  3. Ratanawongsa N, Handley MA, Quan J, Sarkar U, Pfeifer K, Soria C, Schillinger D (January 2012). "Quasi-experimental trial of diabetes Self-Management Automated and Real-Time Telephonic Support (SMARTSteps) in a Medicaid managed care plan: study protocol". BMC Health Services Research. 12: 22. doi: 10.1186/1472-6963-12-22 . PMC   3276419 . PMID   22280514.
  4. Løhaugen GC, Beneventi H, Andersen GL, Sundberg C, Østgård HF, Bakkan E, Walther G, Vik T, Skranes J (July 2014). "Do children with cerebral palsy benefit from computerized working memory training? Study protocol for a randomized controlled trial". Trials. 15: 269. doi: 10.1186/1745-6215-15-269 . PMC   4226979 . PMID   24998242.
  5. Brown CA, Lilford RJ (November 2006). "The stepped wedge trial design: a systematic review". BMC Medical Research Methodology. 6: 54. doi:10.1186/1471-2288-6-54. PMC 1636652. PMID 17092344.
  6. Mdege ND, Man MS, Taylor Nee Brown CA, Torgerson DJ (September 2011). "Systematic review of stepped wedge cluster randomized trials shows that design is particularly used to evaluate interventions during routine implementation". Journal of Clinical Epidemiology. 64 (9): 936–48. doi:10.1016/j.jclinepi.2010.12.003. PMID 21411284.
  7. Hussey MA, Hughes JP (February 2007). "Design and analysis of stepped wedge cluster randomized trials". Contemporary Clinical Trials. 28 (2): 182–91. doi:10.1016/j.cct.2006.05.007. PMID 16829207.
  8. Mulfinger N, Sander A, Stuber F, Brinster R, Junne F, Limprecht R, et al. (December 2019). "Cluster-randomised trial evaluating a complex intervention to improve mental health and well-being of employees working in hospital - a protocol for the SEEGEN trial". BMC Public Health. 19 (1): 1694. doi: 10.1186/s12889-019-7909-4 . PMC   6918673 . PMID   31847898.
  9. Hargreaves JR, Copas AJ, Beard E, Osrin D, Lewis JJ, Davey C, Thompson JA, Baio G, Fielding KL, Prost A (August 2015). "Five questions to consider before conducting a stepped wedge trial". Trials. 16 (1): 350. doi: 10.1186/s13063-015-0841-8 . PMC   4538743 . PMID   26279013.
  10. Copas AJ, Lewis JJ, Thompson JA, Davey C, Baio G, Hargreaves JR (August 2015). "Designing a stepped wedge trial: three main designs, carry-over effects and randomisation approaches". Trials. 16 (1): 352. doi: 10.1186/s13063-015-0842-7 . PMC   4538756 . PMID   26279154.
  11. Woertman W, de Hoop E, Moerbeek M, Zuidema SU, Gerritsen DL, Teerenstra S (July 2013). "Stepped wedge designs could reduce the required sample size in cluster randomized trials". Journal of Clinical Epidemiology. 66 (7): 752–8. doi:10.1016/j.jclinepi.2013.01.009. hdl:2066/117688. PMID 23523551.
  12. Baio G, Copas A, Ambler G, Hargreaves J, Beard E, Omar RZ (August 2015). "Sample size calculation for a stepped wedge trial". Trials. 16 (1): 354. doi: 10.1186/s13063-015-0840-9 . PMC   4538764 . PMID   26282553.
  13. Gruber JS, Reygadas F, Arnold BF, Ray I, Nelson K, Colford JM (August 2013). "A stepped wedge, cluster-randomized trial of a household UV-disinfection and safe storage drinking water intervention in rural Baja California Sur, Mexico". The American Journal of Tropical Medicine and Hygiene. 89 (2): 238–45. doi:10.4269/ajtmh.13-0017. PMC   3741243 . PMID   23732255.
  14. Hemming K, Haines TP, Chilton PJ, Girling AJ, Lilford RJ (February 2015). "The stepped wedge cluster randomised trial: rationale, design, analysis, and reporting". BMJ. 350: h391. doi:10.1136/bmj.h391. PMID 25662947.
  15. Keriel-Gascou M, Buchet-Poyau K, Rabilloud M, Duclos A, Colin C (July 2014). "A stepped wedge cluster randomized trial is preferable for assessing complex health interventions". Journal of Clinical Epidemiology. 67 (7): 831–3. doi: 10.1016/j.jclinepi.2014.02.016 . PMID   24774471.
  16. McKenzie D (November 2012). "Beyond baseline and follow-up: The case for more T in experiments" (PDF). Journal of Development Economics. 99 (2): 210–221. doi:10.1016/j.jdeveco.2012.01.002. hdl:10986/3403. S2CID 15923427.
  17. Van den Heuvel ER, Zwanenburg RJ, Van Ravenswaaij-Arts CM (April 2017). "A stepped wedge design for testing an effect of intranasal insulin on cognitive development of children with Phelan-McDermid syndrome: A comparison of different designs". Statistical Methods in Medical Research. 26 (2): 766–775. doi:10.1177/0962280214558864. PMID   25411323. S2CID   4703466.
  18. Hemming K, Lilford R, Girling AJ (January 2015). "Stepped-wedge cluster randomised controlled trials: a generic framework including parallel and multiple-level designs". Statistics in Medicine. 34 (2): 181–96. doi:10.1002/sim.6325. PMC   4286109 . PMID   25346484.
  19. Torgerson D (2015). "Stepped Wedge Randomized Controlled Trials". Trials. 16: 350. Retrieved 17 February 2017.
  20. "First International Conference on Stepped Wedge Trial Design". University of York.
  21. Kanaan M, et al. (July 2016). "Proceedings of the First International Conference on Stepped Wedge Trial Design : York, UK, 10 March 2016". Trials. 17 (Suppl 1): 311. doi: 10.1186/s13063-016-1436-8 . PMC   4959349 . PMID   27454562.