Clinical trials are medical research studies conducted on human subjects. [1] The human subjects are assigned to one or more interventions, and the investigators evaluate the effects of those interventions. [1] [2] The progress and results of clinical trials are analyzed statistically. [3] [4]
Randomized clinical trials analyzed by the intention-to-treat (ITT) approach provide fair comparisons among the treatment groups because this approach avoids the bias associated with non-random loss of participants. [5] [6] The basic ITT principle is that participants in the trials should be analysed in the groups to which they were randomized, regardless of whether they received or adhered to the allocated intervention. [5] However, medical investigators often find ITT analysis difficult to apply in practice because of issues such as missing data and imperfect adherence to the protocol. [6]
Alternatively, the analysis can be restricted to the participants who fulfill the protocol in terms of eligibility, adherence to the intervention, and outcome assessment. This analysis is known as an "on-treatment" or "per-protocol" analysis. A per-protocol analysis represents a "best-case scenario" for revealing the effect of the drug being studied. However, by restricting the analysis to a selected patient population, it may not show all effects of the new drug. Further, adherence to treatment may be affected by other factors that influence the outcome. Accordingly, per-protocol effects are at risk of bias, whereas the intent-to-treat estimate is not. [7]
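The contrast between the two analyses can be illustrated with a small sketch. The participant records, field names, and response measure below are hypothetical, chosen only to show how the two analysis populations differ:

```python
def itt_and_per_protocol(participants):
    """Compare intention-to-treat and per-protocol response rates.

    Each participant is a dict with (illustrative, not standard, keys):
      'assigned'  - arm the participant was randomized to ('drug' or 'placebo')
      'adhered'   - whether the participant followed the protocol
      'responded' - whether the outcome was favourable
    """
    def rate(group):
        return sum(p['responded'] for p in group) / len(group)

    results = {}
    for arm in ('drug', 'placebo'):
        assigned = [p for p in participants if p['assigned'] == arm]
        adherent = [p for p in assigned if p['adhered']]
        results[arm] = {
            'itt': rate(assigned),           # everyone, as randomized
            'per_protocol': rate(adherent),  # only protocol completers
        }
    return results
```

Because the per-protocol rate is computed on a self-selected subset of each arm, it can differ from the ITT rate whenever non-adherence is related to the outcome.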
One method of handling missing data is simply to impute, or fill in, values based on existing data. A standard method to do this is the Last-Observation-Carried-Forward (LOCF) method.
The LOCF method allows the data set to be analysed as if it were complete. However, recent research shows that this method gives a biased estimate of the treatment effect and underestimates the variability of the estimated result. [8] [9] As an example, assume that there are 8 weekly assessments after the baseline observation. If a patient drops out of the study after the third week, then that week's value is "carried forward" and assumed to be his or her score for the 5 missing data points. The assumption is that patients improve gradually from the start of the study until the end, so that carrying forward an intermediate value is a conservative estimate of how well the person would have done had he or she remained in the study. The main advantage of the LOCF approach is that it keeps every randomized participant in the analysis rather than discarding incomplete cases.
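The carry-forward step itself is mechanically simple; a minimal sketch, with the weekly-score layout assumed for illustration:

```python
def locf(scores, n_assessments):
    """Fill a patient's missing assessments by carrying the last
    observed value forward (LOCF single imputation).

    scores: list of observed values, possibly shorter than
            n_assessments because the patient dropped out.
    """
    if not scores:
        raise ValueError("LOCF needs at least one observed value")
    filled = list(scores)
    while len(filled) < n_assessments:
        filled.append(filled[-1])  # repeat the last observation
    return filled
```

For the example above, a patient observed for 3 of 8 weeks with scores `[42, 40, 37]` would be analysed as `[42, 40, 37, 37, 37, 37, 37, 37]`, which is exactly why the variability of the estimate is understated: the imputed values carry no uncertainty.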
However, the National Academy of Sciences, in an advisory report to the Food and Drug Administration on missing data in clinical trials, recommended against the uncritical use of methods like LOCF, stating that "Single imputation methods like last observation carried forward and baseline observation carried forward should not be used as the primary approach to the treatment of missing data unless the assumptions that underlie them are scientifically justified." [10]
The National Academy of Sciences advisory panel instead recommended methods that provide valid type I error rates under explicitly stated assumptions about the missing data, along with multiple imputation methods based on all the data available in the model. It recommended more widespread use of bootstrap and generalized estimating equation (GEE) methods whenever the assumptions underlying them, such as missing at random for GEE methods, can be justified. It advised collecting auxiliary data believed to be associated with dropouts to support more robust and reliable models, recording the reason for each drop-out, and, if possible, following up with drop-outs to obtain efficacy outcome data. Finally, it recommended sensitivity analyses as part of clinical trial reporting to assess how sensitive the results are to the assumptions about the missing data mechanism. [10]
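As an illustration of the bootstrap idea mentioned in the recommendations, the sketch below resamples two hypothetical outcome lists to obtain a percentile confidence interval for the difference in group means. The data, interval level, and function name are made up; a real trial analysis would use dedicated statistical software:

```python
import random

def bootstrap_diff_ci(treated, control, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for mean(treated) - mean(control)."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    diffs = []
    for _ in range(n_boot):
        # resample each group with replacement, same size as the original
        t = [rng.choice(treated) for _ in treated]
        c = [rng.choice(control) for _ in control]
        diffs.append(sum(t) / len(t) - sum(c) / len(c))
    diffs.sort()
    lo = diffs[int(n_boot * alpha / 2)]
    hi = diffs[int(n_boot * (1 - alpha / 2)) - 1]
    return lo, hi
```

The same resampling loop can be wrapped around an imputation step, which is one way such methods are combined with sensitivity analyses.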
While the methods recommended by the National Academy of Sciences report are more recently developed, more robust, and work under a wider variety of conditions than single-imputation methods like LOCF, no known method for handling missing data is valid under all conditions. As the 1998 International Conference on Harmonization E9 Guidance on Statistical Principles for Clinical Trials noted, "Unfortunately, no universally applicable methods of handling missing values can be recommended." [11] Expert statistical and medical judgment must be used to select, from the available imperfect techniques, the method most appropriate to the particular trial's goals, endpoints, statistical methods, and context.
Meta-analysis is the statistical combination of the results of multiple studies addressing a similar research question. An important part of this method involves computing a combined effect size across all of the studies. As such, this statistical approach involves extracting effect sizes and variance measures from various studies. Meta-analyses are integral in supporting research grant proposals, shaping treatment guidelines, and influencing health policies. They are also pivotal in summarizing existing research to guide future studies, thereby cementing their role as a fundamental methodology in metascience.
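A minimal sketch of the inverse-variance (fixed-effect) combination the paragraph describes: each study's effect size is weighted by the inverse of its variance, so more precise studies count for more. The effect sizes and variances in the test are hypothetical:

```python
def fixed_effect_meta(effects, variances):
    """Inverse-variance weighted combined effect size and its variance.

    effects:   per-study effect size estimates
    variances: per-study sampling variances of those estimates
    """
    weights = [1.0 / v for v in variances]
    combined = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    combined_var = 1.0 / sum(weights)  # variance of the combined estimate
    return combined, combined_var
```

Random-effects models extend this by adding a between-study variance component to each weight; the weighting logic is otherwise the same.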
A randomized controlled trial is a form of scientific experiment used to control factors not under direct experimental control. Examples of RCTs are clinical trials that compare the effects of drugs, surgical techniques, medical devices, diagnostic procedures, diets or other medical treatments.
Clinical trials are prospective biomedical or behavioral research studies on human participants designed to answer specific questions about biomedical or behavioral interventions, including new treatments and known interventions that warrant further study and comparison. Clinical trials generate data on dosage, safety and efficacy. They are conducted only after they have received health authority/ethics committee approval in the country where approval of the therapy is sought. These authorities are responsible for vetting the risk/benefit ratio of the trial—their approval does not mean the therapy is 'safe' or effective, only that the trial may be conducted.
In a blind or blinded experiment, information which may influence the participants of the experiment is withheld until after the experiment is complete. Good blinding can reduce or eliminate experimental biases that arise from participants' expectations, observer's effect on the participants, observer bias, confirmation bias, and other sources. A blind can be imposed on any participant of an experiment, including subjects, researchers, technicians, data analysts, and evaluators. In some cases, while blinding would be useful, it is impossible or unethical. For example, it is not possible to blind a patient to their treatment in a physical therapy intervention. A good clinical protocol ensures that blinding is as effective as possible within ethical and practical constraints.
Selection bias is the bias introduced by the selection of individuals, groups, or data for analysis in such a way that proper randomization is not achieved, thereby failing to ensure that the sample obtained is representative of the population intended to be analyzed. It is sometimes referred to as the selection effect. The phrase "selection bias" most often refers to the distortion of a statistical analysis, resulting from the method of collecting samples. If the selection bias is not taken into account, then some conclusions of the study may be false.
In published academic research, publication bias occurs when the outcome of an experiment or research study biases the decision to publish or otherwise distribute it. Publishing only results that show a significant finding disturbs the balance of findings in favor of positive results. The study of publication bias is an important topic in metascience.
Field experiments are experiments carried out outside of laboratory settings.
In statistics, imputation is the process of replacing missing data with substituted values. When substituting for a data point, it is known as "unit imputation"; when substituting for a component of a data point, it is known as "item imputation". There are three main problems that missing data causes: missing data can introduce a substantial amount of bias, make the handling and analysis of the data more arduous, and create reductions in efficiency. Because missing data can create problems for analyzing data, imputation is seen as a way to avoid pitfalls involved with listwise deletion of cases that have missing values. That is to say, when one or more values are missing for a case, most statistical packages default to discarding any case that has a missing value, which may introduce bias or affect the representativeness of the results. Imputation preserves all cases by replacing missing data with an estimated value based on other available information. Once all missing values have been imputed, the data set can then be analysed using standard techniques for complete data. There have been many theories embraced by scientists to account for missing data but the majority of them introduce bias. A few of the well known attempts to deal with missing data include: hot deck and cold deck imputation; listwise and pairwise deletion; mean imputation; non-negative matrix factorization; regression imputation; last observation carried forward; stochastic imputation; and multiple imputation.
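Two of the simplest techniques named above, mean imputation and listwise deletion, can be sketched in a few lines (missing values are represented as `None` here; the representation is an assumption of the sketch, not a standard):

```python
def mean_impute(values):
    """Replace missing entries (None) with the mean of the observed values."""
    observed = [v for v in values if v is not None]
    if not observed:
        raise ValueError("cannot impute: no observed values")
    m = sum(observed) / len(observed)
    return [m if v is None else v for v in values]

def listwise_delete(rows):
    """Drop any case (row) that contains at least one missing value."""
    return [row for row in rows if None not in row]
```

Mean imputation preserves every case but shrinks the variable's variance (every imputed value sits at the mean), while listwise deletion keeps only complete cases and so can both lose power and bias the sample, which is why the text treats both as imperfect defaults.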
The number needed to treat (NNT) or number needed to treat for an additional beneficial outcome (NNTB) is an epidemiological measure used in communicating the effectiveness of a health-care intervention, typically a treatment with medication. The NNT is the average number of patients who need to be treated to prevent one additional bad outcome. It is defined as the inverse of the absolute risk reduction, and computed as NNT = 1/(I_u − I_e), where I_u is the incidence in the control (unexposed) group, and I_e is the incidence in the treated (exposed) group. This calculation implicitly assumes monotonicity, that is, no individual can be harmed by treatment. The modern approach, based on counterfactual conditionals, relaxes this assumption and yields bounds on NNT.
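Since the NNT is just the inverse of the absolute risk reduction, the computation is a one-liner; the incidence values in the example are hypothetical:

```python
def number_needed_to_treat(incidence_control, incidence_treated):
    """NNT = 1 / absolute risk reduction = 1 / (I_u - I_e).

    Incidences are proportions of patients experiencing the bad outcome.
    Assumes the treatment is beneficial (I_u > I_e); the monotonicity
    caveat from the text applies.
    """
    arr = incidence_control - incidence_treated  # absolute risk reduction
    if arr <= 0:
        raise ValueError("treatment shows no absolute risk reduction")
    return 1.0 / arr
```

For example, if 20% of untreated patients and 15% of treated patients experience the bad outcome, the absolute risk reduction is 0.05, so on average 20 patients must be treated to prevent one additional bad outcome.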
A systematic review is a scholarly synthesis of the evidence on a clearly presented topic using critical methods to identify, define and assess research on the topic. A systematic review extracts and interprets data from published studies on the topic, then analyzes, describes, critically appraises and summarizes interpretations into a refined evidence-based conclusion. For example, a systematic review of randomized controlled trials is a way of summarizing and implementing evidence-based medicine.
In medicine an intention-to-treat (ITT) analysis of the results of a randomized controlled trial is based on the initial treatment assignment and not on the treatment eventually received. ITT analysis is intended to avoid various misleading artifacts that can arise in intervention research such as non-random attrition of participants from the study or crossover. ITT is also simpler than other forms of study design and analysis, because it does not require observation of compliance status for units assigned to different treatments or incorporation of compliance into the analysis. Although ITT analysis is widely employed in published clinical trials, it can be incorrectly described and there are some issues with its application. Furthermore, there is no consensus on how to carry out an ITT analysis in the presence of missing outcome data.
In fields such as epidemiology, social sciences, psychology and statistics, an observational study draws inferences from a sample to a population where the independent variable is not under the control of the researcher because of ethical concerns or logistical constraints. One common observational study is about the possible effect of a treatment on subjects, where the assignment of subjects into a treated group versus a control group is outside the control of the investigator. This is in contrast with experiments, such as randomized controlled trials, where each subject is randomly assigned to a treated group or a control group. Because they lack an assignment mechanism, observational studies naturally present difficulties for inferential analysis.
In epidemiology, reporting bias is defined as "selective revealing or suppression of information" by subjects. In artificial intelligence research, the term reporting bias is used to refer to people's tendency to under-report all the information available.
In statistics, missing data, or missing values, occur when no data value is stored for the variable in an observation. Missing data are a common occurrence and can have a significant effect on the conclusions that can be drawn from the data.
In natural and social science research, a protocol is most commonly a predefined procedural method in the design and implementation of an experiment. Protocols are written whenever it is desirable to standardize a laboratory method to ensure successful replication of results by others in the same laboratory or by other laboratories. Additionally, and by extension, protocols have the advantage of facilitating the assessment of experimental results through peer review. In addition to detailed procedures, equipment, and instruments, protocols will also contain study objectives, reasoning for experimental design, reasoning for chosen sample sizes, safety precautions, and how results were calculated and reported, including statistical analysis and any rules for predefining and documenting excluded data to avoid bias.
Impact evaluation assesses the changes that can be attributed to a particular intervention, such as a project, program or policy, both the intended ones, as well as ideally the unintended ones. In contrast to outcome monitoring, which examines whether targets have been achieved, impact evaluation is structured to answer the question: how would outcomes such as participants' well-being have changed if the intervention had not been undertaken? This involves counterfactual analysis, that is, "a comparison between what actually happened and what would have happened in the absence of the intervention." Impact evaluations seek to answer cause-and-effect questions. In other words, they look for the changes in outcome that are directly attributable to a program.
Repeated measures design is a research design that involves multiple measures of the same variable taken on the same or matched subjects either under different conditions or over two or more time periods. For instance, repeated measurements are collected in a longitudinal study in which change over time is assessed.
In medicine, a stepped-wedge trial is a type of randomised controlled trial (RCT). An RCT is a scientific experiment that is designed to reduce bias when testing a new medical treatment, a social intervention, or another testable hypothesis.
Roderick Joseph Alexander Little is an academic statistician, whose main research contributions lie in the statistical analysis of data with missing values and the analysis of complex sample survey data. Little is Richard D. Remington Distinguished University Professor of Biostatistics in the Department of Biostatistics at the University of Michigan, where he also holds academic appointments in the Department of Statistics and the Institute for Social Research.
In an adaptive design of a clinical trial, the parameters and conduct of the trial for a candidate drug or vaccine may be changed based on an interim analysis. Adaptive design typically involves advanced statistics to interpret a clinical trial endpoint. This is in contrast to traditional single-arm clinical trials or randomized clinical trials (RCTs) that are static in their protocol and do not modify any parameters until the trial is completed. The adaptation process takes place at certain points in the trial, prescribed in the trial protocol. Importantly, this trial protocol is set before the trial begins with the adaptation schedule and processes specified. Adaptations may include modifications to: dosage, sample size, drug undergoing trial, patient selection criteria and/or "cocktail" mix. The PANDA provides not only a summary of different adaptive designs, but also comprehensive information on adaptive design planning, conduct, analysis and reporting.