In natural and social science research, a protocol is most commonly a predefined procedural method in the design and implementation of an experiment. Protocols are written whenever it is desirable to standardize a laboratory method to ensure successful replication of results by others in the same laboratory or by other laboratories. [1] [2] Additionally, and by extension, protocols have the advantage of facilitating the assessment of experimental results through peer review. [3] In addition to detailed procedures, equipment, and instruments, protocols also contain study objectives, the reasoning behind the experimental design and the chosen sample sizes, safety precautions, and how results are to be calculated and reported, including the statistical analyses and any rules for predefining and documenting excluded data to avoid bias. [2]
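Where a protocol calls for predefined data-exclusion rules, those rules can even be scripted before any data are collected, so that exclusions are applied and documented mechanically rather than at the analyst's discretion. The following is a minimal sketch in Python; the field names, validity range, and rule wording are invented for illustration and are not drawn from any particular protocol.

```python
# Minimal sketch of a prespecified data-exclusion rule, as a protocol might require.
# The limits and labels below are hypothetical, not taken from any real protocol.

EXCLUSION_RULE = {
    "reason": "reading outside the instrument's validated range (prespecified in the protocol)",
    "min_valid": 0.0,
    "max_valid": 50.0,
}

def apply_exclusion_rule(readings):
    """Split readings into retained and excluded sets, documenting each exclusion."""
    retained, excluded = [], []
    for sample_id, value in readings:
        if EXCLUSION_RULE["min_valid"] <= value <= EXCLUSION_RULE["max_valid"]:
            retained.append((sample_id, value))
        else:
            excluded.append({"sample": sample_id, "value": value,
                             "reason": EXCLUSION_RULE["reason"]})
    return retained, excluded

retained, excluded = apply_exclusion_rule([("S1", 12.3), ("S2", 61.7), ("S3", 7.9)])
print(f"retained={len(retained)}, excluded={excluded}")
```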
Similarly, a protocol may refer to the procedural methods of health organizations, commercial laboratories, manufacturing plants, etc. to ensure their activities (e.g., blood testing at a hospital, testing of certified reference materials at a calibration laboratory, and manufacturing of transmission gears at a facility) are consistent with a specific standard, encouraging safe use and accurate results. [4] [5] [6]
Finally, in the field of social science, a protocol may also refer to a "descriptive record" of observed events [7] [8] or a "sequence of behavior" [9] of one or more organisms, recorded during or immediately after an activity (e.g., how an infant reacts to certain stimuli or how gorillas behave in their natural habitat) to better identify "consistent patterns and cause-effect relationships." [7] [10] These protocols may take the form of hand-written journals or electronically documented media, including video and audio capture. [7] [10]
Various fields of science, such as environmental science and clinical research, require the coordinated, standardized work of many participants. Additionally, any associated laboratory testing and experimentation must be conducted in a way that is both ethically sound and replicable by others using the same methods and equipment. As such, rigorous and vetted testing and experimental protocols are required. In fact, such predefined protocols are an essential component of Good Laboratory Practice (GLP) [11] and Good Clinical Practice (GCP) [12] [13] regulations. Protocols written for use by a specific laboratory may incorporate or reference standard operating procedures (SOPs) governing general practices required by the laboratory. A protocol may also reference the laws and regulations applicable to the procedures described. Formal protocols typically require approval by one or more individuals (including, for example, a laboratory director, study director, [11] and/or independent ethics committee [12]: 12 ) before they are implemented for general use. Clearly defined protocols are also required for research funded by the National Institutes of Health. [14]
In a clinical trial, the protocol is carefully designed to safeguard the health of the participants as well as to answer specific research questions. A protocol describes what types of people may participate in the trial; the schedule of tests, procedures, medications, and dosages; and the length of the study. Participants in a clinical trial who follow a protocol are seen regularly by research staff to monitor their health and to determine the safety and effectiveness of their treatment. [11] [12] Since 1996, clinical trials have been widely expected to conform to and report the information called for in the CONSORT Statement, which provides a framework for designing and reporting protocols. [15] Though tailored to health and medicine, the ideas in the CONSORT Statement are broadly applicable to other fields where experimental research is used.
Protocols will often address matters such as blinding, controls, and the handling of potential sources of bias, which are discussed below. [2] [11] [12]
Best practice recommends publishing the protocol of a review before initiating it, to reduce the risk of unplanned research duplication and to enable transparency and consistency between methodology and protocol. [17]
A protocol may require blinding to avoid bias. [16] [18] A blind can be imposed on any participant of an experiment, including subjects, researchers, technicians, data analysts, and evaluators. In some cases, while blinding would be useful, it is impossible or unethical. A good clinical protocol ensures that blinding is as effective as possible within ethical and practical constraints. [19]
During the course of an experiment, a participant becomes unblinded if they deduce or otherwise obtain information that has been masked from them. Unblinding that occurs before the conclusion of a study is a source of experimental error, as the bias that was eliminated by blinding is re-introduced. Unblinding is common in blinded experiments and must be measured and reported. Reporting guidelines recommend that all studies assess and report unblinding; in practice, very few studies do so. [20]
An experimenter may have latitude in defining procedures for blinding and controls, but may be required to justify those choices if the results are published or submitted to a regulatory agency. When it is known during the experiment which data are negative, it is often easy to rationalize why those data should not be included; positive data are rarely rationalized away in the same manner.
An experiment is a procedure carried out to support or refute a hypothesis, or determine the efficacy or likelihood of something previously untried. Experiments provide insight into cause-and-effect by demonstrating what outcome occurs when a particular factor is manipulated. Experiments vary greatly in goal and scale but always rely on repeatable procedure and logical analysis of the results. There also exist natural experimental studies.
A randomized controlled trial is a form of scientific experiment used to control factors not under direct experimental control. Examples of RCTs are clinical trials that compare the effects of drugs, surgical techniques, medical devices, diagnostic procedures, diets or other medical treatments.
Statistical bias, in the mathematical field of statistics, is a systematic tendency in which the methods used to gather data and generate statistics present an inaccurate, skewed or biased depiction of reality. Statistical bias exists in numerous stages of the data collection and analysis process, including: the source of the data, the methods used to collect the data, the estimator chosen, and the methods used to analyze the data. Data analysts can take various measures at each stage of the process to reduce the impact of statistical bias in their work. Understanding the source of statistical bias can help to assess whether the observed results are close to actuality. Issues of statistical bias have been argued to be closely linked to issues of statistical validity.
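For the estimator-level sense of the term, the standard textbook definition (not specific to any study discussed here) can be stated briefly:

```latex
% Bias of an estimator \hat{\theta} of a parameter \theta:
\operatorname{Bias}(\hat{\theta}) = \mathbb{E}[\hat{\theta}] - \theta .

% Classic example: the sample variance with divisor n systematically underestimates \sigma^2,
\mathbb{E}\!\left[\frac{1}{n}\sum_{i=1}^{n}\bigl(X_i-\bar{X}\bigr)^2\right] = \frac{n-1}{n}\,\sigma^2 ,
% which is why dividing by n-1 (Bessel's correction) is used to remove the bias.
```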
In a blind or blinded experiment, information which may influence the participants of the experiment is withheld until after the experiment is complete. Good blinding can reduce or eliminate experimental biases that arise from participants' expectations, the observer's effect on the participants, observer bias, confirmation bias, and other sources. A blind can be imposed on any participant of an experiment, including subjects, researchers, technicians, data analysts, and evaluators. In some cases, while blinding would be useful, it is impossible or unethical. For example, it is not possible to blind a patient to their treatment in a physical therapy intervention. A good clinical protocol ensures that blinding is as effective as possible within ethical and practical constraints.
Selection bias is the bias introduced by the selection of individuals, groups, or data for analysis in such a way that proper randomization is not achieved, thereby failing to ensure that the sample obtained is representative of the population intended to be analyzed. It is sometimes referred to as the selection effect. The phrase "selection bias" most often refers to the distortion of a statistical analysis, resulting from the method of collecting samples. If the selection bias is not taken into account, then some conclusions of the study may be false.
In published academic research, publication bias occurs when the outcome of an experiment or research study biases the decision to publish or otherwise distribute it. Publishing only results that show a significant finding disturbs the balance of findings in favor of positive results. The study of publication bias is an important topic in metascience.
Observer bias is one of the types of detection bias and is defined as any kind of systematic divergence from accurate facts during observation and the recording of data and information in studies. The definition can be further expanded upon to include the systematic difference between what is observed due to variation in observers, and what the true value is.
A scientific control is an experiment or observation designed to minimize the effects of variables other than the independent variable. This increases the reliability of the results, often through a comparison between control measurements and the other measurements. Scientific controls are a part of the scientific method.
External validity is the validity of applying the conclusions of a scientific study outside the context of that study. In other words, it is the extent to which the results of a study can generalize or transport to other situations, people, stimuli, and times. Generalizability refers to the applicability of a predefined sample to a broader population while transportability refers to the applicability of one sample to another target population. In contrast, internal validity is the validity of conclusions drawn within the context of a particular study.
Random assignment or random placement is an experimental technique for assigning human participants or animal subjects to different groups in an experiment using randomization, such as by a chance procedure or a random number generator. This ensures that each participant or subject has an equal chance of being placed in any group. Random assignment of participants helps to ensure that any differences between and within the groups are not systematic at the outset of the experiment. Thus, any differences between groups recorded at the end of the experiment can be more confidently attributed to the experimental procedures or treatment.
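As an illustration only, a chance procedure of this kind can be implemented in a few lines of Python; the participant identifiers and group labels below are hypothetical.

```python
import random

def randomly_assign(participant_ids, groups=("treatment", "control"), seed=None):
    """Assign each participant to a group purely by chance, in roughly equal numbers."""
    rng = random.Random(seed)          # seed only to make the example reproducible
    ids = list(participant_ids)
    rng.shuffle(ids)                   # a chance procedure: random permutation
    assignment = {}
    for index, pid in enumerate(ids):
        assignment[pid] = groups[index % len(groups)]  # alternate down the shuffled list
    return assignment

print(randomly_assign(["P01", "P02", "P03", "P04", "P05", "P06"], seed=42))
```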
Clinical study design is the formulation of trials and experiments, as well as observational studies in medical, clinical and other types of research involving human beings. The goal of a clinical study is to assess the safety, efficacy, and/or mechanism of action of an investigational medicinal product (IMP) or procedure, or new drug or device that is in development, but potentially not yet approved by a health authority. It can also be to investigate a drug, device or procedure that has already been approved but is still in need of further investigation, typically with respect to long-term effects or cost-effectiveness.
In causal inference, a confounder is a variable that influences both the dependent variable and independent variable, causing a spurious association. Confounding is a causal concept, and as such, cannot be described in terms of correlations or associations. The existence of confounders is an important quantitative explanation why correlation does not imply causation. Some notations are explicitly designed to identify the existence, possible existence, or non-existence of confounders in causal relationships between elements of a system.
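A small simulation can make the point concrete. In the hedged sketch below, the variable names and the example interpretation (a confounder Z such as age, an exposure X, and an outcome Y) are invented: Z influences both X and Y, X has no direct effect on Y, and yet X and Y end up correlated.

```python
import random

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

rng = random.Random(0)

# Hypothetical confounder Z drives both X and Y; X has no direct effect on Y.
z = [rng.gauss(0, 1) for _ in range(10_000)]
x = [zi + rng.gauss(0, 1) for zi in z]
y = [zi + rng.gauss(0, 1) for zi in z]

# The spurious association induced by Z is roughly 0.5 under this setup.
print("corr(X, Y) despite no direct effect:", round(pearson(x, y), 2))
```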
In fields such as epidemiology, social sciences, psychology and statistics, an observational study draws inferences from a sample to a population where the independent variable is not under the control of the researcher because of ethical concerns or logistical constraints. One common observational study is about the possible effect of a treatment on subjects, where the assignment of subjects into a treated group versus a control group is outside the control of the investigator. This is in contrast with experiments, such as randomized controlled trials, where each subject is randomly assigned to a treated group or a control group. Because they lack an assignment mechanism, observational studies naturally present difficulties for inferential analysis.
A test method is a method for a test in science or engineering, such as a physical test, chemical test, or statistical test. It is a definitive procedure that produces a test result. In order to ensure accurate and relevant test results, a test method should be "explicit, unambiguous, and experimentally feasible", as well as effective and reproducible.
A Clinical Research Coordinator (CRC) is a person responsible for conducting clinical trials using good clinical practice (GCP) under the auspices of a Principal Investigator (PI).
A glossary of terms used in clinical research.
The Jadad scale, sometimes known as Jadad scoring or the Oxford quality scoring system, is a procedure to assess the methodological quality of a clinical trial by objective criteria. It is named after Canadian-Colombian physician Alex Jadad, who in 1996 described a system for allocating such trials a score between zero and five (five being the most rigorous). It is the most widely used such assessment in the world, and as of May 2024, its seminal paper has been cited in over 24,500 scientific works.
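As an informal illustration of how the scoring works, the sketch below paraphrases the scale's items in Python. It is a simplification, not an official implementation, and real appraisal requires judgment about whether the described methods are in fact appropriate.

```python
def jadad_score(randomized, randomization_appropriate, double_blind,
                blinding_appropriate, withdrawals_described):
    """Simplified paraphrase of the Jadad items; appropriateness flags may be
    True, False, or None (None = method not described, so no bonus or penalty)."""
    score = 0
    if randomized:
        score += 1
        if randomization_appropriate is True:
            score += 1
        elif randomization_appropriate is False:
            score -= 1
    if double_blind:
        score += 1
        if blinding_appropriate is True:
            score += 1
        elif blinding_appropriate is False:
            score -= 1
    if withdrawals_described:
        score += 1
    return max(score, 0)   # reported scores range from 0 to 5

# A trial described as randomized and double-blind, with appropriate methods
# and an account of all withdrawals, receives the maximum score of 5.
print(jadad_score(True, True, True, True, True))
```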
An N of 1 trial (N=1) is a multiple crossover clinical trial, conducted in a single patient. A trial in which random allocation is used to determine the order in which an experimental and a control intervention are given to a single patient is an N of 1 randomized controlled trial. Some N of 1 trials involve randomized assignment and blinding, but the order of experimental and control interventions can also be fixed by the researcher.
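For illustration, the random-allocation variant can be sketched in a few lines of Python: each cycle contains both the experimental intervention (here labelled A) and the control (B), and only their order within the cycle is decided by chance. The labels and cycle count are arbitrary.

```python
import random

def n_of_1_schedule(n_cycles=3, seed=None):
    """Randomize the order of experimental (A) and control (B) within each cycle
    for a single patient, as in an N of 1 randomized controlled trial."""
    rng = random.Random(seed)
    schedule = []
    for _ in range(n_cycles):
        pair = ["A", "B"]
        rng.shuffle(pair)      # chance decides which intervention comes first
        schedule.extend(pair)
    return schedule

print(n_of_1_schedule(seed=7))  # one chance ordering of A and B across three cycles
```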
Placebo-controlled studies are a way of testing a medical therapy in which, in addition to a group of subjects that receives the treatment to be evaluated, a separate control group receives a sham "placebo" treatment which is specifically designed to have no real effect. Placebos are most commonly used in blinded trials, where subjects do not know whether they are receiving real or placebo treatment. Often, there is also a further "natural history" group that does not receive any treatment at all.
Preregistration is the practice of registering the hypotheses, methods, or analyses of a scientific study before it is conducted. Clinical trial registration is similar, although it may not require the registration of a study's analysis protocol. Finally, registered reports include the peer review and in principle acceptance of a study protocol prior to data collection.
The National Center for Complementary and Integrative Health (NCCIH) requires that study investigators submit a final protocol document for all funded clinical projects.