Research design

Research design refers to the overall strategy used to answer research questions. A research design typically outlines the theories and models underlying a project; the research question(s) of the project; a strategy for gathering data and information; and a strategy for producing answers from the data. [1] A strong research design yields valid answers to research questions, while a weak design yields unreliable, imprecise or irrelevant answers. [1]

What is incorporated into the design of a research study depends on the researcher's standpoint regarding the nature of knowledge (see epistemology) and reality (see ontology), often shaped by the disciplinary areas to which the researcher belongs. [2] [3]

The design of a study defines the study type (descriptive, correlational, semi-experimental, experimental, review, meta-analytic) and sub-type (e.g., descriptive-longitudinal case study), the research problem, hypotheses, independent and dependent variables, experimental design, and, if applicable, data collection methods and a statistical analysis plan. [4] In short, a research design is a framework created to find answers to research questions.

Design types and sub-types

There are many ways to classify research designs. Nonetheless, the list below offers a number of useful distinctions between possible research designs. A research design can be understood as an arrangement of the conditions for the collection and analysis of data. [5]

Sometimes a distinction is made between "fixed" and "flexible" designs. In some cases, these types coincide with quantitative and qualitative research designs respectively, [6] though this need not be the case. In fixed designs, the design of the study is fixed before the main stage of data collection takes place. Fixed designs are normally theory-driven; otherwise, it is impossible to know in advance which variables need to be controlled and measured. Often, these variables are measured quantitatively. Flexible designs allow for more freedom during the data collection process. One reason for using a flexible research design can be that the variable of interest is not quantitatively measurable, such as culture. In other cases, the theory might not be available before one starts the research.

Grouping

The choice of how to group participants depends on the research hypothesis and on how the participants are sampled. In a typical experimental study, there will be at least one "experimental" condition (e.g., "treatment") and one "control" condition ("no treatment"), but the appropriate method of grouping may depend on factors such as the duration of the measurement phase and participant characteristics.

Confirmatory versus exploratory research

Confirmatory research tests a priori hypotheses, that is, outcome predictions made before the measurement phase begins. Such a priori hypotheses are usually derived from a theory or the results of previous studies. The advantage of confirmatory research is that the result is more meaningful: because the prediction was fixed in advance, it is much harder to dismiss the result as a coincidence of the particular data set, and easier to argue that it generalizes beyond it. The reason is that in confirmatory research, one ideally strives to reduce the probability of falsely reporting a coincidental result as meaningful. This probability is known as the α-level, or the probability of a type I error.

Exploratory research, on the other hand, seeks to generate a posteriori hypotheses by examining a data set and looking for potential relations between variables. It is also possible to have an idea about a relation between variables but to lack knowledge of the direction and strength of the relation. If the researcher does not have any specific hypotheses beforehand, the study is exploratory with respect to the variables in question (although it might be confirmatory for others). The advantage of exploratory research is that it is easier to make new discoveries, thanks to its less stringent methodological restrictions. Here, the researcher does not want to miss a potentially interesting relation and therefore aims to minimize the probability of rejecting a real effect or relation; this probability is sometimes referred to as β, and the associated error is a type II error. In other words, a researcher who simply wants to see whether some measured variables could be related would want to increase the chances of finding a significant result by lowering the threshold of what is deemed significant.
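The trade-off between the two error types can be sketched with a small Monte Carlo simulation (an illustrative example with assumed numbers, not drawn from the cited sources): relaxing the significance threshold α makes it easier to detect a real effect (lower β) at the cost of more false positives.

```python
# Illustrative sketch: how the significance threshold (alpha) trades
# type I error against type II error in a simple one-sample z-test.
import math
import random
import statistics

random.seed(42)

def z_test_p(sample, mu0=0.0, sigma=1.0):
    """Two-sided p-value for H0: mean == mu0, with known sigma."""
    n = len(sample)
    z = (statistics.fmean(sample) - mu0) / (sigma / math.sqrt(n))
    # Two-sided p-value from the standard normal CDF (via math.erf).
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def error_rates(alpha, true_mean, n=30, trials=2000):
    """Estimate type I and type II error rates by simulation."""
    # H0 is true (mean 0): any rejection is a type I error.
    type1 = sum(
        z_test_p([random.gauss(0, 1) for _ in range(n)]) < alpha
        for _ in range(trials)
    ) / trials
    # H0 is false (mean = true_mean): any non-rejection is a type II error.
    type2 = sum(
        z_test_p([random.gauss(true_mean, 1) for _ in range(n)]) >= alpha
        for _ in range(trials)
    ) / trials
    return type1, type2

for alpha in (0.01, 0.05, 0.20):
    t1, t2 = error_rates(alpha, true_mean=0.3)
    print(f"alpha={alpha:.2f}  type I ~ {t1:.3f}  type II ~ {t2:.3f}")
```

As α grows, the estimated type I rate rises toward α while the type II rate falls, which is exactly the exploratory researcher's bargain described above.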

Sometimes, a researcher may conduct exploratory research but report it as if it had been confirmatory ('Hypothesizing After the Results are Known', HARKing [7] —see Hypotheses suggested by the data); this is a questionable research practice bordering on fraud.

State problems versus process problems

A distinction can be made between state problems and process problems. State problems aim to answer what the state of a phenomenon is at a given time, while process problems deal with the change of phenomena over time. Examples of state problems are the level of mathematical skills of sixteen-year-old children, the computer skills of the elderly, the depression level of a person, etc. Examples of process problems are the development of mathematical skills from puberty to adulthood, the change in computer skills when people get older, and how depression symptoms change during therapy.

State problems are easier to measure than process problems. State problems just require one measurement of the phenomena of interest, while process problems always require multiple measurements. Research designs such as repeated measurements and longitudinal study are needed to address process problems.

Examples of fixed designs

Experimental research designs

In an experimental design, the researcher actively tries to change the situation, circumstances, or experience of participants (manipulation), which may lead to a change in behavior or outcomes for the participants of the study. The researcher randomly assigns participants to different conditions, measures the variables of interest, and tries to control for confounding variables. Therefore, experiments are often highly fixed even before the data collection starts.

In a good experimental design, a few things are of great importance. First of all, it is necessary to think of the best way to operationalize the variables that will be measured, as well as which statistical methods would be most appropriate to answer the research question. Thus, the researcher should consider what the expectations of the study are as well as how to analyze any potential results. Finally, the researcher must think of the practical limitations, including the availability of participants and how representative the participants are of the target population. It is important to consider each of these factors before beginning the experiment. [8] Additionally, many researchers perform a power analysis before conducting an experiment, in order to determine how large the sample must be to find an effect of a given size with a given design at the desired probabilities of making a type I or type II error. A power analysis also keeps the design efficient, since it prevents resources from being spent on a far larger sample than the question requires.
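Such an a priori power analysis can be sketched with the standard normal approximation (a minimal example with assumed effect sizes, not a procedure prescribed by the cited texts): the per-group sample size for a two-sided two-sample comparison follows from the desired α and power 1 − β.

```python
# Sketch of an a priori power analysis using the normal approximation:
# n per group = 2 * ((z_{1-alpha/2} + z_{power}) / d)^2,
# where d is the standardized effect size (Cohen's d).
import math
from statistics import NormalDist

def sample_size_per_group(d, alpha=0.05, power=0.80):
    """Approximate participants needed per group, two-sided two-sample test."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # critical value for alpha
    z_power = NormalDist().inv_cdf(power)          # quantile for desired power
    return math.ceil(2 * ((z_alpha + z_power) / d) ** 2)

# A medium effect (d = 0.5) needs far fewer participants than a small one (d = 0.2).
print(sample_size_per_group(0.5))
print(sample_size_per_group(0.2))
```

The approximation slightly understates the n that an exact t-test calculation would give, but it makes the planning logic visible: halving the expected effect size roughly quadruples the required sample.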

Non-experimental research designs

Non-experimental research designs do not involve a manipulation of the situation, circumstances or experience of the participants. Non-experimental research designs can be broadly classified into three categories. First, in relational designs, a range of variables is measured. These designs are also called correlational studies because correlational data are most often used in the analysis. Since correlation does not imply causation, such studies simply identify co-movements of variables. Correlational designs are helpful in identifying the relation of one variable to another and in seeing the frequency of co-occurrence in two natural groups (see Correlation and dependence). The second type is comparative research. These designs compare two or more groups on one or more variables, such as the effect of gender on grades. The third type of non-experimental research is a longitudinal design. A longitudinal design examines variables such as performance exhibited by a group or groups over time (see Longitudinal study).
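A relational design can be illustrated with a toy computation (the data and variable names below are hypothetical, invented for the example): two measured variables are summarized with a Pearson correlation coefficient, which quantifies co-movement without implying any causal direction.

```python
# Toy illustration of a correlational analysis: Pearson's r between two
# measured variables. A high r indicates co-movement, not causation.
import math

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical measurements: weekly study hours and exam scores for ten students.
hours = [2, 4, 5, 6, 6, 7, 8, 9, 10, 12]
scores = [51, 55, 60, 58, 64, 66, 71, 70, 75, 80]
print(round(pearson_r(hours, scores), 3))
```

Even a strong positive r here would not tell us whether studying raises scores, whether able students study more, or whether a third variable drives both; that is precisely the limit of a correlational design.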

Examples of flexible research designs

Case study

Famous case studies include, for example, Freud's descriptions of his patients, who were thoroughly analysed and described.

Bell (1999) states "a case study approach is particularly appropriate for individual researchers because it gives an opportunity for one aspect of a problem to be studied in some depth within a limited time scale". [9]

Grounded theory study

Grounded theory research is a systematic research process that works to develop "a process, an action or an interaction about a substantive topic". [10]


References

  1. Blair, Graeme; Coppock, Alexander; Humphreys, Macartan (2023), Research Design in the Social Sciences: Declaration, Diagnosis, and Redesign, Princeton University Press, doi:10.1515/9780691199580, ISBN 978-0-691-19958-0
  2. Wright, Sarah; O'Brien, Bridget C.; Nimmon, Laura; Law, Marcus; Mylopoulos, Maria (2016). "Research Design Considerations". Journal of Graduate Medical Education. 8 (1): 97–98. doi:10.4300/JGME-D-15-00566.1. ISSN   1949-8349. PMC   4763399 . PMID   26913111.
  3. Tobi, Hilde; Kampen, Jarl K. (2018). "Research design: the methodology for interdisciplinary research framework". Quality & Quantity. 52 (3): 1209–1225. doi:10.1007/s11135-017-0513-8. ISSN   0033-5177. PMC   5897493 . PMID   29674791.
  4. Creswell, John W. (2014). Research design : qualitative, quantitative, and mixed methods approaches (4th ed.). Thousand Oaks: SAGE Publications. ISBN   978-1-4522-2609-5.
  5. Muaz, Jalil Mohammad (2013), Practical Guidelines for conducting research. Summarizing good research practice in line with the DCED Standard
  6. Robson, C. (1993). Real-world research: A resource for social scientists and practitioner-researchers. Malden: Blackwell Publishing.
  7. Diekmann, Andreas (2011). "Are Most Published Research Findings False?". Jahrbücher für Nationalökonomie und Statistik. 231 (5–6): 628–635. doi:10.1515/jbnst-2011-5-606. ISSN   2366-049X. S2CID   117338880.
  8. Adèr, H. J., Mellenbergh, G. J., & Hand, D. J. (2008). Advising on research methods: a consultant's companion. Huizen: Johannes van Kessel Publishing. ISBN   978-90-79418-01-5
  9. Bell, J. (1999). Doing your research project. Buckingham: OUP.
  10. Creswell, J.W. (2012). Educational research: Planning, conducting, and evaluating quantitative and qualitative research. Upper Saddle River, NJ: Prentice Hall.