Common source bias

Common source bias is a type of sampling bias that occurs when both the dependent and independent variables are collected from the same group of people. It can arise in many forms of research, such as surveys, experiments, and observational studies. [1] Some scholars regard common source bias as a significant concern for any study, as it can lead to unreliable results, and argue that it must therefore be controlled. [2] [3] [4] [5] It is most prevalent in public administration research, where performance measures subject to common source bias can produce false positives when organisational performance is evaluated using explanatory and perceptual measures drawn from the same source. [6]

Occurrence

Two related forms of bias are commonly distinguished: common method bias, also known as common method variance, and common source bias. Common method bias occurs when the same method or instrument is used to collect data from multiple sources, which can lead to an over-representation of certain factors. Common source bias occurs when the information or data collected is influenced by a single source, such as a single individual, group, or organisation.
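The distorting effect of a shared source can be illustrated with a small simulation (an illustrative sketch with assumed parameter values, not drawn from the cited literature): two constructs that are truly uncorrelated appear correlated once both observed measures pick up a shared source factor, such as a respondent's general positivity.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# True (latent) constructs: independent by construction.
x_true = rng.normal(size=n)
y_true = rng.normal(size=n)

# A shared "source" factor, e.g. a respondent's general positivity,
# that leaks into every measure collected from the same person.
# The 0.8 loading is an illustrative assumption.
source = rng.normal(size=n)

# Observed measures: construct + shared source factor + measurement noise.
x_obs = x_true + 0.8 * source + rng.normal(scale=0.5, size=n)
y_obs = y_true + 0.8 * source + rng.normal(scale=0.5, size=n)

true_r = np.corrcoef(x_true, y_true)[0, 1]  # near zero
obs_r = np.corrcoef(x_obs, y_obs)[0, 1]     # substantially positive
print(f"true correlation:     {true_r:.3f}")
print(f"observed correlation: {obs_r:.3f}")
```

Because the shared source factor enters both observed measures, their correlation is pushed well above the near-zero true correlation, which is the false-positive pattern described above.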

One of the major causes of common source bias is the influence of the source on the data collected. [7] For example, if a survey is conducted by a single individual, that individual's beliefs, biases, and perspectives can influence the responses of the participants. Such "self-reporting" is subjective and limited because it is based on the attitudes, values, and behaviours of the individual. [8] [9]

Common source bias can also arise in participant selection. If participants are selected based on their association with the source, their responses may be biased towards the source's perspective. If participants are selected based on their willingness to participate, their responses may not be representative of the population as a whole.
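The willingness-to-participate problem can likewise be sketched with a hypothetical simulation (all numbers are illustrative assumptions): if people with favourable attitudes towards the surveying organisation are more likely to respond, the respondents' average attitude overstates the population average.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Hypothetical population: an attitude score we want to estimate,
# standardised to mean 0 in the population.
attitude = rng.normal(loc=0.0, scale=1.0, size=n)

# Assumed selection mechanism: willingness to participate rises
# with favourable attitudes (logistic link, slope 1.5 is illustrative).
p_respond = 1.0 / (1.0 + np.exp(-1.5 * attitude))
responded = rng.random(n) < p_respond

pop_mean = attitude.mean()                # close to 0
sample_mean = attitude[responded].mean()  # noticeably above 0
print(f"population mean: {pop_mean:.3f}")
print(f"respondent mean: {sample_mean:.3f}")
```

The gap between the two means is the selection bias: the sample of willing participants is systematically different from the population it is meant to represent.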

Remedies

Ex ante remedies

A recently proposed ex ante remedy for common source bias is to supplement survey data with administrative and/or archival data. However, most of the relevant studies conclude that none of the proposed statistical remedies reliably counter the bias, [10] [11] leaving the field without a comprehensive methodology for controlling method biases. [12] [13]
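As a sketch of what supplementing survey data with archival data might look like in practice (the organisation IDs, column names, and values below are all hypothetical), the perceptual measure and an independently sourced performance measure are joined on a common identifier, so that the dependent and independent variables no longer share a source.

```python
import pandas as pd

# Hypothetical survey data: managers' self-reported perceptions.
survey = pd.DataFrame({
    "org_id": [1, 2, 3],
    "perceived_quality": [4.2, 3.8, 4.9],  # average of 1-5 Likert items
})

# Hypothetical administrative records from an independent source,
# e.g. audited performance figures for the same organisations.
archival = pd.DataFrame({
    "org_id": [1, 2, 3],
    "audited_output": [1020, 870, 1310],
})

# Linking the two sources means the dependent variable (audited_output)
# no longer comes from the same respondents as the independent variable.
linked = survey.merge(archival, on="org_id", how="inner")
print(linked)
```

Analyses run on the linked data relate perceptual predictors to an outcome measured by a different source, which is the core of the ex ante remedy described above.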

Related Research Articles

Meta-analysis: statistical method that summarizes and/or integrates data from multiple sources

Meta-analysis is the statistical combination of the results of multiple studies addressing a similar research question. An important part of this method involves computing an effect size across all of the studies; this involves extracting effect sizes and variance measures from various studies. Meta-analyses are integral in supporting research grant proposals, shaping treatment guidelines, and influencing health policies. They are also pivotal in summarizing existing research to guide future studies, thereby cementing their role as a fundamental methodology in metascience. Meta-analyses are often, but not always, important components of a systematic review procedure. For instance, a meta-analysis may be conducted on several clinical trials of a medical treatment, in an effort to obtain a better understanding of how well the treatment works.

Survey methodology is "the study of survey methods". As a field of applied statistics concentrating on human-research surveys, survey methodology studies the sampling of individual units from a population and associated techniques of survey data collection, such as questionnaire construction and methods for improving the number and accuracy of responses to surveys. Survey methodology targets instruments or procedures that ask one or more questions that may or may not be answered.

Observer bias is one of the types of detection bias and is defined as any kind of systematic divergence from accurate facts during observation and the recording of data and information in studies. The definition can be further expanded upon to include the systematic difference between what is observed due to variation in observers, and what the true value is.

Response bias

Response bias is a general term for a wide range of tendencies for participants to respond inaccurately or falsely to questions. These biases are prevalent in research involving participant self-report, such as structured interviews or surveys. Response biases can have a large impact on the validity of questionnaires or surveys.

In industrial and organizational psychology, organizational citizenship behavior (OCB) is a person's voluntary commitment within an organization or company that is not part of his or her contractual tasks. Organizational citizenship behavior has been studied since the late 1970s. Over the past three decades, interest in these behaviors has increased substantially.

In social science research, social-desirability bias is a type of response bias: the tendency of survey respondents to answer questions in a manner that will be viewed favorably by others. It can take the form of over-reporting "good" behavior or under-reporting "bad" or undesirable behavior. This tendency poses a serious problem for research based on self-reports. The bias interferes with the interpretation of average tendencies as well as individual differences.

Transfer of training is applying knowledge and skills acquired during training to a targeted job or role. This is a term commonly used within industrial and organizational psychology.

Acquiescence bias, also known as agreement bias, is a category of response bias common to survey research in which respondents have a tendency to select a positive response option or indicate a positive connotation disproportionately more frequently. Respondents do so without considering the content of the question or their 'true' preference. Acquiescence is sometimes referred to as "yea-saying" and is the tendency of a respondent to agree with a statement when in doubt. Questions affected by acquiescence bias take the following format: a stimulus in the form of a statement is presented, followed by 'agree/disagree,' 'yes/no' or 'true/false' response options. For example, a respondent might be presented with the statement "gardening makes me feel happy," and would then be expected to select either 'agree' or 'disagree.' Such question formats are favoured by both survey designers and respondents because they are straightforward to produce and respond to. The bias is particularly prevalent in the case of surveys or questionnaires that employ truisms as the stimuli, such as: "It is better to give than to receive" or "Never a lender nor a borrower be". Acquiescence bias can introduce systematic errors that affect the validity of research by confounding attitudes and behaviours with the general tendency to agree, which can result in misguided inference. Research suggests that the proportion of respondents who carry out this behaviour is between 10% and 20%.

In natural and social science research, a protocol is most commonly a predefined procedural method in the design and implementation of an experiment. Protocols are written whenever it is desirable to standardize a laboratory method to ensure successful replication of results by others in the same laboratory or by other laboratories. Additionally, and by extension, protocols have the advantage of facilitating the assessment of experimental results through peer review. In addition to detailed procedures, equipment, and instruments, protocols will also contain study objectives, reasoning for experimental design, reasoning for chosen sample sizes, safety precautions, and how results were calculated and reported, including statistical analysis and any rules for predefining and documenting excluded data to avoid bias.

Participation bias or non-response bias is a phenomenon in which the results of elections, studies, polls, etc. become non-representative because the participants disproportionately possess certain traits which affect the outcome. These traits mean the sample is systematically different from the target population, potentially resulting in biased estimates.

The multitrait-multimethod (MTMM) matrix is an approach to examining construct validity developed by Campbell and Fiske (1959). It organizes convergent and discriminant validity evidence for comparison of how a measure relates to other measures. The conceptual approach has influenced experimental design and measurement theory in psychology, including applications in structural equation models.

With the application of probability sampling in the 1930s, surveys became a standard tool for empirical research in social sciences, marketing, and official statistics. The methods involved in survey data collection are any of a number of ways in which data can be collected for a statistical survey. These are methods that are used to collect information from a sample of individuals in a systematic way. First there was the change from traditional paper-and-pencil interviewing (PAPI) to computer-assisted interviewing (CAI). Now, face-to-face surveys (CAPI), telephone surveys (CATI), and mail surveys are increasingly replaced by web surveys. In addition, remote interviewers could possibly keep the respondent engaged while reducing cost as compared to in-person interviewers.

In applied statistics, common-method variance (CMV) is the spurious "variance that is attributable to the measurement method rather than to the constructs the measures are assumed to represent" or, equivalently, "systematic error variance shared among variables measured with and introduced as a function of the same method and/or source". For example, an electronic survey method might influence results for those who might be unfamiliar with an electronic survey interface differently than for those who might be familiar. If measures are affected by CMV or common-method bias, the intercorrelations among them can be inflated or deflated depending upon several factors. Although it is sometimes assumed that CMV affects all variables, evidence suggests that whether or not the correlation between two variables is affected by CMV is a function of both the method and the particular constructs being measured.

Event sampling methodology (ESM) refers to a diary study. ESM is also known as ecological momentary assessment (EMA) or experience sampling methodology. ESM includes sampling methods that allow researchers to study ongoing experiences and events by taking assessments one or more times per day per participant (n=1) in the naturally occurring social environment. ESM enables researchers to study the prevalence of behaviors, promote theory development, and serve an exploratory role. The frequent sampling of events inherent in ESM enables researchers to measure the typology of activity and detect the temporal and dynamic fluctuations of experiences. The popularity of ESM as a form of research design has increased in recent years because it addresses the shortcomings of cross-sectional research, which cannot detect intra-individual variance, processes across time, or cause-effect relationships. In ESM, participants are asked to record their experiences and perceptions in a paper or electronic diary. Diary studies allow for the studying of events that occur naturally but are difficult to examine in the lab. For conducting event sampling, SurveySignal and Expimetrics are becoming popular platforms for social science researchers.

Meta-regression is defined to be a meta-analysis that uses regression analysis to combine, compare, and synthesize research findings from multiple studies while adjusting for the effects of available covariates on a response variable. A meta-regression analysis aims to reconcile conflicting studies or corroborate consistent ones; a meta-regression analysis is therefore characterized by the collated studies and their corresponding data sets—whether the response variable is study-level data or individual participant data. A data set is aggregate when it consists of summary statistics such as the sample mean, effect size, or odds ratio. On the other hand, individual participant data are in a sense raw in that all observations are reported with no abridgment and therefore no information loss. Aggregate data are easily compiled through internet search engines and therefore not expensive. However, individual participant data are usually confidential and are only accessible within the group or organization that performed the studies.

A diary study is a research method that collects qualitative information by having participants record entries about their everyday lives in a log, diary, or journal about the activity or experience being studied. This collection of data uses a longitudinal technique, meaning participants are studied over a period of time. Although this research tool cannot provide results as detailed as a true field study, it can still offer a vast amount of contextual information without the costs of a true field study. Diary studies are also known as experience sampling or ecological momentary assessment (EMA) methodology.

Philip Michael Podsakoff is an American management professor, researcher, author, and consultant who held the John F. Mee Chair of Management at Indiana University. Currently, he is the Hyatt and Cici Brown Chair in Business at the University of Florida.

Replication crisis: observed inability to reproduce scientific studies

The replication crisis is an ongoing methodological crisis in which the results of many scientific studies are difficult or impossible to reproduce. Because the reproducibility of empirical results is an essential part of the scientific method, such failures undermine the credibility of theories building on them and potentially call into question substantial parts of scientific knowledge.

Conservation of Resources (COR) Theory is a stress theory that describes the motivation that drives humans to both maintain their current resources and to pursue new resources. This theory was proposed by Dr. Stevan E. Hobfoll in 1989 as a way to expand on the literature of stress as a construct.

Wiki survey: survey method for crowdsourcing opinions

Wiki surveys or wikisurveys are a software-based survey method with similarity to how wikis evolve through crowdsourcing. In essence, they are surveys that allow participants to create the questions that are being asked. As participants engage in the survey they can either vote on a survey question or create a survey question. A single open-ended prompt written by the creator of the survey determines the topic the questions should be on. The first known implementation of a wiki survey was in 2010, and they have been used since then for a variety of purposes such as facilitating deliberative democracy, crowdsourcing opinions from experts and figuring out common beliefs on a given topic. A notable usage of wiki surveys is in Taiwan's government system, where citizens can participate in crowdsourced lawmaking through Polis wiki surveys.

References

  1. Baumgartner, Hans; Weijters, Bert (2021-06-01). "Dealing with Common Method Variance in International Marketing Research". Journal of International Marketing. 29 (3): 7–22. doi:10.1177/1069031X21995871. ISSN   1069-031X.
  2. Bagozzi, Richard P. (2011). "Measurement and Meaning in Information Systems and Organizational Research: Methodological and Philosophical Foundations". MIS Quarterly. 35 (2): 261–292. doi:10.2307/23044044. ISSN   0276-7783.
  3. Burton-Jones, Andrew (2009). "Minimizing Method Bias through Programmatic Research". MIS Quarterly. 33 (3): 445–471. doi:10.2307/20650304. ISSN   0276-7783.
  4. Podsakoff, Philip M.; Podsakoff, Nathan P.; Williams, Larry J.; Huang, Chengquan; Yang, Junhui (2024-01-22). "Common Method Bias: It's Bad, It's Complex, It's Widespread, and It's Not Easy to Fix". Annual Review of Organizational Psychology and Organizational Behavior. 11 (1): 17–61. doi:10.1146/annurev-orgpsych-110721-040030. ISSN   2327-0608.
  5. Favero, N.; Bullock, J. B. (2015). "How (Not) to Solve the Problem: An Evaluation of Scholarly Responses to Common Source Bias". Journal of Public Administration Research and Theory. pp. 285–308. doi:10.1093/jopart/muu020. Retrieved 2023-02-07.
  6. Andersen, Lotte Bøgh; Heinesen, Eskil; Pedersen, Lene Holm (2016). "Individual Performance: From Common Source Bias to Institutionalized Assessment". Journal of Public Administration Research and Theory. 26 (1): 63–78. ISSN   1053-1858. JSTOR   44165112.
  7. Craighead, Christopher W.; Ketchen, David J.; Dunn, Kaitlin S.; Hult, G. Tomas M. (2011-05-05). "Addressing Common Method Variance: Guidelines for Survey Research on Information Technology, Operations, and Supply Chain Management". IEEE Transactions on Engineering Management. 58 (3): 578–588. doi:10.1109/TEM.2011.2136437. ISSN   0018-9391.
  8. Cooper, Brian; Eva, Nathan; Zarea Fazlelahi, Forough; Newman, Alexander; Lee, Allan; Obschonka, Martin (2020-09-01). "Addressing common method variance and endogeneity in vocational behavior research: A review of the literature and suggestions for future research". Journal of Vocational Behavior. 121: 103472. doi:10.1016/j.jvb.2020.103472. ISSN   0001-8791.
  9. Podsakoff, Philip M.; Organ, Dennis W. (1986-12-01). "Self-Reports in Organizational Research: Problems and Prospects". Journal of Management. 12 (4): 531–544. doi:10.1177/014920638601200408. ISSN   0149-2063.
  10. Favero, Nathan; Bullock, Justin B. (2015). "How (Not) to Solve the Problem: An Evaluation of Scholarly Responses to Common Source Bias". Journal of Public Administration Research and Theory. pp. 285–308. doi:10.1093/jopart/muu020.
  11. Kim, Mirae; Daniel, Jamie Levine (2020-01-02). "Common Source Bias, Key Informants, and Survey-Administrative Linked Data for Nonprofit Management Research". Public Performance & Management Review. 43 (1): 232–256. doi:10.1080/15309576.2019.1657915. ISSN   1530-9576. S2CID   203468837.
  12. Podsakoff, Philip M.; Podsakoff, Nathan P.; Williams, Larry J.; Huang, Chengquan; Yang, Junhui (2024-01-22). "Common Method Bias: It's Bad, It's Complex, It's Widespread, and It's Not Easy to Fix". Annual Review of Organizational Psychology and Organizational Behavior. 11 (1): 17–61. doi:10.1146/annurev-orgpsych-110721-040030. ISSN   2327-0608.
  13. Podsakoff, Philip M.; MacKenzie, Scott B.; Lee, Jeong-Yeon; Podsakoff, Nathan P. (2003). "Common method biases in behavioral research: A critical review of the literature and recommended remedies". Journal of Applied Psychology. 88 (5): 879–903. doi:10.1037/0021-9010.88.5.879. ISSN   1939-1854.