Survey methodology

Survey methodology is "the study of survey methods". [1] As a field of applied statistics concentrating on human-research surveys, survey methodology studies the sampling of individual units from a population and associated techniques of survey data collection, such as questionnaire construction and methods for improving the number and accuracy of responses to surveys. Survey methodology targets instruments or procedures that ask one or more questions that may or may not be answered.

Researchers carry out statistical surveys with a view towards making statistical inferences about the population being studied; such inferences depend strongly on the survey questions used. Polls about public opinion, public-health surveys, market-research surveys, government surveys and censuses all exemplify quantitative research that uses survey methodology to answer questions about a population. Although censuses do not include a "sample", they do include other aspects of survey methodology, like questionnaires, interviewers, and non-response follow-up techniques. Surveys provide important information for all kinds of public-information and research fields, such as marketing research, psychology, health-care provision and sociology.

Overview

A single survey is made of at least a sample (or full population in the case of a census), a method of data collection (e.g., a questionnaire) and individual questions or items that become data that can be analyzed statistically. A single survey may focus on different types of topics such as preferences (e.g., for a presidential candidate), opinions (e.g., should abortion be legal?), behavior (smoking and alcohol use), or factual information (e.g., income), depending on its purpose. Since survey research is almost always based on a sample of the population, the success of the research is dependent on the representativeness of the sample with respect to a target population of interest to the researcher. That target population can range from the general population of a given country to specific groups of people within that country, to a membership list of a professional organization, or a list of students enrolled in a school system (see also sampling (statistics) and survey sampling). The persons replying to a survey are called respondents, and depending on the questions asked, their answers may represent themselves as individuals, their households, their employers, or other organizations to which they belong.

Survey methodology as a scientific field seeks to identify principles about sample design, data collection instruments, statistical adjustment of data, data processing, and final data analysis, each of which can introduce systematic and random survey errors. Survey errors are sometimes analyzed in connection with survey cost, a trade-off sometimes framed as improving quality within a cost constraint or, alternatively, as reducing costs for a fixed level of quality. Survey methodology is both a scientific field and a profession, meaning that some professionals in the field study survey errors empirically while others design surveys to reduce them. For survey designers, the task involves making a large set of decisions about thousands of individual features of a survey in order to improve it. [2]

The most important methodological challenges of a survey methodologist include making decisions on how to: [2]

  1. identify and select potential sample members,
  2. contact sampled individuals and collect data from those who are hard to reach (or reluctant to respond),
  3. evaluate and test questions,
  4. select the mode for posing questions and collecting responses,
  5. train and supervise interviewers,
  6. check data files for accuracy and internal consistency, and
  7. adjust survey estimates to correct for identified errors.

Selecting samples

The sample is chosen from the sampling frame, which consists of a list of all members of the population of interest. [3] The goal of a survey is not to describe the sample, but the larger population. This generalizing ability is dependent on the representativeness of the sample, as stated above. Each member of the population is termed an element. Choosing a representative sample is often difficult, and a common error that results is selection bias, which occurs when the procedures used to select the sample over-represent or under-represent some significant aspect of the population. For instance, if the population of interest consists of 75% females and 25% males, but the sample consists of 40% females and 60% males, then females are under-represented while males are over-represented. To minimize selection bias, stratified random sampling is often used: the population is divided into sub-populations called strata, and random samples are drawn from each stratum, often in proportion to its share of the population, as in the sketch below.
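
A minimal sketch of proportional stratified sampling in Python; the population, strata, and sizes are invented for illustration.

```python
import random

# Invented population: 75% female, 25% male, keyed by an ID.
population = {f"person_{i}": ("female" if i < 750 else "male") for i in range(1000)}

def stratified_sample(population, sample_size, seed=42):
    """Draw a sample whose strata proportions mirror the population's."""
    rng = random.Random(seed)
    strata = {}
    for element, stratum in population.items():
        strata.setdefault(stratum, []).append(element)
    sample = []
    for members in strata.values():
        # Allocate draws in proportion to the stratum's share of the population
        # (rounding can make the total differ slightly from sample_size).
        n = round(sample_size * len(members) / len(population))
        sample.extend(rng.sample(members, n))
    return sample

# Roughly 75 of the 100 sampled IDs will be female, matching the population.
sample = stratified_sample(population, sample_size=100)
```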

Modes of data collection

There are several ways of administering a survey. The choice between administration modes is influenced by several factors, including

  1. costs,
  2. coverage of the target population,
  3. flexibility of asking questions,
  4. respondents' willingness to participate and
  5. response accuracy.

Different methods create mode effects that change how respondents answer, and different methods have different advantages. The most common modes of administration can be summarized as: [4]

  1. telephone interviews,
  2. mail (postal) questionnaires,
  3. online (web) surveys,
  4. personal in-home interviews,
  5. personal mall or street intercept interviews, and
  6. hybrids of the above.

Research designs

There are several different designs, or overall structures, that can be used in survey research. The three general types are cross-sectional, successive independent samples, and longitudinal studies. [3]

Cross-sectional studies

In cross-sectional studies, a sample (or samples) is drawn from the relevant population and studied once. [3] A cross-sectional study describes the characteristics of that population at one point in time, but cannot provide insight into the causes of those characteristics because it is a predictive, correlational design.

Successive independent samples studies

A successive independent samples design draws multiple random samples from a population over time. [3] This design can study changes within a population, but not changes within individuals, because the same individuals are not surveyed more than once. Therefore, such studies cannot necessarily identify the causes of change over time. For successive independent samples designs to be effective, the samples must be drawn from the same population and must be equally representative of it. If the samples are not comparable, changes between samples may reflect differences in sample composition rather than real change over time. In addition, the questions must be asked in the same way so that responses can be compared directly; a wave-to-wave comparison is sketched below.
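
For example, whether the distribution of responses has shifted between two waves can be checked with a chi-square test of the wave-by-response counts. The sketch below uses invented counts and assumes SciPy is available.

```python
from scipy.stats import chi2_contingency

# Invented counts from two independent samples drawn in different years.
# Rows are waves; columns are "agree" / "disagree".
table = [[420, 580],   # wave 1: 42% agree
         [470, 530]]   # wave 2: 47% agree

chi2, p, dof, expected = chi2_contingency(table)
# A small p-value suggests the population proportion changed between waves,
# but says nothing about why individuals changed, since different people
# were interviewed each time.
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")
```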

Longitudinal studies

Longitudinal studies take measurements of the same random sample at multiple points in time. [3] Unlike a successive independent samples design, this design measures the differences in individual participants' responses over time. This means that a researcher can potentially assess the reasons for response changes by assessing the differences in respondents' experiences. Longitudinal studies are the easiest way to assess the effect of a naturally occurring event, such as divorce, that cannot be tested experimentally.

However, longitudinal studies are both expensive and difficult to do. It is harder to find a sample that will commit to a months- or years-long study than to a 15-minute interview, and participants frequently leave the study before the final assessment. In addition, such studies sometimes require data collection to be confidential or anonymous, which creates additional difficulty in linking participants' responses over time. One potential solution is the use of a self-generated identification code (SGIC). [5] These codes are usually created from elements like 'month of birth' and 'first letter of the mother's middle name.' Some recent anonymous SGIC approaches have attempted to minimize the use of personalized data even further, instead using questions like 'name of your first pet.' [6] [7] Depending on the approach used, the ability to match some portion of the sample can be lost.
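
A minimal sketch of how an SGIC might be constructed; the specific elements combined vary by study, and the fields used here are purely illustrative.

```python
# Hypothetical SGIC built from stable, non-identifying answers.
def make_sgic(birth_month: int, mother_middle_initial: str, first_pet: str) -> str:
    """Concatenate the answers into a matching code (e.g., '04LRE')."""
    return f"{birth_month:02d}{mother_middle_initial.upper()}{first_pet[:2].upper()}"

# The same respondent should produce the same code at every wave,
# allowing waves to be linked without storing names or other identifiers.
assert make_sgic(4, "l", "Rex") == make_sgic(4, "L", "rex")  # both "04LRE"
```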

In addition, the overall attrition of participants is not random, so samples can become less representative with successive assessments. To account for this, a researcher can compare the respondents who left the survey to those who did not, to see if they are statistically different populations. Respondents may also try to remain self-consistent, repeating earlier answers even when their circumstances or opinions have changed.
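
One simple version of that comparison tests whether completers and dropouts differ on a baseline measure; the sketch below runs an independent-samples t-test on invented data using SciPy.

```python
from scipy.stats import ttest_ind

# Invented baseline ages for respondents who stayed versus those who left.
completers_age = [34, 45, 52, 29, 61, 48, 39, 55]
dropouts_age = [22, 27, 31, 24, 35, 28, 26, 30]

t, p = ttest_ind(completers_age, dropouts_age)
# A small p-value indicates dropouts differ systematically from completers,
# a warning that later waves may no longer represent the original sample.
print(f"t = {t:.2f}, p = {p:.4f}")
```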

Questionnaires

Image: A basic questionnaire in the Thai language

Questionnaires are the most commonly used tool in survey research. However, the results of a particular survey are worthless if the questionnaire is written inadequately. [3] Questionnaires should produce valid and reliable measures of demographic variables and valid and reliable measures of the individual differences that self-report scales assess. [3]

Questionnaires as tools

One category of variables that is often measured in survey research is demographic variables, which are used to describe the characteristics of the people surveyed in the sample. [3] Demographic variables include such measures as ethnicity, socioeconomic status, race, and age. [3] Surveys often assess the preferences and attitudes of individuals, and many employ self-report scales to measure people's opinions and judgements about different items presented on a scale. [3] Self-report scales are also used to examine the differences among people on scale items. [3] These self-report scales, which are usually presented in questionnaire form, are among the most used instruments in psychology, so it is important that the measures be constructed carefully while also being reliable and valid. [3]

Reliability and validity of self-report measures

Reliable measures of self-report are defined by their consistency. [3] Thus, a reliable self-report measure produces consistent results every time it is administered. [3] A test's reliability can be measured in a few ways. [3] First, one can calculate test-retest reliability, which entails administering the same questionnaire to a large sample at two different times. [3] For the questionnaire to be considered reliable, people in the sample do not have to score identically on each test; rather, their position in the score distribution should be similar for both the test and the retest. [3] Self-report measures will generally be more reliable when they have many items measuring a construct. [3] Furthermore, measurements will be more reliable when the factor being measured has greater variability among the individuals in the sample being tested. [3] Finally, there will be greater reliability when instructions for the completion of the questionnaire are clear and when there are limited distractions in the testing environment. [3] By contrast, a questionnaire is valid if what it measures is what it was originally intended to measure. [3] Construct validity of a measure is the degree to which it measures the theoretical construct that it was originally supposed to measure. [3]
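
In its simplest form, test-retest reliability is the correlation between respondents' scores on the two administrations; a sketch with invented scores:

```python
import numpy as np

# Invented scores for eight respondents at time 1 and time 2.
test = np.array([12, 18, 25, 31, 22, 15, 28, 20])
retest = np.array([14, 17, 27, 30, 21, 16, 26, 22])

r = np.corrcoef(test, retest)[0, 1]
# Respondents need not score identically; a high correlation means their
# positions in the score distribution are similar on both occasions.
print(f"test-retest reliability r = {r:.2f}")
```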

Composing a questionnaire

Six steps can be employed to construct a questionnaire that will produce reliable and valid results. [3] First, one must decide what kind of information should be collected. [3] Second, one must decide how to conduct the questionnaire. [3] Third, one must construct a first draft of the questionnaire. [3] Fourth, the questionnaire should be revised. [3] Fifth, the questionnaire should be pretested. [3] Finally, the questionnaire should be edited and the procedures for its use should be specified. [3]

Guidelines for the effective wording of questions

The way that a question is phrased can have a large impact on how a research participant will answer it. [3] Thus, survey researchers must be conscious of their wording when writing survey questions. [3] It is important for researchers to keep in mind that different individuals, cultures, and subcultures can interpret certain words and phrases differently from one another. [3] There are two different types of questions that survey researchers use when writing a questionnaire: free-response questions and closed questions. [3] Free-response questions are open-ended, whereas closed questions are usually multiple-choice. [3] Free-response questions are beneficial because they allow responders greater flexibility, but they are also very difficult to record and score, requiring extensive coding. [3] By contrast, closed questions can be scored and coded more easily, but they diminish the expressivity and spontaneity of the responder. [3] In general, the vocabulary of the questions should be very simple and direct, and most questions should be fewer than twenty words. [3] Each question should be edited for "readability" and should avoid leading or loaded questions. [3] Finally, if multiple items are used to measure one construct, some of the items should be worded in the opposite direction (reverse-keyed) to evade response bias, as in the sketch below. [3]
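
A common recoding rule for reverse-keyed items maps a response x on a scale running from min to max onto min + max - x before items are summed; a minimal sketch, assuming a 1-to-5 Likert scale and invented item names:

```python
# Assumed 1-to-5 Likert scale; items ending in "_reversed" are reverse-keyed.
SCALE_MIN, SCALE_MAX = 1, 5

def reverse_score(raw: int) -> int:
    """Map 1->5, 2->4, 3->3, 4->2, 5->1."""
    return SCALE_MIN + SCALE_MAX - raw

responses = {"item1": 4, "item2_reversed": 2, "item3": 5}
total = sum(reverse_score(v) if k.endswith("_reversed") else v
            for k, v in responses.items())
print(total)  # 4 + (1 + 5 - 2) + 5 = 13
```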

A respondent's answer to an open-ended question can be coded into a response scale afterwards, [4] or analysed using more qualitative methods.

Order of questions

Survey researchers should carefully construct the order of questions in a questionnaire. [3] For questionnaires that are self-administered, the most interesting questions should be at the beginning of the questionnaire to catch the respondent's attention, while demographic questions should be near the end. [3] By contrast, if a survey is being administered over the telephone or in person, demographic questions should be administered at the beginning of the interview to boost the respondent's confidence. [3] Question order is also worth attention because it may cause a survey response effect, in which one question affects how people respond to subsequent questions as a result of priming.

Translating a questionnaire

Translation is crucial to collecting comparable survey data. Questionnaires are translated from a source language into one or more target languages, such as translating from English into Spanish and German. A team approach is recommended in the translation process to include translators, subject-matter experts and persons helpful to the process. [8] [9]

Survey translation best practice includes parallel translation, team discussions, and pretesting with real respondents; [10] [11] it is not a mechanical word-replacement process. The TRAPD model (Translation, Review, Adjudication, Pretest, and Documentation), originally developed for the European Social Survey, is now "widely used in the global survey research community, although not always labeled as such or implemented in its complete form". [12] [13] [8] For example, sociolinguistics provides a theoretical framework for questionnaire translation that complements TRAPD. This approach holds that for a questionnaire translation to achieve a communicative effect equivalent to that of the source language, the translation must be linguistically appropriate while incorporating the social practices and cultural norms of the target language. [14]

Nonresponse reduction

The following ways have been recommended for reducing nonresponse [15] in telephone and face-to-face surveys: [16]

  1. sending an advance letter that informs sampled respondents about the upcoming survey,
  2. thoroughly training interviewers,
  3. keeping the introduction short and addressing respondents' likely concerns, for example by making clear that the interviewer is not selling anything, [17] and
  4. using a respondent-friendly questionnaire whose questions are clear, inoffensive, and easy to answer.

Brevity is also often cited as increasing response rates. A 1996 literature review found mixed evidence to support this claim for both written and verbal surveys, concluding that other factors may often be more important. [18] A 2010 study of 100,000 online surveys found that response rates dropped by about 3% at 10 questions and about 6% at 20 questions, with the drop-off slowing for longer surveys (for example, only a 10% reduction at 40 questions). [19] Other studies showed that the quality of responses degraded toward the end of long surveys. [20]

Some researchers have also discussed the recipient's role or profession as a potential factor affecting how nonresponse is managed. For example, faxes are not commonly used to distribute surveys, but in a recent study were sometimes preferred by pharmacists, since they frequently receive faxed prescriptions at work but may not always have access to a generally-addressed piece of mail. [21]

Interviewer effects

Survey methodologists have devoted much effort to determining the extent to which interviewee responses are affected by physical characteristics of the interviewer. The main interviewer traits demonstrated to influence survey responses are race, [22] gender, [23] and relative body weight (BMI). [24] These interviewer effects are particularly pronounced when questions are related to the interviewer trait. Hence, the race of the interviewer has been shown to affect responses to measures regarding racial attitudes, [25] interviewer sex to affect responses to questions involving gender issues, [26] and interviewer BMI to affect answers to eating- and dieting-related questions. [27] While interviewer effects have been investigated mainly for face-to-face surveys, they have also been shown to exist for modes with no visual contact, such as telephone surveys, and in video-enhanced web surveys. The explanation typically given for interviewer effects is social desirability bias: survey participants may attempt to project a positive self-image in an effort to conform to the norms they attribute to the interviewer asking the questions. Interviewer effects are one example of survey response effects.

The role of big data

Since 2018, survey methodologists have begun to examine how big data can complement survey methodology to allow researchers and practitioners to improve the production and quality of survey statistics. Big data has a low cost per data point, applies analysis techniques via machine learning and data mining, and includes diverse and new data sources, e.g., registers, social media, apps, and other forms of digital data. To date, there have been three Big Data Meets Survey Science (BigSurv) conferences (2018, 2020, and 2023), with a fourth forthcoming in 2025, [28] a special issue of the Social Science Computer Review, [29] a special issue of the Journal of the Royal Statistical Society, [30] a special issue of EPJ Data Science, [31] and a book, Big Data Meets Survey Science, [32] edited by Craig A. Hill and five other Fellows of the American Statistical Association.


References

  1. Groves, Robert M.; Fowler, Floyd J.; Couper, Mick P.; Lepkowski, James M.; Singer, Eleanor; Tourangeau, Roger (2009). "An introduction to survey methodology". Survey Methodology. Wiley Series in Survey Methodology. Vol. 561 (2nd ed.). Hoboken, New Jersey: John Wiley & Sons. p. 3. ISBN 9780470465462. Retrieved August 27, 2020. "[...] survey methodology is the study of survey methods. It is the study of sources of error in surveys and how to make the numbers produced by the surveys as accurate as possible."
  2. Groves, R.M.; Fowler, F.J.; Couper, M.P.; Lepkowski, J.M.; Singer, E.; Tourangeau, R. (2009). Survey Methodology. New Jersey: John Wiley & Sons. ISBN 978-1-118-21134-2.
  3. Shaughnessy, J.; Zechmeister, E.; Zechmeister, J. (2011). Research Methods in Psychology (9th ed.). New York, NY: McGraw Hill. pp. 161–175. ISBN 9780078035180.
  4. Mellenbergh, G.J. (2008). "Chapter 9: Surveys". In H.J. Adèr & G.J. Mellenbergh (Eds.) (with contributions by D.J. Hand), Advising on Research Methods: A Consultant's Companion (pp. 183–209). Huizen, The Netherlands: Johannes van Kessel Publishing.
  5. Audette, Lillian M.; Hammond, Marie S.; Rochester, Natalie K. (February 2020). "Methodological Issues With Coding Participants in Anonymous Psychological Longitudinal Studies". Educational and Psychological Measurement. 80 (1): 163–185. doi:10.1177/0013164419843576. ISSN 0013-1644. PMC 6943988. PMID 31933497.
  6. Agley, Jon; Tidd, David; Jun, Mikyoung; Eldridge, Lori; Xiao, Yunyu; Sussman, Steve; Jayawardene, Wasantha; Agley, Daniel; Gassman, Ruth; Dickinson, Stephanie L. (February 2021). "Developing and Validating a Novel Anonymous Method for Matching Longitudinal School-Based Data". Educational and Psychological Measurement. 81 (1): 90–109. doi:10.1177/0013164420938457. ISSN 0013-1644. PMC 7797962. PMID 33456063.
  7. Calatrava, Maria; de Irala, Jokin; Osorio, Alfonso; Benítez, Edgar; Lopez-del Burgo, Cristina (2021). "Matched and Fully Private? A New Self-Generated Identification Code for School-Based Cohort Studies to Increase Perceived Anonymity". Educational and Psychological Measurement. 82 (3): 465–481. doi:10.1177/00131644211035436. ISSN 0013-1644. PMC 9014735. PMID 35444340. S2CID 238718313.
  8. Harkness, Janet (2003). Cross-Cultural Survey Methods. Wiley. ISBN 0-471-38526-3.
  9. Sha, Mandy; Immerwahr, Stephen (2018). "Survey Translation: Why and How Should Researchers and Managers be Engaged?". Survey Practice. 11 (2): 1–10. doi:10.29115/SP-2018-0016.
  10. "Special issue on questionnaire translation". World Association for Public Opinion Research. Retrieved October 2, 2023.
  11. Behr, Dorothee; Sha, Mandy (2018). "Translation of questionnaires in cross-national and cross-cultural research". Translation & Interpreting. 10 (2): 1–4.
  12. "Quality in Comparative Surveys" (PDF). Task Force Report, American Association for Public Opinion Research (AAPOR). Retrieved October 2, 2023.
  13. "Quality in Comparative Surveys". Task Force Report, World Association for Public Opinion Research (WAPOR).
  14. Pan, Yuling; Sha, Mandy (2019). The Sociolinguistics of Survey Translation. Routledge/Taylor & Francis. ISBN 978-1138550865.
  15. Lynn, P. (2008). "The problem of non-response". Chapter 3 (pp. 35–55) in International Handbook of Survey Methodology (eds. Edith de Leeuw, Joop Hox & Don A. Dillman). Erlbaum. ISBN 0-8058-5753-2.
  16. Dillman, D.A. (1978). Mail and Telephone Surveys: The Total Design Method. Wiley. ISBN 0-471-21555-4.
  17. De Leeuw, E.D. (2001). "I am not selling anything: Experiments in telephone introductions". Kwantitatieve Methoden, 22, 41–48.
  18. Bogen, Karen (1996). "The effect of questionnaire length on response rates – a review of the literature" (PDF). Proceedings of the Section on Survey Research Methods. American Statistical Association: 1020–1025. Archived from the original (PDF) on April 2, 2013. Retrieved March 19, 2013.
  19. Chudoba, Brent (2010). "Does adding one more question impact survey completion rate?". SurveyMonkey. Retrieved November 8, 2017.
  20. "Respondent engagement and survey length: the long and the short of it". Research Live. April 7, 2010. Retrieved October 3, 2013.
  21. Agley, Jon; Meyerson, Beth; Eldridge, Lori; Smith, Carriann; Arora, Prachi; Richardson, Chanel; Miller, Tara (February 2019). "Just the fax, please: Updating electronic/hybrid methods for surveying pharmacists". Research in Social and Administrative Pharmacy. 15 (2): 226–227. doi:10.1016/j.sapharm.2018.10.028. PMID 30416040. S2CID 53281364.
  22. Hill, M.E. (2002). "Race of the interviewer and perception of skin color: Evidence from the multi-city study of urban inequality". American Sociological Review. 67 (1): 99–108. doi:10.2307/3088935. JSTOR 3088935.
  23. Flores-Macias, F.; Lawson, C. (2008). "Effects of interviewer gender on survey responses: Findings from a household survey in Mexico" (PDF). International Journal of Public Opinion Research. 20 (1): 100–110. doi:10.1093/ijpor/edn007. S2CID 33820854. Archived from the original (PDF) on March 7, 2019.
  24. Eisinga, R.; Te Grotenhuis, M.; Larsen, J.K.; Pelzer, B.; Van Strien, T. (2011). "BMI of interviewer effects". International Journal of Public Opinion Research. 23 (4): 530–543. doi:10.1093/ijpor/edr026.
  25. Anderson, B.A.; Silver, B.D.; Abramson, P.R. (1988). "The effects of the race of the interviewer on race-related attitudes of black respondents in SRC/CPS national election studies". Public Opinion Quarterly. 52 (3): 1–28. doi:10.1086/269108.
  26. Kane, E.W.; MacAulay, L.J. (1993). "Interviewer gender and gender attitudes". Public Opinion Quarterly. 57 (1): 1–28. doi:10.1086/269352.
  27. Eisinga, R.; Te Grotenhuis, M.; Larsen, J.K.; Pelzer, B. (2011). "Interviewer BMI effects on under- and over-reporting of restrained eating: Evidence from a national Dutch face-to-face survey and a postal follow-up". International Journal of Public Health. 57 (3): 643–647. doi:10.1007/s00038-011-0323-z. PMC 3359459. PMID 22116390.
  28. "BigSurv". www.bigsurv.org. Retrieved October 21, 2023.
  29. Eck, Adam; Cazar, Ana Lucía Córdova; Callegaro, Mario; Biemer, Paul (August 2021). "Big Data Meets Survey Science". Social Science Computer Review. 39 (4): 484–488. doi:10.1177/0894439319883393. ISSN 0894-4393.
  30. "Special issue: Big data meets survey science". Journal of the Royal Statistical Society Series A: Statistics in Society. 185 (Supplement 2). December 2022.
  31. "Integrating Survey and Non-survey Data to Measure Behavior and Public Opinion". EPJ Data Science.
  32. Hill, Craig A.; Biemer, Paul P.; Buskirk, Trent D.; Japec, Lilli; Kirchner, Antje; Kolenikov, Stas; Lyberg, Lars, eds. (2021). Big Data Meets Survey Science: A Collection of Innovative Methods. Hoboken, NJ: Wiley. ISBN 978-1-118-97632-6.
