Questionnaire construction

Questionnaire construction refers to the design of a questionnaire to gather statistically useful information about a given topic. When properly constructed and responsibly administered, questionnaires can provide valuable data about any given subject.

Questionnaires

Questionnaires are frequently used in quantitative marketing research and social research. They are a valuable method of collecting a wide range of information from a large number of individuals, often referred to as respondents.

What is often referred to as "adequate questionnaire construction" is critical to the success of a survey. Inappropriate questions, incorrect ordering of questions, incorrect scaling, or a bad questionnaire format can make the survey results valueless, as they may not accurately reflect the views and opinions of the participants.

Different methods can be useful for checking a questionnaire and making sure it is accurately capturing the intended information. Initial advice may include:

Empirical tests also provide insight into the quality of the questionnaire. This can be done by:

Test items

In the realm of psychological testing and questionnaires, an individual task or question is referred to as a test item, or simply an item. [6] [7] These items serve as fundamental components within questionnaires and psychological tests, often tied to a specific latent psychological construct (see operationalization). Each item produces a value, typically a raw score, which can be aggregated across all items to generate a composite score for the measured trait.
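The aggregation of raw item scores into a composite can be sketched as follows. This is a hypothetical illustration using a simple unweighted sum, not a prescribed scoring method:

```python
# Sketch: aggregating raw item scores into a composite score for one
# measured trait. Assumes all items point in the same direction.
def composite_score(item_scores):
    """Unweighted sum of raw scores across all items."""
    return sum(item_scores)

# One respondent's answers to five items on a 1-5 scale (illustrative).
responses = [4, 5, 3, 4, 5]
print(composite_score(responses))  # 21
```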

Test items generally encompass three primary components:

  1. Item stem: This represents the question, statement, or task presented.
  2. Answer format: The manner in which the respondent provides an answer, including options for multiple-choice questions.
  3. Evaluation criteria: The criteria used to assess and score the response.

The degree of standardization varies, ranging from strictly prescribed questions with predetermined answers to open-ended questions with subjective evaluation criteria.

Responses to test items serve as indicators in the realm of social sciences.

Types of questions

Questions, or items, may be:

Multi-item scales

Labelled example of a multi-item psychometric scale as used in questionnaires

Within social science research and practice, questionnaires are most frequently used to collect quantitative data using multi-item scales with the following characteristics: [8]

Pretesting

Pretesting is the testing and evaluation of whether a questionnaire causes problems that could affect data quality and data collection, for either interviewers or survey respondents.

Pretesting methods can be quantitative or qualitative, and can be conducted in a laboratory setting or in the field. [9] [10] [11]

A multiple-method approach helps to triangulate results. For example, cognitive interviews, usability testing, behavior coding, and/or vignettes can be combined for pretesting. [15] [19] [11]

Questionnaire construction issues

Before constructing a questionnaire survey, it is advisable to consider how the results of the research will be used. If the results won't influence the decision-making process, budgets won't allow implementing the findings, or the cost of research outweighs its usefulness, then there is little purpose in conducting the research.

The research objective(s) and frame-of-reference should be defined beforehand, including the questionnaire's context of time, budget, manpower, intrusion and privacy. The types of questions (e.g.: closed, multiple-choice, open) should fit the data analysis techniques available and the goals of the survey.

The manner (random or not) and location (sampling frame) for selecting respondents will determine whether the findings will be representative of the larger population.

The level of measurement – known as the scale, index, or typology – will determine what can be concluded from the data. A yes/no question will only reveal how many of the sample group answered yes or no, lacking the resolution to determine an average response. The nature of the expected responses should be defined and retained for interpretation.
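The point about resolution can be illustrated with a minimal sketch using hypothetical data: a yes/no item supports only a count or proportion, while a rating item also supports an average.

```python
# Hypothetical responses at two levels of measurement.
yes_no = ["yes", "no", "yes", "yes"]   # nominal: only counts/proportions
ratings = [4, 2, 5, 3]                 # 1-5 rating: an average is meaningful

proportion_yes = yes_no.count("yes") / len(yes_no)
mean_rating = sum(ratings) / len(ratings)

print(proportion_yes)  # 0.75
print(mean_rating)     # 3.5
```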

A common method is to "research backwards" in building a questionnaire by first determining the information sought (i.e., Brand A is more/less preferred by x% of the sample vs. Brand B, and y% vs. Brand C), then being certain to ask all the needed questions to obtain the metrics for the report. Unneeded questions should be avoided, as they are an expense to the researcher and an unwelcome imposition on the respondents. All questions should contribute to the objective(s) of the research.

Topics should fit the respondents' frame of reference, as their background may affect their interpretation of the questions. Respondents should have enough information or expertise to answer the questions truthfully. Writing style should be conversational, yet concise and accurate and appropriate to the target audience and subject matter. The wording should be kept simple, without technical or specialized vocabulary. Ambiguous words, equivocal sentence structures and negatives may cause misunderstanding, possibly invalidating questionnaire results. Double negatives should be reworded as positives.

If a survey question actually contains more than one issue, the researcher will not know which one the respondent is answering. Care should be taken to ask one question at a time.

Questions and prepared responses (for multiple-choice) should be neutral as to intended outcome. A biased question or questionnaire encourages respondents to answer one way rather than another. [20] Even questions without bias may leave respondents with expectations. The order or grouping of questions is also relevant; early questions may bias later questions. Loaded questions evoke emotional responses and may skew results.

The list of prepared responses should be collectively exhaustive; one solution is to use a final write-in category for "other ________". The possible responses should also be mutually exclusive, without overlap. Respondents should not find themselves in more than one category, for example in both the "married" category and the "single" category (in such a case there may be need for separate questions on marital status and living situation).
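A quick way to see the mutually-exclusive and collectively-exhaustive requirement is to check that every respondent maps to exactly one prepared response. The age brackets below are purely illustrative:

```python
# Hypothetical age brackets for a closed question. They must not
# overlap (mutually exclusive) and must cover the expected range
# (collectively exhaustive).
brackets = [(18, 29), (30, 44), (45, 64), (65, 120)]

def bracket_for(age):
    matches = [b for b in brackets if b[0] <= age <= b[1]]
    # Exactly one match means the options are well formed for this age.
    assert len(matches) == 1, "options overlap or leave a gap"
    return matches[0]

print(bracket_for(30))  # (30, 44)
```

If the brackets were instead written as (18, 30) and (30, 44), a 30-year-old respondent would match twice and the check would fail.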

Many people will not answer personal or intimate questions. For this reason, questions about age, income, marital status, etc. are generally placed at the end of the survey. This way, even if respondents refuse to answer these questions, they will have already answered the research questions.

Visual presentation of the questions on the page (or computer screen) and use of white space, colors, pictures, charts, or other graphics may affect respondent's interest – or distract from the questions. There are four primary design elements: words (meaning), numbers (sequencing), symbols (e.g. arrow), and graphics (e.g. text boxes). [1] In translated questionnaires, the design elements also take into account the writing practice (e.g. Spanish words are lengthier and require more space on the page or on the computer screen) and text orientation (e.g. Arabic is read from right to left) to prevent data missingness. [21] [22]

Questionnaires can be administered by research staff, by volunteers, or self-administered by the respondents. Clear, detailed instructions are needed in any case, matching the needs of each audience.

Methods of collection

There are a number of channels, or modes, that can be used to administer a questionnaire. Each has strengths and weaknesses, and therefore a researcher will generally need to tailor their questionnaire to the modes they will be using. For example, a questionnaire designed to be filled out on paper may not operate in the same way when administered by telephone. These mode effects may be substantial enough that they threaten the validity of the research.

Using multiple modes can improve access to the population of interest when some members have different access, or have particular preferences.

Method: benefits and cautions
Postal
  • Usually a simple questionnaire, printed on paper to be filled-out with a pen or pencil.
  • Low cost-per-response for small samples. Large samples can often be administered more efficiently by using optical character recognition.
  • Mail is subject to postal delays and errors, which can be substantial when posting to remote areas, or given unpredictable events such as natural disasters.
  • Surveys are limited to populations that are contactable by a mail service.
  • Reliant on high levels of literacy.
  • Allows survey participants to remain anonymous (e.g. using identical paper forms).
  • Limited ability to build rapport with the respondent, or to answer questions about the purpose of the research.
Telephone
  • Questionnaires can be conducted swiftly, particularly if computer-assisted.
  • Opportunity to build rapport with respondents may improve response rates.
  • Researchers may be mistaken for being telemarketers.
  • Surveys are limited to populations with a telephone.
  • Are more prone to social desirability biases than other modes, so telephone interviews are generally not suitable for sensitive topics. [23] [24]
Electronic
  • Usually administered via an HTML-based webpage, or other electronic channel such as a smartphone app.
  • This method has a low ongoing-cost, and most surveys cost little for the participants and surveyors. However, initial set-up costs can be high for a customised design due to the effort required in developing the back-end system or programming the questionnaire itself.
  • Questionnaires can be conducted swiftly, without postal delays.
  • Survey participants can choose to remain anonymous, though risk being tracked through cookies, unique links and other technology.
  • It is not labour-intensive.
  • Questions can be more detailed, as opposed to the limits of paper or telephones. [25]
  • This method works well if the survey contains several branching questions. Help or instructions can be dynamically displayed with the question as needed, and automatic sequencing means the computer can determine the next question, rather than relying on respondents to correctly follow skip instructions.
  • Not all of the sample may be able to use the electronic form due to accessibility issues, software compatibility, bandwidth requirements, server load, or internet access, and therefore results may not be representative of the target population.
Personally administered
  • Questions can be more detailed and obtain more comprehensive information. However, respondents are often limited by their working memory: specially designed visual cues (such as prompt cards) may help in some cases.
  • Interviewers sometimes rephrase questions during the interview, reducing the level of standardisation. Computer-assisted personal interviewing may assist with this.
  • Rapport with respondents is generally higher than other modes.
  • Typically higher response-rate than other modes.
  • Can be extremely expensive and time-consuming to train and maintain an interviewer panel. Each interview also has a cost associated with collecting the data.
  • Relatively few limits to the population, so long as an interviewer is granted access.
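The automatic sequencing described above for electronic questionnaires can be sketched as a routing table in which the program, not the respondent, determines the next question. The question ids and routing rules here are hypothetical:

```python
# Sketch of branching ("skip") logic: each question names a rule that
# maps the respondent's answer to the next question id (None = done).
questions = {
    "q1": {"text": "Do you own a car?",       "next": lambda a: "q2" if a == "yes" else "q3"},
    "q2": {"text": "How often do you drive?", "next": lambda a: "q3"},
    "q3": {"text": "How old are you?",        "next": lambda a: None},
}

def route(answers, start="q1"):
    """Return the sequence of question ids a respondent would see."""
    seen, qid = [], start
    while qid is not None:
        seen.append(qid)
        qid = questions[qid]["next"](answers.get(qid, ""))
    return seen

print(route({"q1": "no"}))  # ['q1', 'q3'] -- q2 is skipped automatically
```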

Question wording

The way that a question is phrased can have a large impact on how a research participant will answer the question. [26] Thus, survey researchers must be conscious of their wording when writing survey questions. [26] It is important for researchers to keep in mind that different individuals, cultures, and subcultures can interpret certain words and phrases differently from one another. [26]

There are two different types of questions that survey researchers use when writing a questionnaire: free-response questions and closed questions. [26] Free-response questions are open-ended, whereas closed questions are usually multiple-choice. [26] Free-response questions are beneficial because they allow the responder greater flexibility, but they are also very difficult to record and score, requiring extensive coding. [26] Contrastingly, closed questions can be scored and coded more easily, but they diminish expressivity and spontaneity of the responder. [26]

In general, the vocabulary of a question should be very simple and direct, and preferably under twenty words. [26] Each question should be edited for readability and should avoid leading or loaded questions. [26] If multiple questions are being used to measure one construct, some of the questions should be worded in the opposite direction to evade response bias. [26]
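Reverse-worded items must be recoded before aggregation so that all items point in the same direction. A minimal sketch on an assumed 1-5 scale, with hypothetical item names:

```python
SCALE_MAX = 5  # assumes a 1-5 response scale

def reverse_score(x):
    # Maps 1<->5 and 2<->4; the midpoint 3 is unchanged.
    return SCALE_MAX + 1 - x

responses = {"q1": 4, "q2_reversed": 2, "q3": 5}
adjusted = {q: reverse_score(v) if q.endswith("_reversed") else v
            for q, v in responses.items()}
print(adjusted)  # {'q1': 4, 'q2_reversed': 4, 'q3': 5}
```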

A respondent's answer to an open-ended question can be coded into a response scale afterwards, [27] or analysed using more qualitative methods.

Question sequence

Questions should flow logically, from the general to the specific, from least to most sensitive, from factual and behavioral matters to attitudes and opinions. When semi-automated, they should flow from unaided to aided questions. The researcher should ensure that the answer to a question is not influenced by previous questions.

According to the three-stage theory (also called the sandwich theory), questions should be asked in three stages:

  1. screening and rapport questions
  2. product-specific questions
  3. demographic types of questions

References

  1. Dillman, Don A., Smyth, Jolene D., & Christian, Leah Melani (2014). Internet, Phone, Mail and Mixed-Mode Surveys: The Tailored Design Method (4th ed.). Hoboken, NJ: John Wiley.
  2. Lord, F. and Novick, M. R. (1968). Statistical theories of mental test scores. Addison-Wesley.
  3. Heise, D. R. (1969). Separating reliability and stability in test-retest correlation. American Sociological Review, 34, 93-101. https://dx.doi.org/10.2307/2092790
  4. Andrews, F. M. (1984). Construct validity and error components of survey measures: a structural modelling approach. Public Opinion Quarterly, 48, 409-442. https://dx.doi.org/10.1086/268840
  5. Saris, W. E. and Gallhofer, I. N. (2014). Design, evaluation and analysis of questionnaires for survey research. Second Edition. Hoboken, Wiley.
  6. Osterlind, S. J. (2005). Constructing Test Items: Multiple-Choice, Constructed-Response, Performance and Other Formats. Deutschland: Kluwer Academic Publishers. https://books.google.de/books?id=IpMRBwAAQBAJ&pg=PA19
  7. Haladyna, T. M., Rodriguez, M. C. (2013). Developing and Validating Test Items. USA: Taylor & Francis.
  8. Robinson, M. A. (2018). Using multi-item psychometric scales for research and practice in human resource management. Human Resource Management, 57(3), 739–750. https://dx.doi.org/10.1002/hrm.21852 (open-access)
  9. Presser, Stanley (March 2004). "Methods for Testing and Evaluating Survey Questions". Public Opinion Quarterly. 68 (1): 109–130. doi:10.1093/poq/nfh008.
  10. Rothgeb, Jennifer (2008). "Pilot Test". In Lavrakas, Paul (ed.). Encyclopedia of Survey Research Methods. Sage Publishing. doi:10.4135/9781412963947. ISBN   9781412918084.
  11. Tourangeau, Roger (2019). "A Framework for Making Decisions About Question Evaluation Methods". Advances in Questionnaire Design, Development, Evaluation and Testing. Wiley Publishing. pp. 47–69. doi:10.1002/9781119263685.ch3.
  12. Willis, Gordon (2005). Cognitive interviewing: A tool for improving questionnaire design. Sage Publishing. ISBN   9780761928041.
  13. "Web Probing". GESIS - Leibniz Institute for the Social Sciences. Retrieved 2023-10-24.
  14. Martin, Elizabeth (2004-06-25). "Vignettes and Respondent Debriefing for Questionnaire Design and Evaluation". In Presser, Stanley; Rothgeb, Jennifer M.; Couper, Mick P.; Lessler, Judith T.; Martin, Elizabeth; Martin, Jean; Singer, Eleanor (eds.). Methods for Testing and Evaluating Survey Questionnaires (1 ed.). Wiley. doi:10.1002/0471654728. ISBN   978-0-471-45841-8.
  15. Sha, Mandy (2016-08-01). "The Use of Vignettes in Evaluating Asian Language Questionnaire Items". Survey Practice. 9 (3): 1–8. doi:10.29115/SP-2016-0013.
  16. Ongena, Yfke; Dijkstra, Wil (2006). "Methods of Behavior Coding of Survey Interviews" (PDF). Journal of Official Statistics . 22 (3): 419–451.
  17. Kapousouz, Evgenia; Johnson, Timothy; Holbrook, Allyson (2020). "Seeking Clarifications for Problematic Questions: Effects of Interview Language and Respondent Acculturation (Chapter 2)". In Sha, Mandy; Gabel, Tim (eds.). The essential role of language in survey research. RTI Press. pp. 23–46. doi: 10.3768/rtipress.bk.0023.2004 . ISBN   978-1-934831-23-6.
  18. Yan, T.; Kreuter, F.; Tourangeau, R (December 2012). "Evaluating Survey Questions: A Comparison of Methods". Journal of Official Statistics . 28 (4): 503–529.
  19. Aizpurua, Eva (2020). "Pretesting methods in cross-cultural research (Chapter 7)". In Sha, Mandy; Gabel, Tim (eds.). The essential role of language in survey research. RTI Press. pp. 129–150. doi: 10.3768/rtipress.bk.0023.2004 . ISBN   978-1-934831-23-6.
  20. Timothy R. Graeff, 2005. "Response Bias", Encyclopedia of Social Measurement, pp. 411-418. ScienceDirect.
  21. Pan, Yuling; Sha, Mandy (2019-07-09). The Sociolinguistics of Survey Translation. London: Routledge. doi:10.4324/9780429294914/sociolinguistics-survey-translation-yuling-pan-mandy-sha-hyunjoo-park. ISBN   978-0-429-29491-4.
  22. Wang, Kevin; Sha, Mandy (2013-03-01). "A Comparison of Results from a Spanish and English Mail Survey: Effects of Instruction Placement on Item Missingness". Survey Methods: Insights from the Field (SMIF). doi: 10.13094/SMIF-2013-00006 . ISSN   2296-4754.
  23. Frauke Kreuter, Stanley Presser, and Roger Tourangeau, 2008. "Social Desirability Bias in CATI, IVR, and Web Surveys: The Effects of Mode and Question Sensitivity", Public Opinion Quarterly, 72(5): 847-865 first published online January 26, 2009 doi : 10.1093/poq/nfn063
  24. Allyson L. Holbrook, Melanie C. Green And Jon A. Krosnick, 2003. "Telephone versus Face-to-Face Interviewing of National Probability Samples with Long Questionnaires: Comparisons of Respondent Satisficing and Social Desirability Response Bias". Public Opinion Quarterly,67(1): 79-125. doi : 10.1086/346010.
  25. Respicius, Rwehumbiza (2010)
  26. Shaughnessy, J.; Zechmeister, E.; Jeanne, Z. (2011). Research methods in psychology (9th ed.). New York, NY: McGraw Hill. pp. 161–175. ISBN 9780078035180.
  27. Mellenbergh, G.J. (2008). Chapter 9: Surveys. In H.J. Adèr & G.J. Mellenbergh (Eds.) (with contributions by D.J. Hand), Advising on Research Methods: A consultant's companion (pp. 183–209). Huizen, The Netherlands: Johannes van Kessel Publishing.
