Cognitive pretesting

Cognitive pretesting, or cognitive interviewing, is a field research method in which data are collected on how respondents answer interview questions. It is the evaluation of a test or questionnaire before it is administered. [1] It allows survey researchers to collect feedback on survey responses and is used to evaluate whether a question measures the construct the researcher intends. The data collected are then used to revise problematic questions in the questionnaire before the survey is fielded to the full sample. [2] [3] [4] [5] [6]

Cognitive interviewing generally collects the following information from participants: evaluations of how the subject constructed their answers; explanations of what the subject interprets the questions to mean; reports of any difficulties the subject had in answering the questions; and anything else that reveals the circumstances surrounding the subject's answers.

Cognitive pretesting is considered essential in testing the validity of an interview, test, or questionnaire. [7]

Purpose

The purpose of these pretests is to identify problematic questions, confirm that respondents interpret questions as the researcher intends, and revise the instrument before it is fielded.

Types

In general, many methods are practiced when conducting a cognitive pretest, including conventional pretesting, cognitive interviewing, behavior coding, respondent debriefing, group discussion, expert review, eye tracking, and web probing. [1]

Conventional pretesting - This is similar to a rehearsal: a simulation of the real test or interview that takes place prior to the real one. Whatever method will be used in the actual interview or test should also be used in this method of pretesting. [1] [8]

Cognitive pretesting (cognitive interviewing) - Very similar to conventional pretesting, except that participants are actively asked about the questions as they take the test; the probing is conducted during the interview or test itself. [1] [6]

Pretests can also be administered in multiple ways, including written surveys, oral surveys, and electronic surveys. [4]

Techniques

There are certain techniques that the interviewer implements in cognitive pretesting to extract the information needed to ensure a good interview or questionnaire.

The think-aloud technique - This occurs when the interviewer asks the interviewee to vocalize their thoughts and how they came to their answer. This can be done concurrently (during the interview) or retrospectively (after it). [1] [2]

Probing technique- This occurs when the interviewer asks the interviewee one or more follow-up questions. They 'probe' about the questions asked, terminology used, or even the responses. [1] [2] Probes can be concurrent (during the task but not to be disruptive of the task) or retrospective (after the task). [9]

Paraphrasing - This occurs when the interviewer asks the interviewee to repeat the question in their own words. This tests whether the questions are understandable. [1]

Confidence rating- This occurs when the interviewer asks the interviewee about their confidence in how correctly they answered the question. [1]

Sorting or card sorting - This occurs when the interviewer asks the interviewee to group items in order to understand how the interviewee categorizes certain situations or terms. [1] [9]

Vignettes - These are short descriptions of one or more hypothetical characters (similar to vignettes used in psychological and sociological experiments or anchoring vignettes in quantitative survey research [10] ) and are used to investigate the respondent's cognitive processing with regard to their survey-relevant decisions. [9] [11]

Web probing - This technique implements cognitive interview probing techniques in web surveys. Its strengths include standardization, anonymity, and large, fast coverage because it is administered via the web. However, web probing can only reach online population groups, probe nonresponse occurs, and probe answers that are insufficient in content cannot be followed up. [12] [13]

Participants and recruitment

Sample size is an important consideration in pretests. Small samples of 5–15 participants are common. While some researchers suggest that the sample should include at least 30 people, and that more is always better, [14] current best practice is to design the research in rounds so that changes can be retested. For example, when pretesting a questionnaire, it is more useful to conduct 3 rounds of 9 participants than 1 round of 27. [9]

There are two different ways of informing participants about the questionnaire: participating pretests, in which respondents are told they are taking part in a pretest, and undeclared pretests, in which they are not. [4]

Cross-cultural research

When conducting cognitive interviews in non-English languages, recent research recommends not restricting sample selection and recruitment to non-English-speaking monolinguals, which was a common practice among survey researchers. [15] [16] When using purposive sampling to recruit hard-to-reach respondents or respondents with specific characteristics, community-based recruitment (word of mouth, endorsement from community leaders) works better than advertisements. [17] [18] [19]

Use by survey researchers

Cognitive interviewing is regularly practiced by U.S. Federal Agencies, including the Census Bureau, [20] [21] National Center for Health Statistics (NCHS), [22] and the Bureau of Labor Statistics. [23] The NCHS maintains a database of U.S. and international agencies that have conducted cognitive interview projects and contributed reports to their depository, such as the National Science Foundation and GESIS – Leibniz Institute for the Social Sciences. [24]

Cross-cultural cognitive interviewing is practiced to evaluate survey question equivalence and sources of difficulties, as well as to repair problems related to translation. [25] [26] Because of differences in communication styles and cultural norms, adaptations are needed in protocol setup [27] and design, [28] use of vignettes, [11] and verbal probing. [29]

Standards

In October 2016, the U.S. Office of Management and Budget (OMB) issued Statistical Policy Directive No. 2 Addendum: Standards and Guidelines for Cognitive Interviews that included seven standards for cognitive interviews conducted by or for U.S. Federal studies. Another standard proposed by researchers is the Cognitive Interviewing Reporting Framework (CIRF) that applies a 10-category checklist to make clear what was done during the cognitive interviews and how conclusions were made based on procedures and results of those interviews. [30] In addition, a project management approach is recommended when managing cognitive interviewing studies. [31] For translated surveys, cognitive interviewing techniques, participant selection and recruitment, and project management approach must be adapted to increase their fit for use. [28]

Related Research Articles

Usability testing is a technique used in user-centered interaction design to evaluate a product by testing it on users. This can be seen as an irreplaceable usability practice, since it gives direct input on how real users use the system. It is more concerned with the design intuitiveness of the product and tested with users who have no prior exposure to it. Such testing is paramount to the success of an end product as a fully functioning application that creates confusion amongst its users will not last for long. This is in contrast with usability inspection methods where experts use different methods to evaluate a user interface without involving users.

Concept testing is the process of using surveys to evaluate consumer acceptance of a new product idea prior to the introduction of a product to the market. It is important not to confuse concept testing with advertising testing, brand testing and packaging testing, as is sometimes done. Concept testing focuses on the basic product idea, without the embellishments and puffery inherent in advertising.


An interview is a structured conversation where one participant asks questions, and the other provides answers. In common parlance, the word "interview" refers to a one-on-one conversation between an interviewer and an interviewee. The interviewer asks questions to which the interviewee responds, usually providing information. That information may be used or provided to other audiences immediately or later. This feature is common to many types of interviews – a job interview or interview with a witness to an event may have no other audience present at the time, but the answers will be later provided to others in the employment or investigative process. An interview may also transfer information in both directions.

Questionnaire construction refers to the design of a questionnaire to gather statistically useful information about a given topic. When properly constructed and responsibly administered, questionnaires can provide valuable data about any given subject.

Survey methodology is "the study of survey methods". As a field of applied statistics concentrating on human-research surveys, survey methodology studies the sampling of individual units from a population and associated techniques of survey data collection, such as questionnaire construction and methods for improving the number and accuracy of responses to surveys. Survey methodology targets instruments or procedures that ask one or more questions that may or may not be answered.

Qualitative marketing research involves a natural or observational examination of the philosophies that govern consumer behavior. The direction and framework of the research is often revised as new information is gained, allowing the researcher to evaluate issues and subjects in an in-depth manner. The quality of the research produced is heavily dependent on the skills of the researcher and is influenced by researcher bias.


A questionnaire is a research instrument that consists of a set of questions for the purpose of gathering information from respondents through a survey or statistical study. A research questionnaire is typically a mix of close-ended and open-ended questions. Open-ended, long-form questions offer the respondent the ability to elaborate on their thoughts. The research questionnaire was developed by the Statistical Society of London in 1838.


Response bias is a general term for a wide range of tendencies for participants to respond inaccurately or falsely to questions. These biases are prevalent in research involving participant self-report, such as structured interviews or surveys. Response biases can have a large impact on the validity of questionnaires or surveys.

A structured interview is a quantitative research method commonly employed in survey research. The aim of this approach is to ensure that each interview is presented with exactly the same questions in the same order. This ensures that answers can be reliably aggregated and that comparisons can be made with confidence between sample sub groups or between different survey periods.

In social science research, social-desirability bias is a type of response bias that is the tendency of survey respondents to answer questions in a manner that will be viewed favorably by others. It can take the form of over-reporting "good behavior" or under-reporting "bad", or undesirable behavior. The tendency poses a serious problem with conducting research with self-reports. This bias interferes with the interpretation of average tendencies as well as individual differences.

A self-report study is a type of survey, questionnaire, or poll in which respondents read the question and select a response by themselves without any outside interference. A self-report is any method which involves asking a participant about their feelings, attitudes, beliefs and so on. Examples of self-reports are questionnaires and interviews; self-reports are often used as a way of gaining participants' responses in observational studies and experiments.

Computer-assisted web interviewing (CAWI) is an Internet surveying technique in which the interviewee follows a script provided in a website. The questionnaires are made in a program for creating web interviews. The program allows for the questionnaire to contain pictures, audio and video clips, links to different web pages, etc. The website is able to customize the flow of the questionnaire based on the answers provided, as well as information already known about the participant. It is considered to be a cheaper way of surveying since one doesn't need to use people to hold surveys unlike computer-assisted telephone interviewing. With the increasing use of the Internet, online questionnaires have become a popular way of collecting information. The design of an online questionnaire has a dramatic effect on the quality of data gathered. There are many factors in designing an online questionnaire; guidelines, available question formats, administration, quality and ethic issues should be reviewed. Online questionnaires should be seen as a sub-set of a wider-range of online research methods.


An unstructured interview or non-directive interview is an interview in which questions are not prearranged. These non-directive interviews are considered to be the opposite of a structured interview, which offers a set amount of standardized questions. The form of the unstructured interview varies widely, with some questions being prepared in advance in relation to a topic that the researcher or interviewer wishes to cover. They tend to be more informal and free-flowing than a structured interview, much like an everyday conversation. Probing is seen to be the part of the research process that differentiates the in-depth, unstructured interview from an everyday conversation. This conversational nature allows for spontaneity and for questions to develop during the course of the interview based on the interviewees' responses. The chief feature of the unstructured interview is the idea of probe questions that are designed to be as open as possible. It is a qualitative research method and accordingly prioritizes validity and the depth of the interviewees' answers. One of the potential drawbacks is the loss of reliability, which makes it more difficult to draw patterns among interviewees' responses in comparison to structured interviews. Unstructured interviews are used in a variety of fields and circumstances, ranging from research in social sciences, such as sociology, to college and job interviews. Fontana and Frey have identified three types of in-depth, ethnographic, unstructured interviews: oral history, creative interviews, and post-modern interviews.

An expert report is a study written by one or more authorities that states findings and offers opinions.

Linguistic validation is the process of investigating the reliability, conceptual equivalence, and content validity of translations of patient-reported outcome (PRO) measures.

With the application of probability sampling in the 1930s, surveys became a standard tool for empirical research in social sciences, marketing, and official statistics. The methods involved in survey data collection are any of a number of ways in which data can be collected for a statistical survey. These are methods that are used to collect information from a sample of individuals in a systematic way. First there was the change from traditional paper-and-pencil interviewing (PAPI) to computer-assisted interviewing (CAI). Now, face-to-face surveys (CAPI), telephone surveys (CATI), and mail surveys are increasingly replaced by web surveys. In addition, remote interviewers could possibly keep the respondent engaged while reducing cost as compared to in-person interviewers.


Computer-assisted survey information collection (CASIC) refers to a variety of survey modes that were enabled by the introduction of computer technology. The first CASIC modes were interviewer-administered, while later on computerized self-administered questionnaires (CSAQ) appeared. It was coined in 1990 as a catch-all term for survey technologies that have expanded over time.

A translation project is a project that deals with the activity of translating.


An interview in qualitative research is a conversation where questions are asked to elicit information. The interviewer is usually a professional or paid researcher, sometimes trained, who poses questions to the interviewee, in an alternating series of usually brief questions and answers. They can be contrasted with focus groups in which an interviewer questions a group of people and observes the resulting conversation between interviewees, or surveys which are more anonymous and limit respondents to a range of predetermined answer choices. In addition, there are special considerations when interviewing children. In phenomenological or ethnographic research, interviews are used to uncover the meanings of central themes in the life world of the subjects from their own point of view.

A 24-hour diet recall is a dietary assessment tool that consists of a structured interview in which participants are asked to recall all food and drink they have consumed in the previous 24 hours. It may be self-administered.

References

  1. Lenzner, Timo; Neuert, Cornelia; Otto, Wanda (2016). "Kognitives Pretesting" [Cognitive Pretesting]. GESIS Survey Guidelines. doi:10.15465/gesis-sg_en_010.
  2. Tilley, Barbara C.; LaPelle, Nancy R.; Goetz, Christopher G.; Stebbins, Glenn T. (2014). "Using Cognitive Pretesting in Scale Development for Parkinson's Disease: The Movement Disorder Society Unified Parkinson's Disease Rating Scale (MDS-UPDRS) Example". Journal of Parkinson's Disease. 4 (3): 395–404. doi:10.3233/JPD-130310. ISSN 1877-7171. PMC 5086096. PMID 24613868.
  3. "Pretesting - Cross-Cultural Survey Guidelines". ccsg.isr.umich.edu. Retrieved 2020-07-18.
  4. "Writing@CSU". writing.colostate.edu. Retrieved 2020-07-18.
  5. Grimm, Pamela (2010), "Pretesting a Questionnaire", Wiley International Encyclopedia of Marketing, Wiley, doi:10.1002/9781444316568.wiem02051, ISBN 978-1-4443-1656-8.
  6. Babonea, Alina-Mihaela; Voicu, Mirela-Cristina (April 2011). "Questionnaires pretesting in marketing research". Challenges of the Knowledge Society. Romania: Nicolae Titulescu University Publishing House. 1: 1323–1330. ISSN 2068-7796.
  7. "GESIS - Leibniz Institute for the Social Sciences". www.gesis.org. Retrieved 2020-07-18.
  8. Hu, Shu (2014), "Pretesting", in Michalos, Alex C. (ed.), Encyclopedia of Quality of Life and Well-Being Research, Dordrecht: Springer Netherlands, pp. 5048–5052, doi:10.1007/978-94-007-0753-5_2256, ISBN   978-94-007-0753-5
  9. Willis, Gordon (2005). Cognitive Interviewing: A Tool for Improving Questionnaire Design. Sage. p. 146. ISBN 9780761928041.
  10. "Anchoring Vignettes Overview". gking.harvard.edu. Retrieved 2023-10-26.
  11. Sha, Mandy (2016-08-01). "The Use of Vignettes in Evaluating Asian Language Questionnaire Items". Survey Practice. 9 (3). doi:10.29115/SP-2016-0013.
  12. "Web Probing". GESIS - Leibniz Institute for the Social Sciences. Retrieved 2023-10-24.
  13. Fowler, Stephanie; B. Willis, Gordon (2020-01-02), Beatty, Paul; Collins, Debbie; Kaye, Lyn; Padilla, Jose Luis (eds.), "The Practice of Cognitive Interviewing Through Web Probing", Advances in Questionnaire Design, Development, Evaluation and Testing (1 ed.), Wiley, pp. 451–469, doi:10.1002/9781119263685.ch18, ISBN   978-1-119-26362-3 , retrieved 2023-10-24
  14. Perneger, Thomas V.; Courvoisier, Delphine S.; Hudelson, Patricia M.; Gayet-Ageron, Angèle (2015-01-01). "Sample size for pre-tests of questionnaires". Quality of Life Research. 24 (1): 147–151. doi:10.1007/s11136-014-0752-2. ISSN   1573-2649. PMID   25008261. S2CID   22314144.
  15. Park, Hyunjoo; Sha, M. Mandy; Willis, Gordon (November 2016). "Influence of English-language Proficiency on the Cognitive Processing of Survey Questions". Field Methods. 28 (4): 415–430. doi:10.1177/1525822X16630262. ISSN   1525-822X.
  16. Goerman, Patricia L.; Meyers, Mikelyn; Sha, Mandy (2018-10-12), Johnson, Timothy P.; Pennell, Beth‐Ellen; Stoop, Ineke A.L.; Dorer, Brita (eds.), "Working Toward Comparable Meaning of Different Language Versions of Survey Instruments: Do Monolingual and Bilingual Cognitive Testing Respondents Help to Uncover the Same Issues?", Advances in Comparative Survey Methods (1 ed.), Wiley, pp. 251–269, doi:10.1002/9781118884997.ch12, ISBN   978-1-118-88498-0 , retrieved 2023-10-24
  17. Sha, M. Mandy; Park, Hyunjoo; Liu, Lu (2013-10-01). "Exploring the efficiency and utility of methods to recruit non-English speaking qualitative research participants". Survey Practice. 6 (3). doi: 10.29115/SP-2013-0015 .
  18. Park, Hyunjoo; Sha, M. Mandy (2014-06-01). "Evaluating the Efficiency of Methods to Recruit Asian Research Participants". Journal of Official Statistics. 30 (2): 335–354. doi: 10.2478/jos-2014-0020 .
  19. Sha, Mandy; Moncada, Jennifer (2017-06-01). "Successful Techniques to Recruit Hispanic and Latino Research Participants". Survey Practice. 10 (3). doi: 10.29115/SP-2017-0014 .
  20. Virgile, M.; Katz, J.; Tuttle, D.; Terry, R.; Graber, J. (2019). "Cognitive Pretesting of 2019 American Housing Survey Modules". United States Census Bureau. Archived from the original on 2020-08-08. Retrieved 2020-05-20.
  21. Childs, Jennifer; Sha, Mandy; Peytcheva, Emilia. "Cognitive Testing of the Targeted Coverage Follow-up (TCFU) Interview". Census Working Papers. Retrieved October 4, 2023.
  22. "Q-Bank: Question Evaluation for Surveys". wwwn.cdc.gov. Retrieved 2023-11-05.
  23. Schwartz, Lisa K. "The American Time Use Survey: Cognitive Pretesting". Monthly Labor Review. U.S. Bureau of Labor Statistics. www.bls.gov. Retrieved 2020-05-27.
  24. "Explore Reports by Agency - Q-Bank". wwwn.cdc.gov. Retrieved 2023-11-05.
  25. Willis, Gordon (May 2, 2015). "The Practice of Cross-Cultural Cognitive Interviewing". Public Opinion Quarterly . 79 (S1): 359–395.
  26. Sha, Mandy; Aizpurua, Eva (2020). The essential role of language in survey research: Chapter 7 Pretesting methods in cross-cultural research. RTI Press. pp. 129–150. ISBN   978-1-934831-23-6.
  27. Park, Hyunjoo; Goerman, Patricia; Sha, Mandy (2017-06-01). "Exploring the Effects of Pre-interview Practice in Asian Language Cognitive Interviews". Survey Practice. 10 (3). doi: 10.29115/SP-2017-0019 .
  28. Sha, Mandy; Pan, Yuling (2013-12-01). "Adapting and Improving Methods to Manage Cognitive Pretesting of Multilingual Survey Instruments". Survey Practice. 6 (4). doi:10.29115/SP-2013-0024.
  29. Mneimneh, Zeina Nazih (2018-07-25). Sha, Mandy; Behr, Dorothée (eds.). "Probing for sensitivity in translated survey questions: Differences in respondent feedback across cognitive probe types". Translation & Interpreting. 10 (special issue on translation of questionnaires in cross-national and cross-cultural research): 73–88. ISSN   1836-9324.
  30. Boeije, Hennie; Willis, Gordon (August 2013). "The Cognitive Interviewing Reporting Framework (CIRF)". Methodology. 9 (3): 87–95. doi:10.1027/1614-2241/a000075. ISSN   1614-1881.
  31. Sha, Mandy; Childs, Jennifer Hunter (2014-08-01). "Applying a project management approach to survey research projects that use qualitative methods". Survey Practice. 7 (4). doi: 10.29115/SP-2014-0021 .