Mode effect

Mode effect is a broad term referring to the phenomenon whereby the mode used to administer a survey affects the data that are collected. For example, when the same question is asked in two different modes (e.g. paper and telephone), responses given in one mode may differ significantly and substantially from responses given in the other. Mode effects are a methodological artifact that limits the ability to compare results from different modes of collection.

Theory

Particular survey modes put respondents into different frames of mind, referred to as a mental "script".[1] This can affect the results they give. For example, a telephone interview may evoke the norms of an everyday conversation, whereas a self-completed paper form may feel more like filling in official paperwork.

Mode effects are likely to be larger when the differences between modes are larger[citation needed]. Face-to-face interviews are substantially different from self-completed pen-and-paper forms. By contrast, web surveys, pen-and-paper forms and other self-completed instruments are quite similar (each requires respondents to read and privately respond to a question), so mode effects between them may be minimised.

Users of surveys must consider the potential for mode effects when comparing results from studies in different modes. However, this is difficult, as mode effects can be complex and subject to interactions between respondent demographics, subject matter and mode. Unless the mode effects are formally investigated for the survey instrument, it is difficult to quantify their size, and qualitative judgments by experts familiar with the subject matter and the respective modes are required instead.

Social desirability bias

Studies of mode effects are sometimes contradictory, but some general patterns do emerge. For example, social desirability bias tends to be highest for telephone surveys and lowest for web surveys, with modes ranked roughly as follows, from most to least affected:[2][3]

  1. Telephone surveys
  2. Face-to-face surveys
  3. Interactive voice response (IVR) surveys
  4. Mail surveys
  5. Web surveys

Therefore, as the data collected on sensitive topics (such as sexual behaviour or illicit activities) will change depending on the administration mode, researchers should be cautious when combining data or comparing results from different modes.

Differences in questions between modes

Some modes require different question wording from others in order to suit the features of the mode. For example, self-complete forms can use lists of examples or extensive instructions to help respondents answer relatively complex questions. By contrast, in telephone interviews respondents are limited by their working memory and are unlikely to follow a long question with multiple sub-clauses. Another example is that a 'matrix' of questions, commonly found on self-complete forms, cannot be read out easily in a verbal interview; a matrix would generally need to be scripted as a series of individual questions, as sketched below.
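As a minimal illustration, a matrix question can be expanded programmatically into a sequence of standalone questions suitable for interviewer scripting. The question wording, data structure and function below are hypothetical, invented purely for illustration:

```python
# Sketch: expanding a self-complete 'matrix' question into a series of
# standalone questions that can be read aloud in a verbal interview.
# The wording and data structures here are hypothetical.

matrix_question = {
    "stem": "How satisfied are you with {item}?",
    "items": ["your pay", "your working hours", "your workload"],
    "scale": ["Very dissatisfied", "Dissatisfied", "Neutral",
              "Satisfied", "Very satisfied"],
}

def expand_matrix(matrix):
    """Expand a matrix question into individually scripted questions,
    each repeating the response scale so it can be read out in full."""
    scale_text = ", ".join(matrix["scale"])
    return [
        matrix["stem"].format(item=item) + f" Would you say: {scale_text}?"
        for item in matrix["items"]
    ]

for question in expand_matrix(matrix_question):
    print(question)
```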

Differences in question wording across modes may cause different data to be collected by different modes. However, this is not always the case, and appropriate adaptation of questions to a new mode can yield comparable data[citation needed]. Survey designers should consider the conventions of the mode when adapting questions. For example, while it may be acceptable to require respondents to calculate total figures themselves on a paper form, respondents may perceive this as burdensome in a web form (where they might expect totals to be calculated automatically by the computer). This may in turn change their attitude toward the form, altering their behaviour and ultimately changing the data collected.

Identifying and resolving mode effects

Mode effects can be identified by embedding an experiment within the survey, in which respondents are randomly allocated to one of the modes. Differences in results between the modes then estimate the 'mode effect' for that particular survey, as sketched below.
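As one sketch of how such an embedded experiment might be analysed (one of several possible analyses), the following compares the proportion giving a sensitive answer in two randomly allocated arms using a two-proportion z-test; all counts are invented:

```python
# Sketch: analysing an embedded mode experiment in which respondents
# were randomly allocated to a telephone or a web arm. The proportion
# giving a sensitive 'yes' answer is compared with a two-proportion
# z-test. All counts are invented for illustration.
from math import sqrt, erf

def two_proportion_z_test(yes_a, n_a, yes_b, n_b):
    """Return (z, two-sided p) for H0: both modes elicit the same
    proportion of 'yes' answers."""
    p_a, p_b = yes_a / n_a, yes_b / n_b
    pooled = (yes_a + yes_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical results: telephone arm vs. web arm of the same survey.
z, p = two_proportion_z_test(yes_a=52, n_a=400, yes_b=81, n_b=400)
print(f"z = {z:.2f}, p = {p:.4f}")  # a small p suggests a mode effect
```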

Once a mode effect has been quantified, it may be possible to use this information to reprocess existing data so that data collected in different modes can be compared (e.g. by backcasting a time series to estimate what past results 'would have' been had the survey been administered in the new mode).
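A minimal sketch of such reprocessing, assuming the mode effect has been estimated as a simple additive shift (real adjustments may be more complex, e.g. varying by subgroup or question); all figures are invented:

```python
# Sketch: backcasting a time series once a mode effect has been
# quantified. The effect is assumed here to be a constant additive
# shift estimated from an embedded experiment; figures are invented.

old_mode_series = {2018: 41.2, 2019: 42.0, 2020: 43.1}  # e.g. paper results (%)
mode_effect = 1.8  # estimated shift: the new mode reads 1.8 points higher

def backcast(series, effect):
    """Estimate what past results 'would have' been under the new mode
    by applying the estimated additive mode effect."""
    return {year: round(value + effect, 1) for year, value in series.items()}

print(backcast(old_mode_series, mode_effect))
# {2018: 43.0, 2019: 43.8, 2020: 44.9}
```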

Differential coverage between modes

Different administration modes may inherently exclude some parts of the target population. This potentially biases the sample that is taken and changes the data relative to what would have been collected using another mode. For example, people without a home phone are excluded from Random Digit Dialling (RDD) surveys, and people without internet access are unlikely to complete a web survey. This means different samples are taken from the population when different modes are used. Unless experiments are specifically designed to investigate differential coverage, mode effects will be confounded with coverage,[4] and significant differences between modes/experimental conditions could have several explanations:

  1. a genuine mode effect, in which the same respondents would answer differently under different modes; or
  2. differential coverage, in which the modes reach different subsets of the population and therefore sample different kinds of respondents.
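The simulation below illustrates this confounding under invented assumptions: an offline subgroup answers differently and is excluded from the web frame, so a telephone/web gap appears even though no measurement-level mode effect is simulated at all:

```python
# Sketch: coverage confounded with a mode effect. A subgroup without
# internet access both answers differently and cannot complete the web
# mode, so the raw telephone-vs-web gap admits both explanations above.
# All parameters are invented for illustration.
import random
from statistics import mean

random.seed(0)

def make_person():
    has_internet = random.random() < 0.85
    # The subgroups genuinely differ in their true answers; no
    # measurement-level mode effect is simulated.
    return {"internet": has_internet,
            "answer": random.gauss(50 if has_internet else 40, 10)}

population = [make_person() for _ in range(10_000)]

telephone_sample = random.sample(population, 500)      # assumed to cover everyone
web_frame = [p for p in population if p["internet"]]   # excludes the offline group
web_sample = random.sample(web_frame, 500)

print(f"telephone mean: {mean(p['answer'] for p in telephone_sample):.1f}")
print(f"web mean:       {mean(p['answer'] for p in web_sample):.1f}")
# The gap arises purely from coverage, yet in a naive comparison it
# would be indistinguishable from a mode effect.
```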

This problem is exacerbated when multiple modes are used in 'live' administration of a survey. Some surveys offer multiple modes so that respondents can choose the method most convenient for them; different 'types' of respondents are then expected to complete different modes based on their own choices. In this case mode effects are difficult to quantify: randomly allocating respondents to a condition does not reflect their preference, so such an experiment lacks external validity and its results would not directly generalise to situations that offer respondents a choice. Conversely, failing to randomly allocate participants (i.e. allowing them a choice, thereby retaining external validity) means that apparent differences between modes reflect the combined effect of (a) different respondent types choosing each mode and (b) any mode effects, as the sketch below illustrates.
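A companion sketch, again with invented parameters, shows how self-selection combines with a genuine mode effect: respondent 'type' drives both the true answer and the mode choice, so the observed gap between modes is the sum of a selection effect and the mode effect:

```python
# Sketch: self-selected mixed-mode collection. A respondent 'type'
# drives both the true answer and the choice of mode, and a genuine
# additive mode effect is layered on top. The observed gap between
# modes mixes the two. All parameters are invented for illustration.
import random
from statistics import mean

random.seed(1)
MODE_EFFECT = 2.0  # assumed: telephone inflates answers by 2 points

def respond():
    traditional = random.random() < 0.5        # respondent 'type'
    true_score = random.gauss(55 if traditional else 45, 10)
    mode = "phone" if traditional else "web"   # type drives mode choice
    observed = true_score + (MODE_EFFECT if mode == "phone" else 0.0)
    return mode, observed

responses = [respond() for _ in range(10_000)]
phone = [y for m, y in responses if m == "phone"]
web = [y for m, y in responses if m == "web"]
# Expected gap of about 12 = 10 (selection effect) + 2 (mode effect).
print(f"observed phone-web gap: {mean(phone) - mean(web):.1f}")
```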

References

  1. Groves, Robert M. (1989). Survey Errors and Survey Costs. New York: Wiley-Interscience.
  2. Kreuter, Frauke; Presser, Stanley; Tourangeau, Roger (2008). "Social Desirability Bias in CATI, IVR, and Web Surveys: The Effects of Mode and Question Sensitivity". Public Opinion Quarterly 72(5): 847–865. doi:10.1093/poq/nfn063.
  3. Holbrook, Allyson L.; Green, Melanie C.; Krosnick, Jon A. (2003). "Telephone versus Face-to-Face Interviewing of National Probability Samples with Long Questionnaires: Comparisons of Respondent Satisficing and Social Desirability Response Bias". Public Opinion Quarterly 67(1): 79–125. doi:10.1086/346010.
  4. de Leeuw, Edith D. (2005). "To Mix or Not to Mix Data Collection Modes in Surveys". Journal of Official Statistics 21(2): 233–255.