Computer-assisted web interviewing

Computer-assisted web interviewing (CAWI) is an Internet surveying technique in which the interviewee follows a script provided on a website. The questionnaires are made in a program for creating web interviews. The program allows the questionnaire to contain pictures, audio and video clips, links to different web pages, and so on. The website is able to customize the flow of the questionnaire based on the answers provided, as well as on information already known about the participant. It is considered a cheaper way of surveying since, unlike computer-assisted telephone interviewing, it does not require interviewers. With the increasing use of the Internet, online questionnaires have become a popular way of collecting information. The design of an online questionnaire has a dramatic effect on the quality of data gathered. There are many factors in designing an online questionnaire; guidelines, available question formats, administration, quality, and ethical issues should all be reviewed. Online questionnaires should be seen as a subset of a wider range of online research methods.
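
As an illustration of the flow customization described above, the sketch below shows how a questionnaire engine might route respondents based on earlier answers. It is a minimal TypeScript sketch; the Question shape and all names are hypothetical, not taken from any real survey package.

    // Minimal sketch of answer-based branching, the core of a CAWI script.
    interface Question {
      id: string;
      text: string;
      branches?: Record<string, string>; // answer value -> id of next question
      defaultNext?: string;              // fallback when no branch matches
    }

    const questions: Record<string, Question> = {
      q1: { id: "q1", text: "Do you own a car?", branches: { yes: "q2", no: "q3" } },
      q2: { id: "q2", text: "How many kilometres do you drive per week?", defaultNext: "q3" },
      q3: { id: "q3", text: "How do you usually commute?" },
    };

    function nextQuestion(current: Question, answer: string): Question | undefined {
      const nextId = current.branches?.[answer] ?? current.defaultNext;
      return nextId ? questions[nextId] : undefined; // undefined ends the interview
    }

    // Non-owners skip the driving question entirely:
    console.log(nextQuestion(questions["q1"], "no")?.text); // "How do you usually commute?"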

Using online questionnaires

There are several reasons why a researcher might choose online questionnaires as the preferred method of data collection. A few of the advantages and disadvantages of this method are summarized below: [1] [2]

Advantages

  • Lower cost, since no interviewers are needed, unlike telephone or face-to-face interviewing. [1]
  • Questionnaires can incorporate pictures, audio and video clips, and links, and can automatically adapt their flow to earlier answers. [1]
  • Responses are captured directly in electronic form, ready for analysis.

Disadvantages

  • Response rates are frequently low and risk dropping further due to over-surveying of web users.
  • Participation is limited to people with Internet access, which can bias the sample.
  • No interviewer is present to clarify questions or motivate the respondent.

Questionnaire design

An online questionnaire needs to be carefully thought through before it is launched. There are several important considerations to take into account when creating an online questionnaire. [1]

Collection and prioritization of data

Online questionnaire format

Prototyping

Question formats

In designing a questionnaire, the evaluation method should be kept in mind when choosing the response format. This section describes various response formats that can be used in online questionnaires. [1]

Radio buttons

The respondent is required to click on the circle that corresponds to the desired answer. A dot will appear in the middle of the circle once an answer is chosen. Only one answer can be chosen. [1]

  • Recommended when the answer choices are mutually exclusive.
  • No default answer should be provided; a pre-selected option may be mistaken for an answer when the respondent intended to skip the question (see the sketch after this list).
  • Requires precision in clicking. [1]
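
The guidelines above can be enforced in markup. Below is a minimal TypeScript/DOM sketch (a browser environment is assumed, and the function name is illustrative) of a radio-button question that leaves no option pre-selected:

    // Renders a mutually exclusive radio-button question with no default answer.
    function renderRadioQuestion(name: string, label: string, options: string[]): HTMLElement {
      const fieldset = document.createElement("fieldset");
      const legend = document.createElement("legend");
      legend.textContent = label;
      fieldset.appendChild(legend);
      for (const option of options) {
        const wrapper = document.createElement("label");
        const input = document.createElement("input");
        input.type = "radio";
        input.name = name;   // shared name makes the choices mutually exclusive
        input.value = option;
        // input.checked is deliberately left false: no default answer.
        wrapper.append(input, ` ${option}`);
        fieldset.appendChild(wrapper);
      }
      return fieldset;
    }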

Check boxes

The respondent is required to click on the box next to each desired answer. A checkmark appears in the box once an answer is chosen. More than one answer can be selected. [1]

  • If there are many options, a simple matrix is recommended. [7]
  • When using check boxes, if more than one answer can be checked, this should be specified in the instructions. [7]
  • If "none of the above" is required, provide it as a radio button, or otherwise prevent it from being erroneously checked when another answer has been chosen (see the sketch after this list). [1]
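
One common scripted way to keep "none of the above" from being combined with a real answer is to make it an exclusive checkbox, a variant of the radio-button approach suggested above. A minimal TypeScript/DOM sketch (browser assumed, names illustrative):

    // Checking "None of the above" clears the other boxes, and vice versa.
    function renderCheckboxQuestion(name: string, options: string[], noneLabel: string): HTMLElement {
      const fieldset = document.createElement("fieldset");
      const boxes: HTMLInputElement[] = [];
      for (const option of [...options, noneLabel]) {
        const wrapper = document.createElement("label");
        const input = document.createElement("input");
        input.type = "checkbox";
        input.name = name;
        input.value = option;
        boxes.push(input);
        wrapper.append(input, ` ${option}`);
        fieldset.appendChild(wrapper);
      }
      const noneBox = boxes[boxes.length - 1];
      fieldset.addEventListener("change", (event) => {
        const changed = event.target as HTMLInputElement;
        if (changed === noneBox && noneBox.checked) {
          boxes.forEach((b) => { if (b !== noneBox) b.checked = false; });
        } else if (changed !== noneBox && changed.checked) {
          noneBox.checked = false; // a real answer clears "none of the above"
        }
      });
      return fieldset;
    }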

Drop-down boxes

The respondent is required to click on the arrow on the far right side of the box. [1] Once clicked, a list of answers is displayed. [3] A scroll bar may appear on the right-hand side if a large number of answers are displayed. [3] The respondent can click on the highlighted item in the list to select an answer, [1] which will then appear in the box. Only one answer can be selected for this type of question. [1]

  • A good option for long lists, such as state or country of residence. [3]
  • Should be avoided for items where typing is faster, such as year of birth. [3]
  • In designing drop-down boxes, do not make a real answer the initially visible option; a pre-displayed option can be mistaken for a chosen answer even when the respondent has not answered (see the sketch after this list). [7]
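
The "no visible first option" guideline is usually implemented with a disabled placeholder entry. A minimal TypeScript/DOM sketch (browser assumed, names illustrative):

    // The first entry is a disabled prompt, so no real answer appears pre-chosen.
    function renderDropDown(name: string, prompt: string, options: string[]): HTMLSelectElement {
      const select = document.createElement("select");
      select.name = name;
      const placeholder = document.createElement("option");
      placeholder.textContent = prompt; // e.g. "-- select your country --"
      placeholder.value = "";
      placeholder.disabled = true;      // cannot be submitted as an answer
      placeholder.selected = true;      // shown initially instead of a real option
      select.appendChild(placeholder);
      for (const option of options) {
        const item = document.createElement("option");
        item.value = option;
        item.textContent = option;
        select.appendChild(item);
      }
      return select;
    }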

Open-ended questions

Open-ended questions are those that allow respondents to answer in their own words. In an online survey, text boxes are provided alongside the question prompt for respondents to type their answer into. Open-ended questions seek a free response and aim to capture what is foremost in the respondent's mind. They are useful when asking about attitudes or feelings, likes and dislikes, memory recall, opinions, or additional comments. [10]

The respondent is required to click inside the text box to place the cursor in it. Once the cursor is blinking inside the box, the answer to the question can be typed in. [7]

  • Size the text box according to the amount of information desired and required from the respondent (see the sketch after this list). [1]
  • Provide concise and clear input instructions.
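
A minimal TypeScript/DOM sketch of sizing the text box to the expected answer (browser assumed; the row and column counts are illustrative, not prescribed values):

    // A larger box signals that a fuller answer is wanted.
    function renderOpenQuestion(name: string, prompt: string, expectLongAnswer: boolean): HTMLElement {
      const wrapper = document.createElement("label");
      wrapper.append(prompt, document.createElement("br"));
      const box = document.createElement("textarea");
      box.name = name;
      box.rows = expectLongAnswer ? 6 : 2; // bigger box for open commentary
      box.cols = 60;
      box.placeholder = "Type your answer here"; // concise input instruction
      wrapper.appendChild(box);
      return wrapper;
    }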

Rating scales

The respondent must select one value from a scale of possible options; for example, poor, fair, good, or excellent. Rating scales allow the person conducting the survey to measure and compare sets of variables.

  • If using rating scales, be consistent throughout the survey: use the same number of points on the scale, and make sure the meanings of high and low stay the same throughout (see the sketch after this list).
  • Use an odd number of points in the rating scale to make data analysis easier. Switching the rating scales around will confuse survey takers, leading to untrustworthy responses.
  • Limit the number of items in ranking or rating-scale questions to fewer than ten; such questions become difficult to read beyond that, and longer rating or ranking questions can also cause display issues in some environments.
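
One way to keep the number of points and the endpoint meanings consistent is to define the scale once and reuse it for every rating question. A minimal TypeScript/DOM sketch (browser assumed, names illustrative):

    // A single odd-numbered scale, reused so "low" and "high" never swap meaning.
    const SCALE = ["Poor", "Fair", "Good", "Very good", "Excellent"]; // 5 points

    function renderRatingQuestion(name: string, label: string): HTMLElement {
      const fieldset = document.createElement("fieldset");
      const legend = document.createElement("legend");
      legend.textContent = label;
      fieldset.appendChild(legend);
      SCALE.forEach((point, i) => {
        const wrapper = document.createElement("label");
        const input = document.createElement("input");
        input.type = "radio";
        input.name = name;
        input.value = String(i + 1); // stored as 1 (low) … 5 (high) for analysis
        wrapper.append(input, ` ${point}`);
        fieldset.appendChild(wrapper);
      });
      return fieldset;
    }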

Online questionnaires

Responses

Response rates are frequently quite low, and there is a danger that they will continue to drop due to over-surveying of web users.

Jon Krosnick argues that the following three factors determine the success of a questionnaire and the likelihood of achieving decent levels of response.

  1. Respondent ability
  2. Respondent motivation
  3. Task difficulty/questionnaire design [11]

Bosnjak and Tuten argue that there are at least seven distinct ways in which people respond to online surveys. [12]

They establish the following typology (a coded sketch of this classification follows the list):

  1. Complete responders are those respondents who view all questions and answer all questions.
  2. Unit nonresponders are those individuals who do not participate in the survey. There are two possible variations to the unit nonresponder. Such an individual could be technically hindered from participation, or he or she may purposefully withdraw after the welcome screen is displayed, but prior to viewing any questions.
  3. Answering drop-outs consist of individuals who provide answers to the questions displayed, but quit prior to completing the survey.
  4. Lurkers view all of the questions in the survey, but do not answer any of the questions.
  5. Lurking drop-outs represent a combination of 3 and 4. Such a participant views some of the questions without answering, but also quits the survey prior to reaching the end.
  6. Item non-responders view the entire questionnaire, but only answer some of the questions.
  7. Item non-responding drop-outs represent a mixture of 3 and 6. Individuals displaying this response behavior view some of the questions, answer some but not all of the questions viewed, and also quit prior to the end of the survey.
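
The seven types can be expressed as a classifier over simple paradata. A minimal TypeScript sketch; the field names are illustrative, not taken from Bosnjak and Tuten:

    interface Paradata {
      totalQuestions: number; // questions in the survey
      viewed: number;         // questions the respondent displayed
      answered: number;       // questions the respondent answered
    }

    function classifyResponse({ totalQuestions, viewed, answered }: Paradata): string {
      if (viewed === 0) return "unit nonresponder";                           // type 2
      const sawAll = viewed === totalQuestions;
      if (sawAll && answered === totalQuestions) return "complete responder"; // type 1
      if (sawAll && answered === 0) return "lurker";                          // type 4
      if (sawAll) return "item nonresponder";                                 // type 6
      if (answered === 0) return "lurking drop-out";                          // type 5
      if (answered === viewed) return "answering drop-out";                   // type 3
      return "item non-responding drop-out";                                  // type 7
    }

    console.log(classifyResponse({ totalQuestions: 20, viewed: 20, answered: 12 })); // "item nonresponder"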

Administration

Once the questionnaire is designed, it must be administered to the appropriate sample population for data collection. [8] Attracting the appropriate target audience often requires advertisement, and various methods are used to attract participants. This usually helps attract willing participants, who ultimately provide better-quality data than reluctant participants.

The location where an online questionnaire is administered can matter when a specific environment is required. [1] A quiet environment may be needed for questions that require concentration. [13] The questionnaire may need to be administered in a secluded environment to protect sensitive information provided by the participant, [9] and security measures may need to be added to the software in such cases. [5] In contrast, online questionnaires can also be very informal and relaxed, completed in the comfort of the respondent's home. [1]

Quality

Questionnaire quality can be measured through the value of the data obtained and participant satisfaction. [1] To maintain a high-quality questionnaire, length, conciseness, and question sequence should be considered. [13] First, questionnaires should only be as long as they need to be. [3] [4] Second, conciseness can be achieved by removing redundant and irrelevant questions, which frustrate participants without adding value to the research. [8] Finally, placing questions in a logical sequence gives participants a better mental map as they fill out the questionnaire; [3] moving randomly between subjects or ordering answers in a non-intuitive sequence can confuse them. [1]

Ethics

Ethical issues should be considered when gathering data from a target audience, keeping in mind the rights and interests of the participants. [1]

Related Research Articles

Questionnaire construction refers to the design of a questionnaire to gather statistically useful information about a given topic. When properly constructed and responsibly administered, questionnaires can provide valuable data about any given subject.

Survey methodology is "the study of survey methods". As a field of applied statistics concentrating on human-research surveys, survey methodology studies the sampling of individual units from a population and associated techniques of survey data collection, such as questionnaire construction and methods for improving the number and accuracy of responses to surveys. Survey methodology targets instruments or procedures that ask one or more questions that may or may not be answered.

Quantitative marketing research is the application of quantitative research techniques to the field of marketing research. It has roots in both the positivist view of the world, and the modern marketing viewpoint that marketing is an interactive process in which both the buyer and seller reach a satisfying agreement on the "four Ps" of marketing: Product, Price, Place (location) and Promotion.

A Likert scale is a psychometric scale commonly involved in research that employs questionnaires. It is the most widely used approach to scaling responses in survey research, such that the term is often used interchangeably with rating scale, although there are other types of rating scales.

A questionnaire is a research instrument that consists of a set of questions for the purpose of gathering information from respondents through a survey or statistical study. A research questionnaire is typically a mix of close-ended questions and open-ended questions. Open-ended, long-form questions offer the respondent the ability to elaborate on their thoughts. The research questionnaire was developed by the Statistical Society of London in 1838.

Response bias is a general term for a wide range of tendencies for participants to respond inaccurately or falsely to questions. These biases are prevalent in research involving participant self-report, such as structured interviews or surveys. Response biases can have a large impact on the validity of questionnaires or surveys.

Computer-assisted telephone interviewing (CATI) is a telephone surveying technique in which the interviewer follows a script provided by a software application. It is a structured system of microdata collection by telephone that speeds up the collection and editing of microdata and also permits the interviewer to educate the respondents on the importance of timely and accurate data. The software is able to customize the flow of the questionnaire based on the answers provided, as well as information already known about the participant. It is used in B2B services and corporate sales.

Computer-assisted personal interviewing (CAPI) is an interviewing technique in which the respondent or interviewer uses an electronic device to answer the questions. It is similar to computer-assisted telephone interviewing, except that the interview takes place in person instead of over the telephone. This method is usually preferred over a telephone interview when the questionnaire is long and complex. It has been classified as a personal interviewing technique because an interviewer is usually present to serve as a host and to guide the respondent. If no interviewer is present, the term Computer-Assisted Self Interviewing (CASI) may be used. An example of a situation in which CAPI is used as the method of data collection is the British Crime Survey.

In social science research, social-desirability bias is a type of response bias that is the tendency of survey respondents to answer questions in a manner that will be viewed favorably by others. It can take the form of over-reporting "good" behavior or under-reporting "bad" or undesirable behavior. The tendency poses a serious problem with conducting research with self-reports. This bias interferes with the interpretation of average tendencies as well as individual differences.

In survey research, response rate, also known as completion rate or return rate, is the number of people who answered the survey divided by the number of people in the sample. It is usually expressed in the form of a percentage. The term is also used in direct marketing to refer to the number of people who responded to an offer.

A self-report study is a type of survey, questionnaire, or poll in which respondents read the question and select a response by themselves without any outside interference. A self-report is any method which involves asking a participant about their feelings, attitudes, beliefs and so on. Examples of self-reports are questionnaires and interviews; self-reports are often used as a way of gaining participants' responses in observational studies and experiments.

Cognitive pretesting, or cognitive interviewing, is a field research method where data is collected on how the subject answers interview questions. It is the evaluation of a test or questionnaire before it's administered. It allows survey researchers to collect feedback regarding survey responses and is used in evaluating whether the question is measuring the construct the researcher intends. The data collected is then used to adjust problematic questions in the questionnaire before fielding the survey to the full sample of people.

Audience response is a type of interaction associated with the use of audience response systems to create interactivity between a presenter and their audience.

Acquiescence bias, also known as agreement bias, is a category of response bias common to survey research in which respondents have a tendency to select a positive response option or indicate a positive connotation disproportionately more frequently. Respondents do so without considering the content of the question or their 'true' preference. Acquiescence is sometimes referred to as "yea-saying" and is the tendency of a respondent to agree with a statement when in doubt. Questions affected by acquiescence bias take the following format: a stimulus in the form of a statement is presented, followed by 'agree/disagree,' 'yes/no' or 'true/false' response options. For example, a respondent might be presented with the statement "gardening makes me feel happy," and would then be expected to select either 'agree' or 'disagree.' Such question formats are favoured by both survey designers and respondents because they are straightforward to produce and respond to. The bias is particularly prevalent in the case of surveys or questionnaires that employ truisms as the stimuli, such as: "It is better to give than to receive" or "Never a lender nor a borrower be". Acquiescence bias can introduce systematic errors that affect the validity of research by confounding attitudes and behaviours with the general tendency to agree, which can result in misguided inference. Research suggests that the proportion of respondents who carry out this behaviour is between 10% and 20%.

Real-time Delphi (RTD) is an advanced form of the Delphi method. The advanced method "is a consultative process that uses computer technology" to increase efficiency of the Delphi process.

Mode effect is a broad term referring to a phenomenon where a particular survey administration mode causes different data to be collected. For example, when asking a question using two different modes, responses to one mode may be significantly and substantially different from responses given in the other mode. Mode effects are a methodological artifact, limiting the ability to compare results from different modes of collection.

With the application of probability sampling in the 1930s, surveys became a standard tool for empirical research in social sciences, marketing, and official statistics. The methods involved in survey data collection are any of a number of ways in which data can be collected for a statistical survey. These are methods that are used to collect information from a sample of individuals in a systematic way. First there was the change from traditional paper-and-pencil interviewing (PAPI) to computer-assisted interviewing (CAI). Now, face-to-face surveys (CAPI), telephone surveys (CATI), and mail surveys are increasingly replaced by web surveys.

Computer-assisted survey information collection (CASIC) refers to a variety of survey modes that were enabled by the introduction of computer technology. The first CASIC modes were interviewer-administered, while later on computerized self-administered questionnaires (CSAQ) appeared. The term was coined in 1990 as a catch-all for survey technologies that have expanded over time.

Jon Alexander Krosnick is a professor of Political Science, Communication, and Psychology, and director of the Political Psychology Research Group (PPRG) at Stanford University. Additionally, he is the Frederic O. Glover Professor in Humanities and Social Sciences and an affiliate of the Woods Institute for the Environment. Krosnick has served as a consultant for government agencies, universities, and businesses, has testified as an expert in court proceedings, and has been an on-air television commentator on election night.

A salary survey is a tool specifically for remuneration specialists and managers to define a fair and competitive salary for the employees of a company. The survey output is data on the average or median salary for a specific position, taking into consideration the region, industry, company size, etc. Input data is aggregated directly from an employer or employee.

References

  1. Sharp, H., Rogers, Y., Preece, J., Interaction Design: Beyond Human-Computer Interaction. John Wiley & Sons, Inc. 2002.
  2. Reips, U.-D. (2000). The Web Experiment Method: Advantages, disadvantages, and solutions. In M. H. Birnbaum (Ed.), Psychological experiments on the Internet (pp. 89-118). San Diego, CA: Academic Press.
  3. Bradburn, Norman M., Sudman, Seymour, Wansink, Brian, Asking Questions: The Definitive Guide to Questionnaire Design – For Market Research, Political Polls, and Social and Health Questionnaires. Jossey-Bass. 2004.
  4. Shatz, Itamar (2017). "Fast, free, and targeted: Reddit as a source for recruiting participants online" (PDF). Social Science Computer Review. 35 (4): 537–549. doi:10.1177/0894439316650163. S2CID 64146439.
  5. Online Questionnaire Design Guide, "Web Based Questionnaires" [cited Mar 10, 2007]. [permanent dead link]
  6. StatPac, "Questionnaire Design - General Considerations" [cited Feb 24, 2007].
  7. Presser, Stanley, Rothgeb, Jennifer M., Couper, Mick P., Lessler, Judith T., Martin, Elizabeth, Martin, Jean, Singer, Eleanor, Methods for Testing and Evaluating Survey Questionnaires. John Wiley & Sons, Inc. 2004.
  8. Groves, Robert M., Fowler, Floyd J., Couper, Mick P., Lepkowski, James M., Singer, Eleanor, Tourangeau, Roger, Survey Methodology. John Wiley & Sons, Inc. 2004.
  9. National Research Council of Canada, "Online Questionnaire Design" [cited Mar 10, 2007]. Archived 2007-05-15 at the Wayback Machine.
  10. SurveyMonkey, Smart Survey Design (2007), http://s3.amazonaws.com/SurveyMonkeyFiles/SmartSurvey.pdf
  11. See "Jon Krosnick". Archived from the original on 2007-08-19. Retrieved 2008-02-20, for a list of Krosnick's publications.
  12. Bosnjak, M. and Tuten, T. L. (2001). Classifying response behaviors in web-based surveys. Journal of Computer-Mediated Communication, 6 (3). http://jcmc.indiana.edu/vol6/issue3/boznjak.html
  13. Couper, Mick P., Baker, Reginald P., Clark, Cynthia Z. F., Martin, Jean, Nicholls, William L., O'Reilly, James M., Computer Assisted Survey Information Collection. John Wiley & Sons, Inc. 1998.
