Participation bias

Participation bias or non-response bias is a phenomenon in which the results of elections, studies, polls, etc. become non-representative because the participants disproportionately possess certain traits that affect the outcome. These traits mean the sample is systematically different from the target population, potentially resulting in biased estimates.[1]

For instance, a study found that those who refused to answer a survey on AIDS tended to be "older, attend church more often, are less likely to believe in the confidentiality of surveys, and have lower sexual self disclosure".[2] Non-response may occur due to several factors, as outlined in Deming (1990).[3]

Non-response bias can be a problem in longitudinal research due to attrition during the study.[4]

Example

If one selects a sample of 1000 managers in a field and polls them about their workload, the managers with a high workload may not answer the survey because they do not have enough time to do so. Those with a low workload may also decline to respond, for fear that their supervisors or colleagues will perceive them as surplus employees (either immediately, if the survey is non-anonymous, or in the future, should their anonymity be compromised). Non-response bias may therefore make the measured workload too low, too high, or, if these effects happen to offset each other, "right for the wrong reasons". For a simple illustration of this effect, consider a survey that includes the item: "Agree or disagree: I have enough time in my day to complete a survey."
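The downward-pressure case above can be sketched in a small simulation. All numbers here are hypothetical: workloads are drawn uniformly between 30 and 70 hours, and the response probability is assumed to fall linearly with workload.

```python
import random

random.seed(42)

# Hypothetical population: 1000 managers with true weekly workloads
# drawn uniformly between 30 and 70 hours.
population = [random.uniform(30, 70) for _ in range(1000)]
true_mean = sum(population) / len(population)

# Assumed response mechanism: probability of answering falls linearly
# from 0.9 at 30 hours to 0.1 at 70 hours.
def response_prob(hours):
    return 0.9 - 0.8 * (hours - 30) / 40

respondents = [h for h in population if random.random() < response_prob(h)]
observed_mean = sum(respondents) / len(respondents)

print(f"True mean workload: {true_mean:.1f} h")
print(f"Respondent mean:    {observed_mean:.1f} h")
# Under this mechanism the survey understates the average workload.
```

Because overworked managers drop out more often, the respondent mean sits below the true mean; reversing the assumed response mechanism would bias the estimate upward instead.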

In the 1936 U.S. presidential election, The Literary Digest mailed out 10 million questionnaires, of which 2.4 million were returned. Based on these, it predicted that Republican Alf Landon would win with 370 of 531 electoral votes, whereas he received only eight. Research published in 1976 and 1988 concluded that non-response bias was the primary source of this error, although the Digest's sampling frame was also quite different from the vast majority of voters.[1] A 2014 study by Imam et al. reported that non-responders tended to be younger patients from poorer communities who were less satisfied, and that this could be a source of bias.[citation needed]

Test

There are different ways to test for non-response bias. A common technique involves comparing the first and fourth quartiles of responses for differences in demographics and key constructs.[5] In e-mail surveys, some values are already known for all potential participants (e.g. age, branch of the firm, ...) and can be compared with the values that prevail in the subgroup of those who answered. If there is no significant difference, this is an indicator that non-response bias may be absent.
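The quartile comparison can be sketched as follows. The ages and arrival order are invented for illustration; the idea, following Armstrong and Overton, is that late respondents serve as a proxy for non-respondents, so a large difference between the earliest and latest quarter of responses warns of non-response bias.

```python
from statistics import mean, stdev
from math import sqrt

# Hypothetical ages of respondents, listed in the order their
# answers arrived (earliest first).
ages = [34, 29, 41, 38, 25, 31, 45, 52, 48, 39,
        44, 50, 36, 55, 47, 58, 53, 61, 49, 57]

n = len(ages)
first_quartile = ages[: n // 4]     # earliest 25% of responses
fourth_quartile = ages[-(n // 4):]  # latest 25% of responses

# Welch's t-statistic comparing the two waves; a large absolute
# value suggests late (non-respondent-like) participants differ
# from early ones on this variable.
m1, m2 = mean(first_quartile), mean(fourth_quartile)
s1, s2 = stdev(first_quartile), stdev(fourth_quartile)
t = (m1 - m2) / sqrt(s1**2 / len(first_quartile)
                     + s2**2 / len(fourth_quartile))

print(f"early mean age {m1:.1f}, late mean age {m2:.1f}, t = {t:.2f}")
```

In this made-up data the late wave is markedly older than the early wave, which would prompt a closer look at who is missing from the sample.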

In e-mail surveys, those who did not answer can also be systematically phoned and asked a small number of the survey questions. If their answers do not differ significantly from those of the respondents, there might be no non-response bias. This technique is sometimes called non-response follow-up.

Generally speaking, the lower the response rate, the greater the likelihood that non-response bias is in play.
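This relationship has a simple deterministic form: the bias of the respondent mean equals the non-response share times the difference between the respondent and non-respondent means. A worked example with made-up numbers (in practice the non-respondent mean is unknown, which is what makes the bias hard to correct):

```python
# Hypothetical figures for the manager-workload example.
# bias = (non-response share) * (respondent mean - non-respondent mean)
response_rate = 0.40
mean_respondents = 42.0     # reported weekly hours among responders
mean_nonrespondents = 55.0  # (unknown in practice) hours among refusers

bias = (1 - response_rate) * (mean_respondents - mean_nonrespondents)
print(f"bias = {bias:.1f} hours")
```

With a 40% response rate the survey understates the true average by 7.8 hours; at a 90% response rate the same respondent/non-respondent gap would produce a bias of only 1.3 hours, illustrating why higher response rates shrink (but do not eliminate) the problem.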

See also

  Sampling bias
  Selection bias
  Sampling (statistics)
  Survey sampling
  Survey methodology
  Survey data collection
  Opinion poll
  Straw poll
  Questionnaire
  Response bias
  Social-desirability bias
  Response rate (survey)
  Snowball sampling
  Open-access poll
  Self-report study
  Missing data
  Automated telephone surveys
  Unstructured interview
  Total survey error
  Lurker

References

  1. Fowler, Floyd (2009). Survey Research Methods (4th ed.). SAGE. doi:10.4135/9781452230184. ISBN 9781412958417.
  2. "Participation Bias in AIDS-Related Telephone Surveys: Results From the National AIDS Behavioral Survey (NABS) Non-Response Study".
  3. Deming, W. Edwards (1990). Sample Design in Business Research. Vol. 23. John Wiley & Sons.
  4. Bowling, Ann (2014). Research Methods in Health: Investigating Health and Health Services. Milton Keynes. ISBN 9780335262755. OCLC 887254158.
  5. Armstrong, J.S.; Overton, T. (1977). "Estimating Nonresponse Bias in Mail Surveys". Journal of Marketing Research. 14 (3): 396–402. CiteSeerX 10.1.1.36.7783. doi:10.2307/3150783. JSTOR 3150783.
