Self-report study

A self-report study is a type of survey, questionnaire, or poll in which respondents read the question and select a response by themselves without any outside interference. [1] A self-report is any method which involves asking a participant about their feelings, attitudes, beliefs and so on. Examples of self-reports are questionnaires and interviews; self-reports are often used as a way of gaining participants' responses in observational studies and experiments.

Self-report studies have validity problems. [2] Patients may exaggerate symptoms in order to make their situation seem worse, or they may under-report the severity or frequency of symptoms in order to minimize their problems. Patients might also simply be mistaken or misremember the material covered by the survey.

Questionnaires and interviews

Questionnaires are a type of self-report method which consist of a set of questions, usually in a highly structured written form. Questionnaires can contain both open and closed questions, and participants record their own answers. Interviews are a type of spoken questionnaire in which the interviewer records the responses. Interviews can be structured, whereby there is a predetermined set of questions, or unstructured, whereby no questions are decided in advance.

The main strength of self-report methods is that they allow participants to describe their own experiences rather than having these inferred from observation. Questionnaires and interviews can often study large samples of people fairly easily and quickly, and they can examine a large number of variables and ask people to reveal behaviour and feelings which have been experienced in real situations. However, participants may not respond truthfully, either because they cannot remember or because they wish to present themselves in a socially acceptable manner. Social desirability bias can be a big problem with self-report measures, as participants often answer in a way that portrays themselves in a good light. Questions are not always clear, and if the respondent has not really understood a question, the data collected will not be valid. If questionnaires are sent out, say via email or through tutor groups, the response rate can be very low. Questions can also be leading; that is, they may unwittingly push the respondent towards a particular reply.

Unstructured interviews can be very time-consuming and difficult to carry out, whereas structured interviews can restrict the respondents' replies. Therefore psychologists often carry out semi-structured interviews, which consist of some pre-determined questions followed up with further questions that allow the respondent to develop their answers.

Open and closed questions

Questionnaires and interviews can use open or closed questions or both.

Closed questions are questions that provide a limited choice (for example, a participant's age or their favorite football team), especially if the answer must be taken from a predetermined list. Such questions provide quantitative data, which is easy to analyze. However, these questions do not allow the participant to give in-depth insights.

Open questions are those questions that invite the respondent to provide answers in their own words and provide qualitative data. Although these types of questions are more difficult to analyze, they can produce more in-depth responses and tell the researcher what the participant actually thinks, rather than being restricted by categories.
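
As a rough illustration of this difference, the sketch below (Python; all responses are invented for this example) shows that closed-question answers can be tallied immediately, whereas open-question answers must first be read and coded before any counting is possible:

    from collections import Counter

    # Hypothetical answers to a closed question with predetermined age bands.
    closed_responses = ["18-24", "25-34", "18-24", "35-44"]

    # Hypothetical answers to an open question: free text that needs qualitative
    # coding before it can be summarised numerically.
    open_responses = [
        "I mostly watch matches with friends at weekends.",
        "I lost interest in football after the local club folded.",
    ]

    print(Counter(closed_responses))  # a quantitative summary is immediate
    print(len(open_responses))        # open answers must be interpreted first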

Rating scales

One of the most common rating scales is the Likert scale. A statement is used and the participant decides how strongly they agree or disagree with the statement. For example, the participant decides whether "Mozzarella cheese is great" with the options "strongly agree", "agree", "undecided", "disagree", and "strongly disagree". One strength of Likert scales is that they can give an idea of how strongly a participant feels about something, and therefore give more detail than a simple yes/no answer. Another strength is that the data are quantitative and easy to analyse statistically. However, there is a tendency with Likert scales for people to respond towards the middle of the scale, perhaps to make themselves look less extreme. As with any questionnaire, participants may provide the answers that they feel they should, and because the data are quantitative, they do not provide in-depth replies.
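
As a minimal sketch (in Python) of how such responses become quantitative data, assuming the conventional 1-to-5 coding of the agree/disagree options and invented answers:

    # Conventional numeric coding of the five Likert options.
    LIKERT_CODES = {
        "strongly disagree": 1,
        "disagree": 2,
        "undecided": 3,
        "agree": 4,
        "strongly agree": 5,
    }

    # One respondent's (hypothetical) answers to four statements.
    responses = ["agree", "strongly agree", "undecided", "agree"]

    scores = [LIKERT_CODES[r] for r in responses]
    mean_score = sum(scores) / len(scores)
    print(scores, mean_score)  # [4, 5, 3, 4] 4.0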

Fixed-choice questions

Fixed-choice questions are phrased so that the respondent has to make a fixed-choice answer, usually 'yes' or 'no'.

This type of questionnaire is easy to measure and quantify. It also prevents a participant from choosing an option that is not in the list. However, respondents may not feel that their desired response is available. For example, a person who dislikes all alcoholic beverages may feel that it is inaccurate to choose a favorite alcoholic beverage from a list that includes beer, wine, and liquor but does not include "none of the above" as an option. Answers to fixed-choice questions are also not in-depth.
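
Because every answer falls into one of a small number of predetermined options, quantifying fixed-choice data amounts to simple tallying. A minimal sketch with invented "yes"/"no" answers:

    from collections import Counter

    # Hypothetical fixed-choice ("yes"/"no") answers from six respondents.
    answers = ["yes", "no", "yes", "yes", "no", "yes"]

    counts = Counter(answers)
    proportion_yes = counts["yes"] / len(answers)
    print(counts["yes"], counts["no"], round(proportion_yes, 2))  # 4 2 0.67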

Reliability

Reliability refers to how consistent a measuring device is. A measurement is said to be reliable or consistent if it produces similar results when used again in similar circumstances. For example, a speedometer that gave the same readings at the same speed would be reliable; if it did not, it would be unreliable and of little use. Importantly, the reliability of self-report measures, such as psychometric tests and questionnaires, can be assessed using the split-half method. This involves splitting a test into two halves and having the same participant complete both; if the test is reliable, the scores on the two halves should be similar.
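
A minimal sketch of the split-half idea, using invented item scores and an odd/even split of the items; the Spearman-Brown step at the end is the standard correction for estimating the reliability of the full-length test rather than of a single half:

    def pearson(x, y):
        """Pearson correlation between two equal-length lists of numbers."""
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sx = sum((a - mx) ** 2 for a in x) ** 0.5
        sy = sum((b - my) ** 2 for b in y) ** 0.5
        return cov / (sx * sy)

    # Each row holds one participant's (hypothetical) scores on a 6-item test.
    item_scores = [
        [4, 5, 4, 4, 5, 4],
        [2, 1, 2, 3, 2, 2],
        [3, 3, 4, 2, 3, 4],
        [5, 4, 5, 5, 4, 5],
    ]

    odd_totals = [sum(row[0::2]) for row in item_scores]   # items 1, 3, 5
    even_totals = [sum(row[1::2]) for row in item_scores]  # items 2, 4, 6

    r = pearson(odd_totals, even_totals)
    split_half_reliability = 2 * r / (1 + r)  # Spearman-Brown correction
    print(round(r, 2), round(split_half_reliability, 2))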

Validity

Validity refers to whether a study measures or examines what it claims to measure or examine. Questionnaires are said to often lack validity for a number of reasons: participants may lie, give answers that they think are desired, and so on. A way of assessing the validity of self-report measures is to compare the results of the self-report with another self-report on the same topic (this is called concurrent validity). For example, if an interview is used to investigate sixth-grade students' attitudes toward smoking, the scores could be compared with those from an earlier questionnaire on sixth graders' attitudes toward smoking.
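
In practice, concurrent validity is commonly summarised as the correlation between the two sets of scores. A minimal sketch (the scores are invented; statistics.correlation requires Python 3.10 or later):

    from statistics import correlation  # Python 3.10+

    # Hypothetical attitude scores for the same six respondents obtained with
    # two different self-report methods (interview and questionnaire).
    interview_scores = [12, 18, 9, 21, 15, 7]
    questionnaire_scores = [14, 17, 10, 22, 13, 8]

    r = correlation(interview_scores, questionnaire_scores)
    print(round(r, 2))  # a high positive correlation suggests the measures agree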

Results of self-report studies have been confirmed by other methods. For example, results of prior self-reported outcomes were confirmed by studies involving smaller participant populations that used direct observation strategies. [3]

The overarching question asked regarding this strategy is, "Why would the researcher trust what people say about themselves?" [4] When there is a challenge to the validity of the collected data, however, there are research tools that can be used to address the problem of respondent bias in self-report studies. These include constructing inventories that minimize respondent distortion, such as scales to assess the participant's attitude, measure personal bias, and identify levels of resistance, confusion, and insufficient self-reporting time, among others. [5] Leading questions can also be avoided, open questions can be added to allow respondents to expand upon their replies, and confidentiality can be reinforced so that respondents give more truthful responses.

Disadvantages

Self-report studies have many advantages, but they also suffer from specific disadvantages due to the way that subjects generally behave. [6] Self-reported answers may be exaggerated; [7] respondents may be too embarrassed to reveal private details; and various biases, such as social-desirability bias, may affect the results. There are also cases in which respondents guess the hypothesis of the study and provide biased responses that 1) confirm the researcher's conjecture; 2) make them look good; or 3) make them appear more distressed in order to receive promised services. [5]

Subjects may also forget pertinent details. Self-report studies are inherently biased by the person's feelings at the time they fill out the questionnaire. If a person feels bad at that time, for example, their answers will be more negative; if the person feels good, the answers will be more positive.

As with all studies relying on voluntary participation, results can be biased by a lack of respondents, if there are systematic differences between people who respond and people who do not. Care must be taken to avoid biases due to interviewers and their demand characteristics.

Related Research Articles

<span class="mw-page-title-main">Interview</span> Structured series of questions and answers

An interview is a structured conversation where one participant asks questions, and the other provides answers. In common parlance, the word "interview" refers to a one-on-one conversation between an interviewer and an interviewee. The interviewer asks questions to which the interviewee responds, usually providing information. That information may be used or provided to other audiences immediately or later. This feature is common to many types of interviews – a job interview or interview with a witness to an event may have no other audience present at the time, but the answers will be later provided to others in the employment or investigative process. An interview may also transfer information in both directions.

Questionnaire construction refers to the design of a questionnaire to gather statistically useful information about a given topic. When properly constructed and responsibly administered, questionnaires can provide valuable data about any given subject.

Survey methodology is "the study of survey methods". As a field of applied statistics concentrating on human-research surveys, survey methodology studies the sampling of individual units from a population and associated techniques of survey data collection, such as questionnaire construction and methods for improving the number and accuracy of responses to surveys. Survey methodology targets instruments or procedures that ask one or more questions that may or may not be answered.

Quantitative marketing research is the application of quantitative research techniques to the field of marketing research. It has roots in both the positivist view of the world, and the modern marketing viewpoint that marketing is an interactive process in which both the buyer and seller reach a satisfying agreement on the "four Ps" of marketing: Product, Price, Place (location) and Promotion.

<span class="mw-page-title-main">Likert scale</span> Psychometric measurement scale

A Likert scale is a psychometric scale named after its inventor, American social psychologist Rensis Likert, which is commonly used in research questionnaires. It is the most widely used approach to scaling responses in survey research, such that the term is often used interchangeably with rating scale, although there are other types of rating scales.

<span class="mw-page-title-main">Personality test</span> Method of assessing human personality constructs

A personality test is a method of assessing human personality constructs. Most personality assessment instruments are in fact introspective self-report questionnaire measures or reports from life records (L-data) such as rating scales. Attempts to construct actual performance tests of personality have been very limited even though Raymond Cattell with his colleague Frank Warburton compiled a list of over 2000 separate objective tests that could be used in constructing objective personality tests. One exception however, was the Objective-Analytic Test Battery, a performance test designed to quantitatively measure 10 factor-analytically discerned personality trait dimensions. A major problem with both L-data and Q-data methods is that because of item transparency, rating scales and self-report questionnaires are highly susceptible to motivational and response distortion ranging all the way from lack of adequate self-insight to downright dissimulation depending on the reason/motivation for the assessment being undertaken.

<span class="mw-page-title-main">Questionnaire</span> Series of questions for gathering information

A questionnaire is a research instrument that consists of a set of questions for the purpose of gathering information from respondents through a survey or statistical study. A research questionnaire is typically a mix of close-ended and open-ended questions. Open-ended, long-form questions offer the respondent the ability to elaborate on their thoughts. The research questionnaire was developed by the Statistical Society of London in 1838.

<span class="mw-page-title-main">Response bias</span> Type of bias

Response bias is a general term for a wide range of tendencies for participants to respond inaccurately or falsely to questions. These biases are prevalent in research involving participant self-report, such as structured interviews or surveys. Response biases can have a large impact on the validity of questionnaires or surveys.

In psychology, ipsative questionnaires are those where the sum of scale scores from each respondent adds to a constant value. Sometimes called a forced-choice scale, this measure contrasts with Likert-type scales, in which respondents score—often from 1 to 5—how much they agree with a given statement.

Computer-assisted personal interviewing (CAPI) is an interviewing technique in which the respondent or interviewer uses an electronic device to answer the questions. It is similar to computer-assisted telephone interviewing, except that the interview takes place in person instead of over the telephone. This method is usually preferred over a telephone interview when the questionnaire is long and complex. It has been classified as a personal interviewing technique because an interviewer is usually present to serve as a host and to guide the respondent. If no interviewer is present, the term Computer-Assisted Self Interviewing (CASI) may be used. An example of a situation in which CAPI is used as the method of data collection is the British Crime Survey.

Personality Assessment Inventory (PAI), developed by Leslie Morey, is a self-report 344-item personality test that assesses a respondent's personality and psychopathology. Each item is a statement about the respondent that the respondent rates with a 4-point scale. It is used in various contexts, including psychotherapy, crisis/evaluation, forensic, personnel selection, pain/medical, and child custody assessment. The test construction strategy for the PAI was primarily deductive and rational. It shows good convergent validity with other personality tests, such as the Minnesota Multiphasic Personality Inventory and the Revised NEO Personality Inventory.

In social science research, social-desirability bias is a type of response bias that is the tendency of survey respondents to answer questions in a manner that will be viewed favorably by others. It can take the form of over-reporting "good behavior" or under-reporting "bad", or undesirable behavior. The tendency poses a serious problem with conducting research with self-reports. This bias interferes with the interpretation of average tendencies as well as individual differences.

A rating scale is a set of categories designed to elicit information about a quantitative or a qualitative attribute. In the social sciences, particularly psychology, common examples are the Likert response scale and 1-10 rating scales in which a person selects the number that is considered to reflect the perceived quality of a product.

Acquiescence bias, also known as agreement bias, is a category of response bias common to survey research in which respondents have a tendency to select a positive response option or indicate a positive connotation disproportionately more frequently. Respondents do so without considering the content of the question or their 'true' preference. Acquiescence is sometimes referred to as "yea-saying" and is the tendency of a respondent to agree with a statement when in doubt. Questions affected by acquiescence bias take the following format: a stimulus in the form of a statement is presented, followed by 'agree/disagree,' 'yes/no' or 'true/false' response options. For example, a respondent might be presented with the statement "gardening makes me feel happy," and would then be expected to select either 'agree' or 'disagree.' Such question formats are favoured by both survey designers and respondents because they are straightforward to produce and respond to. The bias is particularly prevalent in the case of surveys or questionnaires that employ truisms as the stimuli, such as: "It is better to give than to receive" or "Never a lender nor a borrower be". Acquiescence bias can introduce systematic errors that affect the validity of research by confounding attitudes and behaviours with the general tendency to agree, which can result in misguided inference. Research suggests that the proportion of respondents who carry out this behaviour is between 10% and 20%.

Computer-assisted web interviewing (CAWI) is an Internet surveying technique in which the interviewee follows a script provided in a website. The questionnaires are made in a program for creating web interviews. The program allows for the questionnaire to contain pictures, audio and video clips, links to different web pages, etc. The website is able to customize the flow of the questionnaire based on the answers provided, as well as information already known about the participant. It is considered to be a cheaper way of surveying since one doesn't need to use people to hold surveys unlike computer-assisted telephone interviewing. With the increasing use of the Internet, online questionnaires have become a popular way of collecting information. The design of an online questionnaire has a dramatic effect on the quality of data gathered. There are many factors in designing an online questionnaire; guidelines, available question formats, administration, quality and ethic issues should be reviewed. Online questionnaires should be seen as a sub-set of a wider-range of online research methods.

<span class="mw-page-title-main">Unstructured interview</span> Interview in which questions are not prearranged.

An unstructured interview or non-directive interview is an interview in which questions are not prearranged. These non-directive interviews are considered to be the opposite of a structured interview which offers a set amount of standardized questions. The form of the unstructured interview varies widely, with some questions being prepared in advance in relation to a topic that the researcher or interviewer wishes to cover. They tend to be more informal and free flowing than a structured interview, much like an everyday conversation. Probing is seen to be the part of the research process that differentiates the in-depth, unstructured interview from an everyday conversation. This nature of conversation allows for spontaneity and for questions to develop during the course of the interview, which are based on the interviewees' responses. The chief feature of the unstructured interview is the idea of probe questions that are designed to be as open as possible. It is a qualitative research method and accordingly prioritizes validity and the depth of the interviewees' answers. One of the potential drawbacks is the loss of reliability, thereby making it more difficult to draw patterns among interviewees' responses in comparison to structured interviews. Unstructured interviews are used in a variety of fields and circumstances, ranging from research in social sciences, such as sociology, to college and job interviews. Fontana and Frey have identified three types of in depth, ethnographic, unstructured interviews - oral history, creative interviews, and post-modern interviews.

Mode effect is a broad term referring to a phenomenon where a particular survey administration mode causes different data to be collected. For example, when asking a question using two different modes, responses to one mode may be significantly and substantially different from responses given in the other mode. Mode effects are a methodological artifact, limiting the ability to compare results from different modes of collection.

With the application of probability sampling in the 1930s, surveys became a standard tool for empirical research in social sciences, marketing, and official statistics. The methods involved in survey data collection are any of a number of ways in which data can be collected for a statistical survey. These are methods that are used to collect information from a sample of individuals in a systematic way. First there was the change from traditional paper-and-pencil interviewing (PAPI) to computer-assisted interviewing (CAI). Now, face-to-face surveys (CAPI), telephone surveys (CATI), and mail surveys are increasingly replaced by web surveys. In addition, remote interviewers could possibly keep the respondent engaged while reducing cost as compared to in-person interviewers.

The Levenson Self-Report Psychopathy scale (LSRP) is a 26-item, 4-point Likert scale, self-report inventory to measure primary and secondary psychopathy in non-institutionalised populations. It was developed in 1995 by Michael R. Levenson, Kent A. Kiehl and Cory M. Fitzpatrick. The scale was created for the purpose of conducting a psychological study examining antisocial disposition among a sample of 487 undergraduate students attending psychology classes at the University of California, Davis.

<span class="mw-page-title-main">Interview (research)</span> Research technique

An interview in qualitative research is a conversation where questions are asked to elicit information. The interviewer is usually a professional or paid researcher, sometimes trained, who poses questions to the interviewee, in an alternating series of usually brief questions and answers. They can be contrasted with focus groups in which an interviewer questions a group of people and observes the resulting conversation between interviewees, or surveys which are more anonymous and limit respondents to a range of predetermined answer choices. In addition, there are special considerations when interviewing children. In phenomenological or ethnographic research, interviews are used to uncover the meanings of central themes in the life world of the subjects from their own point of view.

References

  1. Victor Jupp, ed. (2006). "Self-Report Study". The SAGE Dictionary of Social Research Methods. doi:10.4135/9780857020116. ISBN 9780761962984.
  2. Althubaiti, Alaa (2016). "Information bias in health research: definition, pitfalls, and adjustment methods".
  3. Orkin, Stuart; Fisher, David; Look, Thomas; Lux, Samuel; Ginsburg, David; Nathan, David (2009). Oncology of Infancy and Childhood. Philadelphia, PA: Elsevier Health Sciences. p. 1258. ISBN 9781416034315.
  4. Robins, Richard; Fraley, Chris; Krueger, Robert (2007). Handbook of Research Methods in Personality Psychology. The Guilford Press. p. 228. ISBN 9781593851118.
  5. Heppner, P. Paul; Wampold, Bruce; Owen, Jesse; Thompson, Mindi; Wang, Kenneth (2016). Research Design in Counseling. Boston, MA: Cengage Learning. p. 334. ISBN 9781305087316.
  6. Garcia, John; Gustavson, Andrew R. (January 1997). "The Science of Self-Report". APS Observer. 10.
  7. Northrup, David A. (Fall 1996). "The Problem of the Self-Report in Survey Research". 11 (3). Institute for Social Research.