Theoretical sampling is a process of data collection for generating theory whereby the analyst jointly collects, codes and analyses data and decides what data to collect next and where to find them, in order to develop the theory as it emerges. [1] The initial stage of data collection depends largely on a general subject or problem area, which is based on the analyst's general perspective of the subject area; the initial decisions are not based on a preconceived theoretical framework. [2] The researcher begins by identifying some key concepts and features to investigate, which provides a foundation for the research. A researcher must be theoretically sensitive so that a theory can be conceptualized and formulated as it emerges from the data being collected. [3] Caution must be taken not to limit oneself to specific aspects of a theory, as this will make a researcher blind to other concepts and aspects of the theory. The main question in this method of sampling is: what groups should the researcher turn to next in the data collection process, and why?
According to Chenitz and Swanson (1986), theoretical sampling emerged with the foundation of grounded theory, which was first developed by Glaser and Strauss in 1967. Grounded theory can be described as a research approach for the collection and analysis of qualitative data for the purpose of generating explanatory theory, in order to understand various social and psychological phenomena. Its focus is to develop a theory from continuous comparative analysis of data collected by theoretical sampling. [4]
The main advantage of theoretical sampling is that it strengthens the rigour of a study that attempts to generate theory in the research area. The application of theoretical sampling provides structure to data collection as well as data analysis. It is based on the need to collect more data to examine categories and their relationships, and it ensures that the categories are representative. [5] Theoretical sampling has inductive as well as deductive characteristics. [6] It is very flexible, as the researcher can shift plans and emphases early in the research process so that the data gathered reflect what is occurring in the field. [7]
Certain disadvantages may be associated with this sampling method. Because it is highly systematic, theoretical sampling requires more resources, such as time and money, than other sampling methods. [8] It is also a complicated method that is not easy to grasp: to achieve depth in a developing category, the researcher proceeds to another location to increase the category's breadth, a process that can be confusing and problematic for the novice researcher. [9]
While discussing theoretical sampling, there are three features that must be considered:
1. Choosing cases in terms of the theory
In this feature, the sample is constructed with reference to an ideal or wider universe, a larger comprehension or social explanation according to which the researcher is able to construct their theory. This theoretical universe allows for better-formulated samples that are more meaningful than others, and such a sample will also be more widely representative. In this type of sampling, we therefore select samples that exhibit particular processes, examples, categories and even types that are relevant to the ideal or wider universe. One of the most commonly given examples is discourse analysis of gender. The sampling-relevant units in qualitative research are very often viewed as theoretically defined. This means that attributes such as gender, ethnicity and even age cannot simply be taken as the basis for a sample, because such attributions are most often themselves the topic of the research. [10]
2. Choosing deviant cases
A leading principle of theoretical sampling is that the researcher does not choose cases that support their own argument. Researchers need to overcome the tendency to select cases and instances that support their side of the argument; instead, it is more beneficial to look for negative instances and deviant cases as defined by the theory being developed. A researcher should not exclude anything from the research simply because it seems impossible: if it can be imagined, it should not be treated as impossible. [11]
3. Changing the size of your sample during the course of the research
The first two features of a theoretical sample deal with issues at the very beginning of the research project; the third concerns its application during the course of the research. One of the advantages qualitative research as a whole has over quantitative research is its flexibility. In theoretical sampling, the researcher may change the theory, the sampling activities and the analysis during the course of the research. Flexibility occurs in this style of sampling when the researcher wants to increase the sample size because of new factors that arise during the research. It also occurs when the researcher wishes to use a small sample during the initial stages of the research but to increase the sample size to test developing generalizations. Finally, flexibility is also allowed when the researcher finds an unexpected generalization and wants to examine deviant cases. [12]
In theoretical sampling, there are two main criteria for initial data collection: a general sociological perspective and a problem area. Collection criteria for later stages cannot be planned in advance, as they emerge as the theory evolves.
Which groups are included? Multiple comparison groups are often used to study this. The groups are chosen on the basis of theoretical criteria or relevance. Sociologists or researchers often evade the problem by studying only one group and trying to describe its subgroups; frequently the differences among the groups or subgroups are merely stated, and no theoretical analysis is conducted. One advantage here is that the analyst has the liberty to adjust their control of data collection to ensure that the data are relevant to the emerging theory. Also, groups are usually chosen for a single comparison only, so there is usually no pre-planned or definite set of groups for all the categories. It is also almost impossible to cite the number and type of groups until the research is completed. One major difference from comparative analysis is that comparative analysis focuses on verification and description using accurate evidence. [13]
Why are groups selected? Comparing groups gives the researcher the advantage of developing a variety of categories. The main criterion is that the data collected should apply to a particular category or property, irrespective of the differences or similarities. The researcher's main focus is to keep the purpose of the research clear. As the researcher compares groups, s/he gains control over two scales of generality: the conceptual level and the population scope. Differences and similarities can be either maximised or minimised, depending on the type of groups being compared. This gives the researcher more control and helps them discover more categories, which in turn helps them develop and relate more theoretical properties, enhancing the emerging theory. When the researcher minimises differences among groups, s/he is able to establish a definite set of conditions under which a category exists; when maximising them, s/he is able to gather a variety of data with strategic similarities among the groups. Generally in theoretical sampling, the researcher aims at maximising differences, as this brings about greater coverage of the variation among different aspects, making the theory more elaborate. [14]
How are the groups selected? The researcher should actively search for data that are theoretically relevant. Rather than focusing on the group, greater focus should be placed on the emerging theory. The larger the contrast between the groups, the greater the probability of an evident comparison between them. As the research progresses and the researcher studies the same group or different subgroups, s/he arrives at a few categories which, on saturation, generate the theory. [15]
Theoretical sampling is used, in the first instance, for the pragmatic purpose of generating theory; developing an extensive, well-defined theory in any field through research depends on it. The researcher focuses first on the problem area and then on the various approaches that rest on the basis of grounded theory. For example, how confidence men handle prospective marks, how policemen act toward people of African descent, or what happens to students in medical school that turns them into doctors depends on the theoretical framework that the researcher arrives with. [16] Theoretical sampling helps in exploring dormant research questions that eventually become evident in the data collection as a theory. According to Glaser and Holton (2004), grounded theory, with its data-collecting inclination towards theoretical sampling, was first derived from qualitative sampling. Theoretical sampling methods are now considered a diluted version of grounded theory and are used in health care research, where a researcher may want to find out the different reasons for a particular illness occurring in a particular kind of population. [17] According to Sandelowski (1995), although theoretical sampling is often misconstrued as purposive sampling, their uses vary to a large extent. The selection criteria for participants in theoretical sampling also change according to the needs and changes that occur in the theoretical study at a given time. Theoretical sampling is considered to be purpose driven, carried out explicitly on the basis of an emerging theory. [18] The main focus of theoretical research is its development through constant comparative analysis of data gained through theoretical sampling, for a better understanding of the theory produced. [19]
The concept of saturation was first defined in the context of grounded theory as theoretical saturation. In qualitative research, the word saturation is used almost interchangeably with data saturation, thematic saturation, theoretical saturation and conceptual saturation. Saturation can be simply defined as data satisfaction: the point at which the researcher obtains no new information from further data.
The saturation point determines the sample size in qualitative research, as it indicates that adequate data have been collected for a detailed analysis. However, there are no fixed sizes or standard tests that determine the data required to reach saturation. For example, in many phenomenographic studies, theoretical saturation is often reached after 15 to 30 participants, [20] whereas other methods may require far fewer or far more.
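Although there is no standard test for saturation, the idea of "no new information from further data" can be illustrated informally. The following Python sketch is a hypothetical illustration only, not a prescribed procedure: the interview codes and the stopping rule (two consecutive interviews yielding no new codes) are invented for the example.

```python
# Minimal sketch: tracking whether successive coded interviews keep
# producing codes not seen before. All codes below are hypothetical.

coded_interviews = [
    {"loss", "coping", "family support"},
    {"coping", "isolation"},
    {"family support", "faith"},
    {"isolation", "faith"},
    {"coping", "loss"},
]

seen_codes = set()
interviews_without_new_codes = 0

for i, codes in enumerate(coded_interviews, start=1):
    new_codes = codes - seen_codes      # codes appearing for the first time
    seen_codes |= codes
    if new_codes:
        interviews_without_new_codes = 0
        print(f"Interview {i}: new codes {sorted(new_codes)}")
    else:
        interviews_without_new_codes += 1
        print(f"Interview {i}: no new codes")
    # Illustrative stopping rule, not a standard threshold:
    if interviews_without_new_codes >= 2:
        print(f"Tentative saturation reached after interview {i}")
        break
```

In practice, judgements about saturation rest on the analyst's assessment of whether categories are conceptually developed, not merely on counting codes; the sketch only makes the "no new information" idea concrete.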
An example of theoretical sampling is best described by Glaser and Strauss in the 1960s, in a memo from their research for "Awareness of Dying". It explains how the search for data remains active throughout the research process as the researcher keeps probing into other relevant theoretical questions: "Visits to the various medical services were scheduled as follows: I wished first to look at services that minimized patient awareness (and so first looked at premature baby service and then a neurosurgical service where patients were frequently comatose). I wished next to look at dying in a situation where expectancy of the staff was great and dying was quick, so I observed on an Intensive Care Unit. Then I wished to observe on a service where staff expectations of terminality were great but where patients might or might not be, and where dying tended to be slow. So I looked next at a cancer service. I wished then to look at conditions where death was unexpected and rapid, and so looked at an emergency service. While we were looking at some different types of services, we also observed the above types of services at other types of hospitals. So, our scheduling of types of services was directed by a general conceptual scheme - which included hypotheses about awareness, expectedness and rate of dying - as well as by a developing conceptual structure including matters not at first envisioned. Sometimes we returned to services after the initial two or three or four weeks of continuous observation, in order to check upon items which needed checking or had been missed in the initial period." [21]
Sampling is the use of a subset of the population to represent the whole population or to inform about (social) processes that are meaningful beyond the particular cases, individuals or sites studied. Probability sampling, or random sampling, is a sampling technique in which the probability of getting any particular sample may be calculated. In cases where external validity is not of critical importance to the study's goals or purpose, researchers might prefer to use nonprobability sampling. Nonprobability sampling does not meet this criterion. Nonprobability sampling techniques are not intended to be used to infer from the sample to the general population in statistical terms. Instead, for example, grounded theory can be produced through iterative nonprobability sampling until theoretical saturation is reached.
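To make the contrast with nonprobability approaches concrete: under simple random sampling without replacement, the probability of obtaining any one particular sample, and the inclusion probability of any one individual, have simple closed forms. The short Python sketch below illustrates this calculability; the population and sample sizes are arbitrary example values.

```python
# Minimal sketch: selection probabilities under simple random sampling
# without replacement. The probability of drawing one specific sample of
# size n from a population of size N is 1 / C(N, n); each individual's
# inclusion probability is n / N. N and n here are example values.

from math import comb

N = 1000   # population size (example value)
n = 50     # sample size (example value)

p_particular_sample = 1 / comb(N, n)
p_individual_included = n / N

print(f"Probability of one specific sample of {n} from {N}: {p_particular_sample:.3e}")
print(f"Inclusion probability for any one individual: {p_individual_included:.2%}")
```

No analogous calculation is available for theoretical or other nonprobability sampling, which is why such samples are not used for statistical inference from sample to population.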
A case study is an in-depth, detailed examination of a particular case within a real-world context. For example, case studies in medicine may focus on an individual patient or ailment; case studies in business might cover a particular firm's strategy or a broader market; similarly, case studies in politics can range from a narrow happening over time to an enormous undertaking.
Qualitative psychological research is psychological research that employs qualitative methods.
Participant observation is one type of data collection method used by practitioner-scholars, typically in qualitative research and ethnography. This type of methodology is employed in many disciplines, particularly anthropology, sociology, communication studies, human geography, and social psychology. Its aim is to gain a close and intimate familiarity with a given group of individuals and their practices through an intensive involvement with people in their cultural environment, usually over an extended period of time.
Qualitative research relies on data obtained by the researcher from first-hand observation, interviews, questionnaires, focus groups, participant-observation, recordings made in natural settings, documents, case studies, and artifacts. The data are generally nonnumerical. Qualitative methods include ethnography, grounded theory, discourse analysis, and interpretative phenomenological analysis. Qualitative research methods have been used in sociology, anthropology, political science, psychology, social work, and educational research. Qualitative researchers study individuals' understanding of their social reality.
Social research is research conducted by social scientists following a systematic plan. Social research methodologies can be classified as quantitative or qualitative.
Anselm Leonard Strauss was an American sociologist professor at the University of California, San Francisco (UCSF) internationally known as a medical sociologist and as the developer of grounded theory, an innovative method of qualitative analysis widely used in sociology, nursing, education, social work, and organizational studies. He also wrote extensively on Chicago sociology/symbolic interactionism, sociology of work, social worlds/arenas theory, social psychology and urban imagery. He published over 30 books, chapters in over 30 other books, and over 70 journal articles.
In its most common sense, methodology is the study of research methods. However, the term can also refer to the methods themselves or to the philosophical discussion of associated background assumptions. A method is a structured procedure for bringing about a certain goal. In the context of research, this goal is usually to discover new knowledge or to verify pre-existing knowledge claims. This normally involves various steps, like choosing a sample, collecting data from this sample, and interpreting this data. The study of methods involves a detailed description and analysis of these processes. It includes evaluative aspects by comparing different methods to assess their advantages and disadvantages relative to different research goals and situations. This way, a methodology can help make the research process efficient and reliable by guiding researchers on which method to employ at each step. These descriptions and evaluations of methods often depend on philosophical background assumptions. The assumptions are about issues like how the studied phenomena are to be conceptualized, what constitutes evidence for or against them, and what the general goal of research is. When understood in the widest sense, methodology also includes the discussion of these more abstract issues.
Grounded theory is a systematic methodology that has been largely applied to qualitative research conducted by social scientists. The methodology involves the construction of hypotheses and theories through the collection and analysis of data. Grounded theory involves the application of inductive reasoning. The methodology contrasts with the hypothetico-deductive model used in traditional scientific research.
Exploratory research is "the preliminary research to clarify the exact nature of the problem to be solved." It is used to ensure additional research is taken into consideration during an experiment, as well as to determine research priorities, collect data and home in on certain subjects which may be difficult to take note of without exploratory research.
The Discovery of Grounded Theory is a 1967 book (ISBN 0-202-30260-1) by Barney Glaser and Anselm Strauss on grounded theory.
Research design refers to the overall strategy utilized to carry out research that defines a succinct and logical plan to tackle established research question(s) through the collection, interpretation, analysis, and discussion of data.
Axial coding is the breaking down of core themes during qualitative data analysis. Axial coding in grounded theory is the process of relating codes to each other, via a combination of inductive and deductive thinking. The basic framework of generic relationships is understood, according to Strauss and Corbin who propose the use of a "coding paradigm", to include categories related to (1) the phenomenon under study, (2) the conditions related to that phenomenon, (3) the actions and interactional strategies directed at managing or handling the phenomenon and (4) the consequences of the actions/interactions related to the phenomenon. As Kelle underlines, the implicit or explicit theoretical framework necessary to identify categories in empirical data is derived, in the procedures explicated by Strauss and Corbin (1990), from a "general model of action rooted in pragmatist and interactionist social theory". This model or theoretical framework underlines the importance of "analysing and modelling action and interaction strategies of the actors". Axial coding is a cornerstone of Strauss and Corbin's approach but is regarded by Charmaz (2006) as highly structured and optional.
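As an informal illustration of the coding paradigm described above, the following Python sketch represents the four categories as fields of a simple data structure. The example content (a hypothetical study of managing chronic illness at work) is invented purely for illustration and is not drawn from Strauss and Corbin.

```python
# Minimal sketch: Strauss and Corbin's coding paradigm as a data structure.
# Field names mirror the four categories named above; all content is hypothetical.

from dataclasses import dataclass, field
from typing import List

@dataclass
class AxialCode:
    phenomenon: str                                        # the phenomenon under study
    conditions: List[str] = field(default_factory=list)    # conditions related to it
    strategies: List[str] = field(default_factory=list)    # action/interactional strategies
    consequences: List[str] = field(default_factory=list)  # consequences of those actions

example = AxialCode(
    phenomenon="managing a chronic illness at work",
    conditions=["unpredictable symptoms", "limited workplace flexibility"],
    strategies=["selective disclosure to colleagues", "pacing of tasks"],
    consequences=["reduced stigma", "renegotiated workload"],
)

print(example.phenomenon)
for strategy in example.strategies:
    print("strategy:", strategy)
```

The point of the sketch is only that axial coding relates codes to one another around a central phenomenon; in practice, these relationships emerge from the data rather than from a fixed schema.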
ATLAS.ti is a computer-assisted qualitative data analysis software that facilitates analysis of qualitative data for qualitative research, quantitative research, and mixed methods research.
Interpretative phenomenological analysis (IPA) is a qualitative form of psychology research. IPA has an idiographic focus, which means that instead of producing generalizable findings, it aims to offer insights into how a given person, in a given context, makes sense of a given situation. Usually, these situations are of personal significance; examples might include a major life event, or the development of an important relationship. IPA has its theoretical origins in phenomenology and hermeneutics, and many of its key ideas are inspired by the work of Edmund Husserl, Martin Heidegger, and Maurice Merleau-Ponty. IPA's tendency to combine psychological, interpretative, and idiographic elements is what distinguishes it from other approaches to qualitative, phenomenological psychology.
In the social sciences, coding is an analytical process in which data, whether in quantitative or qualitative form, are categorized to facilitate analysis.
Phenomenography is a qualitative research methodology, within the interpretivist paradigm, that investigates the qualitatively different ways in which people experience something or think about something. It is an approach to educational research which appeared in publications in the early 1980s. It initially emerged from an empirical rather than a theoretical or philosophical basis.
Thematic analysis is one of the most common forms of analysis within qualitative research. It emphasizes identifying, analysing and interpreting patterns of meaning within qualitative data. Thematic analysis is often understood as a method or technique in contrast to most other qualitative analytic approaches - such as grounded theory, discourse analysis, narrative analysis and interpretative phenomenological analysis - which can be described as methodologies or theoretically informed frameworks for research. Thematic analysis is best thought of as an umbrella term for a variety of different approaches, rather than a singular method. Different versions of thematic analysis are underpinned by different philosophical and conceptual assumptions and are divergent in terms of procedure. Leading thematic analysis proponents, psychologists Virginia Braun and Victoria Clarke distinguish between three main types of thematic analysis: coding reliability approaches, code book approaches and reflexive approaches. They describe their own widely used approach first outlined in 2006 in the journal Qualitative Research in Psychology as reflexive thematic analysis. Their 2006 paper has over 120,000 Google Scholar citations and according to Google Scholar is the most cited academic paper published in 2006. The popularity of this paper exemplifies the growing interest in thematic analysis as a distinct method.
Biographical research is a qualitative research approach aligned to the social interpretive paradigm of research. It is concerned with the reconstruction of life histories and the constitution of meaning based on biographical narratives and documents. The material for analysis consists of interview protocols (memorandums), video recordings, photographs, and a diversity of other sources. These documents are evaluated and interpreted according to specific rules and criteria. The starting point for this approach is the understanding of an individual biography in terms of its social constitution. The biographical approach was influenced by symbolic interactionism, the phenomenological sociology of knowledge, and ethnomethodology. Biography is therefore understood as a social construct, and the reconstruction of biographies can give insight into social processes and figurations, thus helping to bridge the gap between micro-, meso-, and macro-levels of analysis. The biographical approach is particularly important in German sociology. This approach is used in the social sciences as well as in pedagogy and other disciplines. The Research Committee 38 "Biography and Society" of the International Sociological Association (ISA) was created in 1984 and is dedicated "to help develop a better understanding of the relations between individual lives, the social structures and historical processes within which they take shape and which they contribute to shape, and the individual accounts of biographical experience".
Kathleen Marian Charmaz was the developer of Constructivist Grounded Theory, a major research method in qualitative research internationally and across many disciplines and professions. She was professor emerita of Sociology at Sonoma State University, Rohnert Park, California, and former Director of its Faculty Writing Program. Charmaz’s background was in occupational therapy and sociology. Charmaz’s areas of expertise included grounded theory, symbolic interactionism, chronicity, death and dying, qualitative health research, scholarly writing, sociological theory, social psychology, research methods, health and medicine, aging, sociology of emotions, and the body.