Content analysis

Content analysis is the study of documents and communication artifacts, which might be texts of various formats, pictures, audio or video. Social scientists use content analysis to examine patterns in communication in a replicable and systematic manner. [1] One key advantage of using content analysis to analyse social phenomena is its non-invasive nature, in contrast to simulating social experiences or collecting survey answers.

Practices and philosophies of content analysis vary between academic disciplines. They all involve systematic reading or observation of texts or artifacts which are assigned labels (sometimes called codes) to indicate the presence of interesting, meaningful pieces of content. [2] [3] By systematically labeling the content of a set of texts, researchers can analyse patterns of content quantitatively using statistical methods, or use qualitative methods to analyse meanings of content within texts.

Computers are increasingly used in content analysis to automate the labeling (or coding) of documents. Simple computational techniques can provide descriptive data such as word frequencies and document lengths. Machine learning classifiers can greatly increase the number of texts that can be labeled, but the scientific utility of doing so is a matter of debate. Further, numerous computer-aided text analysis (CATA) programs are available that analyze text for pre-determined linguistic, semantic, and psychological characteristics. [4]
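
As a minimal illustration of such descriptive output (not tied to any particular CATA package; the corpus and variable names are invented for the example), a few lines of Python suffice to compute word frequencies and document lengths:

```python
from collections import Counter
import re

# Hypothetical corpus of documents to be analyzed
documents = [
    "The council approved the new budget after a long debate.",
    "Budget cuts dominated the debate in the council meeting.",
]

def tokenize(text):
    """Lowercase a text and split it into word tokens."""
    return re.findall(r"[a-z']+", text.lower())

# Document lengths (in tokens) and corpus-wide word frequencies
lengths = [len(tokenize(doc)) for doc in documents]
frequencies = Counter(token for doc in documents for token in tokenize(doc))

print("Document lengths:", lengths)
print("Most common words:", frequencies.most_common(5))
```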

Goals

Content analysis is best understood as a broad family of techniques. Effective researchers choose techniques that best help them answer their substantive questions. That said, according to Klaus Krippendorff, six questions must be addressed in every content analysis: [5]

  1. Which data are analyzed?
  2. How are the data defined?
  3. From what population are data drawn?
  4. What is the relevant context?
  5. What are the boundaries of the analysis?
  6. What is to be measured?

The simplest and most objective form of content analysis considers unambiguous characteristics of the text such as word frequencies, the page area taken by a newspaper column, or the duration of a radio or television program. Analysis of simple word frequencies is limited because the meaning of a word depends on surrounding text. Key Word In Context (KWIC) routines address this by placing words in their textual context. This helps resolve ambiguities such as those introduced by synonyms and homonyms.
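
A Key Word In Context routine can be sketched in a few lines of Python. The snippet below (a hypothetical illustration; the keyword, window size, and sample sentence are invented) prints each occurrence of a keyword together with a window of surrounding tokens, so a coder can judge which sense of an ambiguous word is meant:

```python
import re

def kwic(text, keyword, window=4):
    """Return each occurrence of keyword with `window` tokens of context on each side."""
    tokens = re.findall(r"\w+", text.lower())
    lines = []
    for i, token in enumerate(tokens):
        if token == keyword.lower():
            left = " ".join(tokens[max(0, i - window):i])
            right = " ".join(tokens[i + 1:i + 1 + window])
            lines.append(f"{left} [{token}] {right}")
    return lines

sample = ("The bank raised interest rates, while protesters gathered "
          "on the river bank to oppose the development plan.")
for line in kwic(sample, "bank"):
    print(line)  # context distinguishes the financial bank from the river bank
```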

A further step in analysis is the distinction between dictionary-based (quantitative) approaches and qualitative approaches. Dictionary-based approaches set up a list of categories derived from the frequency list of words and track the distribution of words and their respective categories across the texts. While methods of quantitative content analysis in this way transform observations of found categories into quantitative statistical data, qualitative content analysis focuses more on intentionality and its implications. There are strong parallels between qualitative content analysis and thematic analysis. [6]
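
A dictionary-based approach can be sketched as follows: a hand-built category dictionary maps words to categories, and a program tallies how often each category appears in each text. The categories and word lists below are purely illustrative assumptions, not an established dictionary:

```python
import re
from collections import Counter

# Illustrative category dictionary: category -> words assumed to indicate it
dictionary = {
    "economy": {"budget", "tax", "inflation", "jobs"},
    "conflict": {"attack", "dispute", "protest", "war"},
}

def code_text(text):
    """Count how many tokens in the text fall into each dictionary category."""
    tokens = re.findall(r"\w+", text.lower())
    counts = Counter()
    for token in tokens:
        for category, words in dictionary.items():
            if token in words:
                counts[category] += 1
    return counts

texts = [
    "The budget dispute raised fears about jobs and inflation.",
    "Protest marches turned into a violent attack near parliament.",
]
for text in texts:
    print(code_text(text))
```

The resulting category counts per text are the kind of data that quantitative content analysis then subjects to statistical analysis.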

Qualitative and quantitative content analysis

Quantitative content analysis highlights frequency counts and statistical analysis of these coded frequencies. [7] Additionally, quantitative content analysis begins with a framed hypothesis, with coding decided on before the analysis begins. These coding categories are strictly relevant to the researcher's hypothesis. Quantitative analysis also takes a deductive approach. [8] Examples of content-analytical variables and constructs can be found in the open-access database DOCA, which compiles, systematizes, and evaluates relevant content-analytical variables of communication and political science research areas and topics.

Siegfried Kracauer provides a critique of quantitative analysis, asserting that it oversimplifies complex communications in order to be more reliable. Qualitative analysis, on the other hand, deals with the intricacies of latent interpretations, whereas quantitative analysis focuses on manifest meanings. He also acknowledges an "overlap" of qualitative and quantitative content analysis. [7] Patterns are looked at more closely in qualitative analysis, and based on the latent meanings that the researcher may find, the course of the research can change. It is inductive and begins with open research questions, as opposed to a hypothesis. [8]

Codebooks

The data collection instrument used in content analysis is the codebook or coding scheme. In qualitative content analysis the codebook is constructed and improved during coding, while in quantitative content analysis the codebook needs to be developed and pretested for reliability and validity before coding. [4] The codebook includes detailed instructions for human coders as well as clear definitions of the respective concepts or variables to be coded, together with the values to be assigned.

According to current standards of good scientific practice, each content analysis study should provide its codebook in the appendix or as supplementary material so that the reproducibility of the study is ensured. On the Open Science Framework (OSF) server of the Center for Open Science, many codebooks from content analysis studies are freely available by searching for "codebook".

Furthermore, the Database of Variables for Content Analysis (DOCA) provides an open access archive of pretested variables and established codebooks for content analyses. [9] Measures from the archive can be adopted in future studies to ensure the use of high-quality and comparable instruments. DOCA covers, among others, measures for the content analysis of fictional media and entertainment (e.g., measures for sexualization in video games [10] ), of user-generated media content (e.g., measures for online hate speech [11] ), and of news media and journalism (e.g., measures for stock photo use in press reporting on child sexual abuse, [12] and measures of personalization in election campaign coverage [13] ).

Computational tools

With the rise of common computing facilities like PCs, computer-based methods of analysis are growing in popularity. [14] [15] [16] Answers to open-ended questions, newspaper articles, political party manifestos, medical records or systematic observations in experiments can all be subjected to systematic analysis of textual data.

When the contents of communication are available as machine-readable text, the input can be analyzed for frequencies and coded into categories to build up inferences.
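
Where enough human-coded examples exist, a machine-learning classifier can extend such coding to many more machine-readable texts. The sketch below is a minimal, hypothetical illustration, assuming the scikit-learn library and invented training texts and labels; it trains a simple bag-of-words classifier and applies it to new documents:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Hypothetical texts already coded by human coders (label = content category)
train_texts = [
    "The government announced new tax cuts for small businesses.",
    "Unemployment figures fell for the third month in a row.",
    "The striker scored twice in the cup final on Saturday.",
    "Fans celebrated the team's championship win downtown.",
]
train_labels = ["economy", "economy", "sport", "sport"]

# Bag-of-words features combined with a naive Bayes classifier
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(train_texts, train_labels)

# Automatically code (label) new, unseen documents
new_texts = ["The league announced a new broadcasting deal.",
             "Inflation is expected to slow next quarter."]
print(model.predict(new_texts))
```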

Computer-assisted analysis can help with large, electronic data sets by saving time and eliminating the need for multiple human coders to establish inter-coder reliability. However, human coders can still be employed for content analysis, as they are often better able to pick out nuanced and latent meanings in text. A study found that human coders were able to evaluate a broader range of content and make inferences based on latent meanings. [17]

Reliability and validity

Robert Weber notes: "To make valid inferences from the text, it is important that the classification procedure be reliable in the sense of being consistent: Different people should code the same text in the same way". [18] Validity, inter-coder reliability and intra-coder reliability have been the subject of intense methodological research over many years. [5] Neuendorf suggests that when human coders are used in content analysis, at least two independent coders should be used. Reliability of human coding is often measured using a statistical measure of inter-coder reliability, or "the amount of agreement or correspondence among two or more coders". [4] Lacy and Riffe identify the measurement of inter-coder reliability as a strength of quantitative content analysis, arguing that, if content analysts do not measure inter-coder reliability, their data are no more reliable than the subjective impressions of a single reader. [19]
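
As a worked illustration of inter-coder agreement (a hypothetical example, not drawn from any particular study), the following Python sketch computes simple percent agreement and Cohen's kappa for two invented coders who assigned the same ten text units to categories:

```python
from collections import Counter

# Hypothetical codes assigned by two independent coders to ten text units
coder_a = ["pos", "pos", "neg", "neu", "pos", "neg", "neg", "neu", "pos", "neg"]
coder_b = ["pos", "neg", "neg", "neu", "pos", "neg", "pos", "neu", "pos", "neg"]

n = len(coder_a)
observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n  # percent agreement

# Expected chance agreement, from each coder's marginal category distribution
freq_a, freq_b = Counter(coder_a), Counter(coder_b)
expected = sum((freq_a[c] / n) * (freq_b[c] / n) for c in set(coder_a) | set(coder_b))

kappa = (observed - expected) / (1 - expected)  # Cohen's kappa
print(f"Percent agreement: {observed:.2f}, Cohen's kappa: {kappa:.2f}")
```

Unlike raw percent agreement, kappa corrects for the agreement that would be expected by chance, which is why chance-corrected coefficients are generally preferred in reliability reporting.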

According to today's reporting standards, quantitative content analyses should be published with complete codebooks, and appropriate inter-coder or inter-rater reliability coefficients, based on empirical pre-tests, should be reported for all variables or measures in the codebook. [4] [20] [21] Furthermore, the validity of all variables or measures in the codebook must be ensured. This can be achieved through the use of established measures that have proven their validity in earlier studies. The content validity of the measures can also be checked by experts from the field who scrutinize and then approve or correct coding instructions, definitions and examples in the codebook.

Kinds of text

There are five types of texts in content analysis:

  1. written text, such as books and papers
  2. oral text, such as speech and theatrical performance
  3. iconic text, such as drawings, paintings, and icons
  4. audio-visual text, such as TV programs, movies, and videos
  5. hypertexts, which are texts found on the Internet

History

Content analysis is research using the categorization and classification of speech, written text, interviews, images, or other forms of communication. In its beginnings, using the first newspapers at the end of the 19th century, analysis was done manually by measuring the number of columns given to a subject. The approach can also be traced back to a university student studying patterns in Shakespeare's literature in 1893. [22]

Over the years, content analysis has been applied to a variety of scopes. Hermeneutics and philology have long used content analysis to interpret sacred and profane texts and, in many cases, to attribute texts' authorship and authenticity. [3] [5]

In recent times, particularly with the advent of mass communication, content analysis has seen increasing use for deeply analyzing and understanding media content and media logic. The political scientist Harold Lasswell formulated the core questions of content analysis in its early-to-mid 20th-century mainstream version: "Who says what, to whom, why, to what extent and with what effect?". [23] The strong emphasis on a quantitative approach initiated by Lasswell was carried forward by another "father" of content analysis, Bernard Berelson, who proposed a definition of content analysis which, from this point of view, is emblematic: "a research technique for the objective, systematic and quantitative description of the manifest content of communication". [24]

Quantitative content analysis has enjoyed renewed popularity in recent years thanks to technological advances and fruitful application in mass communication and personal communication research. Content analysis of textual big data produced by new media, particularly social media and mobile devices, has become popular. These approaches take a simplified view of language that ignores the complexity of semiosis, the process by which meaning is formed out of language. Quantitative content analysts have been criticized for limiting the scope of content analysis to simple counting, and for applying the measurement methodologies of the natural sciences without reflecting critically on their appropriateness to social science. [25] Conversely, qualitative content analysts have been criticized for being insufficiently systematic and too impressionistic. [25] Krippendorff argues that quantitative and qualitative approaches to content analysis tend to overlap, and that there can be no generalisable conclusion as to which approach is superior. [25]

Content analysis can also be described as studying traces, which are documents from past times, and artifacts, which are non-linguistic documents. Texts are understood to be produced by communication processes in a broad sense of that phrase, often gaining meaning through abduction. [3] [26]

Latent and manifest content

Manifest content is readily understandable at its face value. Its meaning is direct. Latent content is not as overt, and requires interpretation to uncover the meaning or implication. [27]

Uses

Holsti groups fifteen uses of content analysis into three basic categories: [28]

  1. make inferences about the antecedents of a communication
  2. describe and make inferences about the characteristics of a communication
  3. make inferences about the effects of a communication

He also places these uses into the context of the basic communication paradigm.

The following overview summarizes these fifteen uses of content analysis in terms of their general purpose, the element of the communication paradigm to which they apply, and the general question they are intended to answer.

Uses of Content Analysis by Purpose, Communication Element, and Question

  Purpose: Make inferences about the antecedents of communications
  Elements and questions: Source (Who?); Encoding process (Why?)
  Uses:
    • Secure political & military intelligence
    • Analyse traits of individuals
    • Infer cultural aspects & change
    • Provide legal & evaluative evidence

  Purpose: Describe & make inferences about the characteristics of communications
  Element and question: Channel (How?)
  Uses:
    • Analyse techniques of persuasion
    • Analyse style
  Element and question: Message (What?)
  Uses:
    • Describe trends in communication content
    • Relate known characteristics of sources to messages they produce
    • Compare communication content to standards
  Element and question: Recipient (To whom?)
  Uses:
    • Relate known characteristics of audiences to messages produced for them
    • Describe patterns of communication

  Purpose: Make inferences about the consequences of communications
  Element and question: Decoding process (With what effect?)

Note. Purpose, communication element, & question from Holsti. [28] Uses primarily from Berelson [29] as adapted by Holsti. [28]

As a counterpoint, there are limits to the scope of use for the procedures that characterize content analysis. In particular, if access to the goal of analysis can be obtained by direct means without material interference, then direct measurement techniques yield better data. [30] Content analysis attempts to quantifiably describe communications whose features are primarily categorical (limited usually to a nominal or ordinal scale) via selected conceptual units (the unitization), which are assigned values (the categorization) for enumeration, while monitoring intercoder reliability. If instead the target quantity is already directly measurable, typically on an interval or ratio scale, and especially if it is a continuous physical quantity, then such targets usually are not listed among those needing the "subjective" selections and formulations of content analysis. [31] [32] [33] [34] [35] [36] [20] [37] For example (from mixed research and clinical application), as medical images communicate diagnostic features to physicians, neuroimaging's stroke (infarct) volume scale called ASPECTS is unitized as 10 qualitatively delineated (unequal) brain regions in the middle cerebral artery territory, which it categorizes as being at least partly versus not at all infarcted in order to enumerate the latter, with published series often assessing intercoder reliability by Cohen's kappa. These operations (unitization, categorization, enumeration, and intercoder assessment) impose an uncredited form of content analysis onto an estimation of infarct extent, which instead is easily and more accurately measured as a volume directly on the images. [38] [39] ("Accuracy ... is the highest form of reliability." [40]) The concomitant clinical assessment, however, by the National Institutes of Health Stroke Scale (NIHSS) or the modified Rankin Scale (mRS), retains the necessary form of content analysis. Recognizing potential limits of content analysis across the contents of language and images alike, Klaus Krippendorff affirms that "comprehen[sion] ... may ... not conform at all to the process of classification and/or counting by which most content analyses proceed," [41] suggesting that content analysis might materially distort a message.

The development of the initial coding scheme

The process of developing the initial coding scheme or approach to coding is contingent on the particular content analysis approach selected. In a directed content analysis, scholars draft a preliminary coding scheme from pre-existing theory or assumptions, whereas in the conventional content analysis approach the initial coding scheme is developed from the data.

The conventional process of coding

With either approach above, it is advisable for researchers to immerse themselves in the data in order to obtain an overall picture. Furthermore, identifying a consistent and clear unit of coding is vital; researchers' choices range from a single word to several paragraphs, and from texts to iconic symbols. Last, researchers construct relationships between codes by sorting them into specific categories or themes. [42]

Related Research Articles

Quantitative marketing research is the application of quantitative research techniques to the field of marketing research. It has roots in both the positivist view of the world, and the modern marketing viewpoint that marketing is an interactive process in which both the buyer and seller reach a satisfying agreement on the "four Ps" of marketing: Product, Price, Place (location) and Promotion.

A case study is an in-depth, detailed examination of a particular case within a real-world context. For example, case studies in medicine may focus on an individual patient or ailment; case studies in business might cover a particular firm's strategy or a broader market; similarly, case studies in politics can range from a narrow happening over time like the operations of a specific political campaign, to an enormous undertaking like world war, or more often the policy analysis of real-world problems affecting multiple stakeholders.

Multimethodology

Multimethodology or multimethod research includes the use of more than one method of data collection or research in a research study or set of related studies. Mixed methods research is more specific in that it includes the mixing of qualitative and quantitative data, methods, methodologies, and/or paradigms in a research study or set of related studies. One could argue that mixed methods research is a special case of multimethod research. Another applicable, but less often used label, for multi or mixed research is methodological pluralism. All of these approaches to professional and academic research emphasize that monomethod research can be improved through the use of multiple data sources, methods, research methodologies, perspectives, standpoints, and paradigms.

Qualitative research

Qualitative research is a type of research that aims to gather and analyse non-numerical (descriptive) data in order to gain an understanding of individuals' social reality, including understanding their attitudes, beliefs, and motivation. This type of research typically involves in-depth interviews, focus groups, or observations in order to collect data that is rich in detail and context. Qualitative research is often used to explore complex phenomena or to gain insight into people's experiences and perspectives on a particular topic. It is particularly useful when researchers want to understand the meaning that people attach to their experiences or when they want to uncover the underlying reasons for people's behavior. Qualitative methods include ethnography, grounded theory, discourse analysis, and interpretative phenomenological analysis. Qualitative research methods have been used in sociology, anthropology, political science, psychology, communication studies, social work, folklore, educational research, information science and software engineering research.

Social research

Social research is research conducted by social scientists following a systematic plan. Social research methodologies can be classified as quantitative and qualitative.

Quantitative research

Quantitative research is a research strategy that focuses on quantifying the collection and analysis of data. It is formed from a deductive approach where emphasis is placed on the testing of theory, shaped by empiricist and positivist philosophies.

Methodology

In its most common sense, methodology is the study of research methods. However, the term can also refer to the methods themselves or to the philosophical discussion of associated background assumptions. A method is a structured procedure for bringing about a certain goal, like acquiring knowledge or verifying knowledge claims. This normally involves various steps, like choosing a sample, collecting data from this sample, and interpreting the data. The study of methods concerns a detailed description and analysis of these processes. It includes evaluative aspects by comparing different methods. This way, it is assessed what advantages and disadvantages they have and for what research goals they may be used. These descriptions and evaluations depend on philosophical background assumptions. Examples are how to conceptualize the studied phenomena and what constitutes evidence for or against them. When understood in the widest sense, methodology also includes the discussion of these more abstract issues.

Grounded theory

Grounded theory is a systematic methodology that has been largely applied to qualitative research conducted by social scientists. The methodology involves the construction of hypotheses and theories through the collecting and analysis of data. Grounded theory involves the application of inductive reasoning. The methodology contrasts with the hypothetico-deductive model used in traditional scientific research.

Narrative inquiry

Narrative inquiry or narrative analysis emerged as a discipline from within the broader field of qualitative research in the early 20th century, as evidence exists that this method was used in psychology and sociology. Narrative inquiry uses field texts, such as stories, autobiography, journals, field notes, letters, conversations, interviews, family stories, photos, and life experience, as the units of analysis to research and understand the way people create meaning in their lives as narratives.

In statistics, inter-rater reliability is the degree of agreement among independent observers who rate, code, or assess the same phenomenon.

Designing Social Inquiry: Scientific Inference in Qualitative Research is an influential 1994 book written by Gary King, Robert Keohane, and Sidney Verba that lays out guidelines for conducting qualitative research. The central thesis of the book is that qualitative and quantitative research share the same "logic of inference." The book primarily applies lessons from regression-oriented analysis to qualitative research, arguing that the same logics of causal inference can be used in both types of research.

In the social sciences, coding is an analytical process in which data, in both quantitative and qualitative form, are categorized to facilitate analysis.

Klaus Krippendorff was a communication scholar, social science methodologist, and cyberneticist, and was the Gregory Bateson professor for Cybernetics, Language, and Culture at the University of Pennsylvania's Annenberg School for Communication. He wrote an influential textbook on content analysis and is the creator of the widely used and eponymous measure of interrater reliability, Krippendorff's alpha. In 1984–1985, he served as the president of the International Communication Association, one of the two largest professional associations for scholars of communication.

In statistics, qualitative comparative analysis (QCA) is a data analysis technique based on set theory that examines the relationship of conditions to an outcome. QCA describes the relationship in terms of necessary conditions and sufficient conditions. The technique was originally developed by Charles Ragin in 1987 to study data sets that are too small for linear regression analysis but large enough for cross-case analysis.

RQDA

RQDA is an R package for computer assisted qualitative data analysis or CAQDAS. It is installable from, and runs within, the R statistical software, but has a separate window running a graphical user interface. RQDA's approach allows for tight integration of the constructivist approach of qualitative research with quantitative data analysis which can increase the rigor, transparency and validity of qualitative research.

Krippendorff's alpha coefficient, named after academic Klaus Krippendorff, is a statistical measure of the agreement achieved when coding a set of units of analysis. Since the 1970s, alpha has been used in content analysis where textual units are categorized by trained readers, in counseling and survey research where experts code open-ended interview data into analyzable terms, in psychological testing where alternative tests of the same phenomena need to be compared, or in observational studies where unstructured happenings are recorded for subsequent analysis.

Causal inference is the process of determining the independent, actual effect of a particular phenomenon that is a component of a larger system. The main difference between causal inference and inference of association is that causal inference analyzes the response of an effect variable when a cause of the effect variable is changed. The study of why things occur is called etiology, and can be described using the language of scientific causal notation. Causal inference is said to provide the evidence of causality theorized by causal reasoning.

Thematic analysis is one of the most common forms of analysis within qualitative research. It emphasizes identifying, analysing and interpreting patterns of meaning within qualitative data. Thematic analysis is often understood as a method or technique in contrast to most other qualitative analytic approaches – such as grounded theory, discourse analysis, narrative analysis and interpretative phenomenological analysis – which can be described as methodologies or theoretically informed frameworks for research. Thematic analysis is best thought of as an umbrella term for a variety of different approaches, rather than a singular method. Different versions of thematic analysis are underpinned by different philosophical and conceptual assumptions and are divergent in terms of procedure. Leading thematic analysis proponents, psychologists Virginia Braun and Victoria Clarke, distinguish between three main types of thematic analysis: coding reliability approaches, code book approaches and reflexive approaches. They describe their own widely used approach, first outlined in 2006 in the journal Qualitative Research in Psychology, as reflexive thematic analysis. Their 2006 paper has over 120,000 Google Scholar citations and according to Google Scholar is the most cited academic paper published in 2006. The popularity of this paper exemplifies the growing interest in thematic analysis as a distinct method.

Online content analysis or online textual analysis refers to a collection of research techniques used to describe and make inferences about online material through systematic coding and interpretation. Online content analysis is a form of content analysis for analysis of Internet-based communication.

Public policy research is a multidisciplinary field that delves into the systematic examination and comprehensive analysis of policy matters and their far-reaching implications on society as a whole. The field explores diverse facets of public policy including political and administrative systems, institutions, actors, norms and traditions, communication and knowledge practices and the conception, execution and evaluation of policy decisions. Public policy research and policy analysis is conducted in multiple sectors including academic institutions, think tanks, consulting firms, not for profit organisations and government agencies. It is a major subfield of political science but is also a subfield in many other areas including public health and political economy. Research involves consideration of the interplay between various stakeholders, including policymakers, interest groups, and the general public, as well as an examination of the societal, economic, and political factors that shape policy decision-making processes. Public policy researchers explore the complexities of policy formulation with the aim to contribute both to understanding and improving the policy process overall, and to enhancing public policy effectiveness and societal well-being in specific policy arenas.

References

  1. Bryman, Alan; Bell, Emma (2011). Business research methods (3rd ed.). Cambridge: Oxford University Press. ISBN 9780199583409. OCLC 746155102.
  2. Hodder, I. (1994). The interpretation of documents and material culture. Thousand Oaks etc.: Sage. p. 155. ISBN 978-0761926870.
  3. Tipaldo, G. (2014). L'analisi del contenuto e i mass media. Bologna, IT: Il Mulino. p. 42. ISBN 978-88-15-24832-9.
  4. Neuendorf, Kimberly A. (30 May 2016). The Content Analysis Guidebook. SAGE. ISBN 978-1-4129-7947-4.
  5. Krippendorff, Klaus (2004). Content Analysis: An Introduction to Its Methodology (2nd ed.). Thousand Oaks, CA: Sage. p. 413. ISBN 9780761915454.
  6. Vaismoradi, Mojtaba; Turunen, Hannele; Bondas, Terese (2013). "Content analysis and thematic analysis: Implications for conducting a qualitative descriptive study". Nursing & Health Sciences. 15 (3): 398–405. doi:10.1111/nhs.12048. ISSN 1442-2018. PMID 23480423. S2CID 10881485.
  7. Kracauer, Siegfried (1952). "The Challenge of Qualitative Content Analysis". Public Opinion Quarterly. 16 (4, Special Issue on International Communications Research): 631. doi:10.1086/266427. ISSN 0033-362X.
  8. White, Marilyn Domas; Marsh, Emily E. (2006). "Content Analysis: A Flexible Methodology". Library Trends. 55 (1): 22–45. doi:10.1353/lib.2006.0053. hdl:2142/3670. ISSN 1559-0682. S2CID 6342233.
  9. Oehmer-Pedrazzi, Franziska; Kessler, Sabrina; Humprecht, Edda; Sommer, Katharina; Castro Herrero, Laia (2022). "DOCA - Database of Categories for Content Analysis". ISSN 2673-8597.
  10. Wulf, Tim; Possler, Daniel; Breuer, Johannes (2021). "Sexualization (Video Games)". DOCA - Database of Variables for Content Analysis. doi:10.34778/3e. ISSN 2673-8597. S2CID 233683109.
  11. Esau, Katharina (2021). "Hate speech (Hate Speech/Incivility)". DOCA - Database of Variables for Content Analysis. doi:10.34778/5a. ISSN 2673-8597. S2CID 235551271.
  12. Döring, Nicola; Walter, Roberto (2022). "Iconography of Child Sexual Abuse in the News (Justice and Crime Reporting)". DOCA - Database of Variables for Content Analysis. doi:10.34778/2zu. ISSN 2673-8597. S2CID 248329276.
  13. Leidecker-Sandmann, Melanie (2021). "Personalization (Election Campaign Coverage)". DOCA - Database of Variables for Content Analysis. doi:10.34778/2g. ISSN 2673-8597. S2CID 235520184.
  14. Pfeiffer, Silvia; Fischer, Stefan; Effelsberg, Wolfgang (1996). "Automatic audio content analysis". Technical Reports. 96.
  15. Grimmer, Justin; Stewart, Brandon M. (2013). "Text as data: The promise and pitfalls of automatic content analysis methods for political texts". Political Analysis. 21 (3): 267–297.
  16. Nasukawa, Tetsuya; Yi, Jeonghee (2003). "Sentiment analysis: Capturing favorability using natural language processing". Proceedings of the 2nd International Conference on Knowledge Capture. ACM.
  17. Conway, Mike (March 2006). "The Subjective Precision of Computers: A Methodological Comparison with Human Coding in Content Analysis". Journalism & Mass Communication Quarterly. 83 (1): 186–200. doi:10.1177/107769900608300112. ISSN 1077-6990. S2CID 143292050.
  18. Weber, Robert Philip (1990). Basic Content Analysis (2nd ed.). Newbury Park, CA: Sage. p. 12. ISBN 9780803938632.
  19. Lacy, Stephen R.; Riffe, Daniel (1993). "Sins of Omission and Commission in Mass Communication Quantitative Research". Journalism & Mass Communication Quarterly. 70 (1): 126–132. doi:10.1177/107769909307000114. S2CID 144076335.
  20. Krippendorff, Klaus (2004). Content Analysis: An Introduction to Its Methodology (2nd ed.). Thousand Oaks, CA: Sage. pp. (passim). ISBN 0761915451. (On content analysis's quantitative nature, unitization and categorization, and uses by scale type.)
  21. Oleinik, Anton; Popova, Irina; Kirdina, Svetlana; Shatalova, Tatyana (2014). "On the choice of measures of reliability and validity in the content-analysis of texts". Quality & Quantity. 48 (5): 2703–2718. doi:10.1007/s11135-013-9919-0. ISSN 1573-7845. S2CID 144174429.
  22. Sumpter, Randall S. (July 2001). "News about News". Journalism History. 27 (2): 64–72. doi:10.1080/00947679.2001.12062572. ISSN 0094-7679. S2CID 140499059.
  23. Lasswell, Harold (1948). "The Structure and Function of Communication in Society". In Bryson, L. (ed.). The Communication of Ideas (PDF). New York: Harper and Row. p. 216.
  24. Berelson, B. (1952). Content Analysis in Communication Research. Glencoe: Free Press. p. 18.
  25. Krippendorff, Klaus (2004). Content Analysis: An Introduction to Its Methodology. California: Sage. pp. 87–89. ISBN 978-0-7619-1544-7.
  26. Timmermans, Stefan; Tavory, Iddo (2012). "Theory Construction in Qualitative Research" (PDF). Sociological Theory. 30 (3): 167–186. doi:10.1177/0735275112457914. S2CID 145177394. Archived from the original (PDF) on 2019-08-19. Retrieved 2018-12-09.
  27. Lee, Jang-Hwan; Kim, Young-Gul; Yu, Sung-Ho (2001). "Stage model for knowledge management". Proceedings of the 34th Annual Hawaii International Conference on System Sciences. IEEE Comput. Soc. p. 10. doi:10.1109/hicss.2001.927103. ISBN 0-7695-0981-9. S2CID 34182315.
  28. Holsti, Ole R. (1969). Content Analysis for the Social Sciences and Humanities. Reading, MA: Addison-Wesley. pp. 14–93. (Table 2-1, page 26.)
  29. Berelson, Bernard (1952). Content Analysis in Communication Research. Glencoe, Ill: Free Press.
  30. Holsti, Ole R. (1969). Content Analysis for the Social Sciences and Humanities. Reading, MA: Addison-Wesley. pp. 15–16.
  31. Holsti, Ole R. (1969). Content Analysis for the Social Sciences and Humanities. Reading, MA: Addison-Wesley.
  32. Neuendorf, Kimberly A. (2002). The Content Analysis Guidebook. Thousand Oaks, CA: Sage. pp. 52–54. ISBN 0761919783. (On content analysis's descriptive role.)
  33. Agresti, Alan (2002). Categorical Data Analysis (2nd ed.). Hoboken, NJ: Wiley. pp. 2–4. ISBN 0471360937. (On the meanings of "categorical" and other measurement scales.)
  34. Delfico, Joseph F. (1996). Content Analysis: A Methodology for Structuring and Analyzing Written Material. Washington, DC: United States General Accounting Office. pp. 19–21. (Linked to a PDF.)
  35. Delfico, Joseph F. (1996). Content Analysis: A Methodology for Structuring and Analyzing Written Material. Washington, DC: United States General Accounting Office. (ASCII transcription; Chapter 3:1.1, on uses according to scale type, and Appendix III, on intercoder reliability.)
  36. Carney, T[homas] F[rancis] (1971). "Content Analysis: A Review Essay". Historical Methods Newsletter. 4 (2): 52–61. doi:10.1080/00182494.1971.10593939. (On content analysis's quantitative nature, unitization and categorization, and descriptive role.)
  37. Hall, Calvin S.; Van de Castle, Robert L. (1966). The Content Analysis of Dreams. New York: Appleton-Century-Crofts. pp. 1–16. (Chapter 1, "The Methodology of Content Analysis," on the quantitative nature and uses of content analysis, and quoting "subjective" from page 12.)
  38. Suss, Richard A. (2020). "ASPECTS, The Mismeasure of Stroke: A Metrological Investigation". OSF Preprints. doi:10.31219/osf.io/c4tkp. S2CID 242764761. (§3, §6, and §7 for the nature of, risks of, and alternative to ASPECTS, and page 76 for comparison to content analysis.)
  39. Suss, Richard A.; Pinho, Marco C. (2020). "ASPECTS Distorts Infarct Volume Measurement". American Journal of Neuroradiology. 41 (5): E28. doi:10.3174/ajnr.A6485. PMC 7228155. PMID 32241774. S2CID 214767536.
  40. Weber, Robert Philip (1990). Basic Content Analysis (2nd ed.). Newbury Park, CA: Sage. p. 17. ISBN 0803938632.
  41. Krippendorff, Klaus (1974). "Review of Thomas F. Carney, Content Analysis: A Technique for Systematic Inference from Communications". University of Pennsylvania Scholarly Commons, Annenberg School of Communication Departmental Papers. (Quote from 4th page, unnumbered.)
  42. Frey, Bruce B. (2018). Content Analysis. Sage. doi:10.4135/9781506326139. ISBN 9781506326153. S2CID 4110403. Retrieved December 16, 2019.

Further reading