Fleiss' kappa

Fleiss' kappa (named after Joseph L. Fleiss) is a statistical measure for assessing the reliability of agreement between a fixed number of raters when assigning categorical ratings to a number of items or classifying items. It contrasts with other kappas such as Cohen's kappa, which only works for assessing agreement between two raters, or for intra-rater reliability (one rater versus themself). The measure calculates the degree of agreement in classification over that which would be expected by chance.

Fleiss' kappa can be used with binary or nominal-scale data. It can also be applied to ordinal data (ranked data): the Minitab online documentation [1] gives an example. However, this document notes: "When you have ordinal ratings, such as defect severity ratings on a scale of 1–5, Kendall's coefficients, which account for ordering, are usually more appropriate statistics to determine association than kappa alone." Keep in mind, however, that Kendall rank coefficients are appropriate only for rank data.

Introduction

Fleiss' kappa is a generalisation of Scott's pi statistic, [2] a statistical measure of inter-rater reliability. [3] It is also related to Cohen's kappa statistic and Youden's J statistic, which may be more appropriate in certain instances. [4] Whereas Scott's pi and Cohen's kappa work for only two raters, Fleiss' kappa works for any number of raters giving categorical ratings to a fixed number of items, on the condition that for each item the raters are randomly sampled. It can be interpreted as expressing the extent to which the observed amount of agreement among raters exceeds what would be expected if all raters made their ratings completely randomly. It is important to note that whereas Cohen's kappa assumes that the same two raters have rated a set of items, Fleiss' kappa specifically allows that although there are a fixed number of raters (e.g., three), different items may be rated by different individuals. [3] That is, Item 1 is rated by Raters A, B, and C; but Item 2 could be rated by Raters D, E, and F. The condition of random sampling among raters makes Fleiss' kappa unsuited for cases where all raters rate all items. [5]

Agreement can be thought of as follows: if a fixed number of people assign numerical ratings to a number of items, then the kappa will give a measure of how consistent the ratings are. The kappa, $\kappa$, is defined as

$$\kappa = \frac{\bar{P} - \bar{P}_e}{1 - \bar{P}_e} \qquad (1)$$

The factor $1 - \bar{P}_e$ gives the degree of agreement that is attainable above chance, and $\bar{P} - \bar{P}_e$ gives the degree of agreement actually achieved above chance. If the raters are in complete agreement then $\kappa = 1$. If there is no agreement among the raters (other than what would be expected by chance) then $\kappa \le 0$.

An example of using Fleiss' kappa may be the following: consider fourteen psychiatrists who are asked to look at ten patients. For each patient, the psychiatrists each give one of five possible diagnoses. These are compiled into a matrix, and Fleiss' kappa can be computed from this matrix (see example below) to show the degree of agreement between the psychiatrists above the level of agreement expected by chance.

Definition

Let $N$ be the total number of elements, let $n$ be the number of ratings per element, and let $k$ be the number of categories into which assignments are made. The elements are indexed by $i = 1, \dots, N$ and the categories are indexed by $j = 1, \dots, k$. Let $n_{ij}$ represent the number of raters who assigned the $i$-th element to the $j$-th category.

First calculate $p_j$, the proportion of all assignments which were made to the $j$-th category:

$$p_j = \frac{1}{N n} \sum_{i=1}^{N} n_{ij} \qquad (2)$$

Now calculate $P_i$, the extent to which raters agree for the $i$-th element (i.e., compute how many rater-rater pairs are in agreement, relative to the number of all possible rater-rater pairs):

$$P_i = \frac{1}{n(n-1)} \sum_{j=1}^{k} n_{ij}(n_{ij} - 1) = \frac{1}{n(n-1)} \left[ \sum_{j=1}^{k} n_{ij}^2 - n \right] \qquad (3)$$

Note that $P_i$ is bounded between 0, when ratings are assigned equally over all categories, and 1, when all ratings are assigned to a single category.

Now compute $\bar{P}$, the mean of the $P_i$'s, and $\bar{P}_e$, which enter the formula for $\kappa$:

$$\bar{P} = \frac{1}{N} \sum_{i=1}^{N} P_i \qquad (4)$$

$$\bar{P}_e = \sum_{j=1}^{k} p_j^2 \qquad (5)$$
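
Equations (1)–(5) translate directly into code. The following is a minimal Python sketch, under the assumption that the ratings are supplied as an N × k matrix of counts in which every row sums to $n$; the function name fleiss_kappa is ours for illustration (statsmodels provides an implementation in statsmodels.stats.inter_rater, which may be preferable in practice).

```python
def fleiss_kappa(ratings):
    """Fleiss' kappa for an N x k matrix of counts, where ratings[i][j]
    is the number of raters who assigned subject i to category j and
    every row sums to the same number of ratings n."""
    N = len(ratings)            # number of subjects
    k = len(ratings[0])         # number of categories
    n = sum(ratings[0])         # ratings per subject

    # Equation (2): p_j, the share of all N*n assignments in category j.
    p = [sum(row[j] for row in ratings) / (N * n) for j in range(k)]

    # Equation (3): P_i, the proportion of agreeing rater pairs for subject i.
    P = [(sum(c * c for c in row) - n) / (n * (n - 1)) for row in ratings]

    # Equations (4) and (5): mean observed agreement and chance agreement.
    P_bar = sum(P) / N
    P_e = sum(pj * pj for pj in p)

    # Equation (1).
    return (P_bar - P_e) / (1 - P_e)
```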

Worked example

Table of values for computing the worked example

                 Category
Subject      1      2      3      4      5      P_i
   1         0      0      0      0     14    1.000
   2         0      2      6      4      2    0.253
   3         0      0      3      5      6    0.308
   4         0      3      9      2      0    0.440
   5         2      2      8      1      1    0.330
   6         7      7      0      0      0    0.462
   7         3      2      6      3      0    0.242
   8         2      5      3      2      2    0.176
   9         6      5      2      1      0    0.286
  10         0      2      2      3      7    0.286
 Total      20     28     39     21     32
  p_j      0.143  0.200  0.279  0.150  0.229

In this example, for each of ten "subjects" ($N = 10$), fourteen raters ($n = 14$), sampled from a larger group, assign ratings drawn from five categories ($k = 5$). The categories are presented in the columns, while the subjects are presented in the rows. Each cell of the table above lists the number of raters who assigned the indicated (row) subject to the indicated (column) category.

In the table, given that $N = 10$, $n = 14$, and $k = 5$, the value $p_j$ is the proportion of all assignments that were made to the $j$-th category. For example, taking the first column,

$$p_1 = \frac{0+0+0+0+2+7+3+2+6+0}{140} = 0.143,$$

and taking the second row,

$$P_2 = \frac{1}{14(14-1)} \left( 0^2 + 2^2 + 6^2 + 4^2 + 2^2 - 14 \right) = 0.253.$$

In order to calculate $\kappa$, we need to know the sum of the $P_i$,

$$\sum_i P_i = 1.000 + 0.253 + 0.308 + 0.440 + 0.330 + 0.462 + 0.242 + 0.176 + 0.286 + 0.286 = 3.783,$$

so that $\bar{P} = 3.783 / 10 = 0.378$, and the chance-agreement term

$$\bar{P}_e = 0.143^2 + 0.200^2 + 0.279^2 + 0.150^2 + 0.229^2 = 0.213.$$

Over the whole sheet,

$$\kappa = \frac{0.378 - 0.213}{1 - 0.213} = 0.210.$$
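
As a check, the same matrix can be fed to the fleiss_kappa sketch from the Definition section; the list below is simply the worked example's table re-entered in Python.

```python
# The worked example's 10 x 5 table: rows are subjects, columns categories.
ratings = [
    [0, 0, 0, 0, 14],
    [0, 2, 6, 4, 2],
    [0, 0, 3, 5, 6],
    [0, 3, 9, 2, 0],
    [2, 2, 8, 1, 1],
    [7, 7, 0, 0, 0],
    [3, 2, 6, 3, 0],
    [2, 5, 3, 2, 2],
    [6, 5, 2, 1, 0],
    [0, 2, 2, 3, 7],
]
print(round(fleiss_kappa(ratings), 3))  # 0.21
```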

Interpretation

Landis & Koch (1977) gave the following table for interpreting $\kappa$ values for a 2-annotator, 2-class example. [6] This table is, however, by no means universally accepted. They supplied no evidence to support it, basing it instead on personal opinion. It has been noted that these guidelines may be more harmful than helpful, [7] as the number of categories and subjects will affect the magnitude of $\kappa$. For example, the kappa is higher when there are fewer categories. [8]

Subjective example (only for two annotators, on two classes [6]):

Condition       Interpretation
κ < 0           Poor agreement
0.01 – 0.20     Slight agreement
0.21 – 0.40     Fair agreement
0.41 – 0.60     Moderate agreement
0.61 – 0.80     Substantial agreement
0.81 – 1.00     Almost perfect agreement
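
Purely as an illustration of reading the table (and bearing in mind the caveats above), the thresholds can be encoded as a small lookup; the function name and the treatment of κ = 0 as "slight" are our choices, not part of Landis & Koch's presentation.

```python
def landis_koch_label(kappa):
    """Map a kappa value to the Landis & Koch (1977) verbal label."""
    if kappa < 0:
        return "Poor agreement"
    for upper, label in [(0.20, "Slight"), (0.40, "Fair"),
                         (0.60, "Moderate"), (0.80, "Substantial"),
                         (1.00, "Almost perfect")]:
        if kappa <= upper:
            return label + " agreement"
    raise ValueError("kappa cannot exceed 1")

print(landis_koch_label(0.210))  # Fair agreement
```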

Tests of significance

Statistical packages can calculate a standard score (z-score) for Cohen's kappa or Fleiss' kappa, which can be converted into a p-value. However, even when the p-value reaches the threshold of statistical significance (typically less than 0.05), it only indicates that the agreement between raters is significantly better than would be expected by chance. The p-value does not tell, by itself, whether the agreement is good enough to have high predictive value.
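
As a sketch of how such a z-score can be obtained, the following uses the large-sample standard error of kappa under the null hypothesis of chance-only agreement given by Fleiss, Nee & Landis (1979); that formula is an assumption here (it is not derived in this article), and the code reuses fleiss_kappa from the Definition section.

```python
import math

def fleiss_kappa_z(ratings):
    """z-score for Fleiss' kappa under the null hypothesis of chance-only
    agreement, using the Fleiss-Nee-Landis large-sample standard error."""
    N = len(ratings)
    k = len(ratings[0])
    n = sum(ratings[0])
    p = [sum(row[j] for row in ratings) / (N * n) for j in range(k)]

    # Null-hypothesis standard error of kappa.
    q = sum(pj * (1 - pj) for pj in p)
    se0 = (math.sqrt(2.0) / (q * math.sqrt(N * n * (n - 1)))
           * math.sqrt(q * q - sum(pj * (1 - pj) * (1 - 2 * pj) for pj in p)))
    return fleiss_kappa(ratings) / se0

# A two-sided p-value then follows from the standard normal distribution,
# e.g. p = 2 * (1 - Phi(|z|)).
```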

See also

Cohen's kappa
Scott's pi
Krippendorff's alpha
Kendall's W
Inter-rater reliability
Intraclass correlation

References

  1. Kappa statistics for Attribute Agreement Analysis, Minitab Inc., retrieved January 22, 2019.
  2. Scott, W. (1955), "Reliability of content analysis: The case of nominal scale coding", Public Opinion Quarterly, 19 (3): 321–325, doi:10.1086/266577, JSTOR 2746450.
  3. Fleiss, J. L. (1971), "Measuring nominal scale agreement among many raters", Psychological Bulletin, 76 (5): 378–382, doi:10.1037/h0031619.
  4. Powers, David M. W. (2012), "The Problem with Kappa", Conference of the European Chapter of the Association for Computational Linguistics (EACL 2012), Joint ROBUS-UNSUP Workshop, Association for Computational Linguistics.
  5. Hallgren, Kevin A. (2012), "Computing Inter-Rater Reliability for Observational Data: An Overview and Tutorial", Tutorials in Quantitative Methods for Psychology, 8 (1): 23–34, doi:10.20982/tqmp.08.1.p023, PMID 22833776.
  6. Landis, J. R.; Koch, G. G. (1977), "The measurement of observer agreement for categorical data", Biometrics, 33 (1): 159–174, doi:10.2307/2529310, JSTOR 2529310, PMID 843571.
  7. Gwet, K. L. (2014), "Chapter 6", Handbook of Inter-Rater Reliability (4th ed.), Gaithersburg: Advanced Analytics, LLC, ISBN 978-0970806284.
  8. Sim, J.; Wright, C. C. (2005), "The Kappa Statistic in Reliability Studies: Use, Interpretation, and Sample Size Requirements", Physical Therapy, 85 (3): 257–268, doi:10.1093/ptj/85.3.257.
