Fleiss' kappa


Fleiss' kappa (named after Joseph L. Fleiss) is a statistical measure for assessing the reliability of agreement between a fixed number of raters when assigning categorical ratings to a number of items or classifying items. This contrasts with other kappas such as Cohen's kappa, which only work when assessing the agreement between not more than two raters or the intra-rater reliability (for one appraiser versus themself). The measure calculates the degree of agreement in classification over that which would be expected by chance.


Fleiss' kappa can be used with binary or nominal-scale data. It can also be applied to ordinal data (ranked data): the Minitab online documentation [1] gives an example. However, this document notes: "When you have ordinal ratings, such as defect severity ratings on a scale of 1–5, Kendall's coefficients, which account for ordering, are usually more appropriate statistics to determine association than kappa alone." Keep in mind, however, that Kendall's rank coefficients are only appropriate for rank data.

Introduction

Fleiss' kappa is a generalisation of Scott's pi statistic, [2] a statistical measure of inter-rater reliability. [3] It is also related to Cohen's kappa statistic and Youden's J statistic, which may be more appropriate in certain instances. [4] Whereas Scott's pi and Cohen's kappa work for only two raters, Fleiss' kappa works for any number of raters giving categorical ratings to a fixed number of items, on the condition that, for each item, raters are randomly sampled. It can be interpreted as expressing the extent to which the observed amount of agreement among raters exceeds what would be expected if all raters made their ratings completely randomly. It is important to note that whereas Cohen's kappa assumes the same two raters have rated a set of items, Fleiss' kappa specifically allows that although there are a fixed number of raters (e.g., three), different items may be rated by different individuals. [3] That is, Item 1 is rated by Raters A, B, and C; but Item 2 could be rated by Raters D, E, and F. The condition of random sampling among raters makes Fleiss' kappa not suited for cases where all raters rate all patients. [5]

Agreement can be thought of as follows: if a fixed number of people assign categorical ratings to a number of items, then the kappa, $\kappa$, gives a measure of how consistent the ratings are. It can be defined as

$$\kappa = \frac{\bar{P} - \bar{P}_e}{1 - \bar{P}_e} \qquad (1)$$

The factor $1 - \bar{P}_e$ gives the degree of agreement that is attainable above chance, and $\bar{P} - \bar{P}_e$ gives the degree of agreement actually achieved above chance. If the raters are in complete agreement then $\kappa = 1$. If there is no agreement among the raters (other than what would be expected by chance) then $\kappa \le 0$.

An example of using Fleiss' kappa may be the following: consider 14 psychiatrists who are asked to look at ten patients. For each patient, each psychiatrist gives one of possibly five diagnoses. These are compiled into a matrix, and Fleiss' kappa can be computed from this matrix (see example below) to show the degree of agreement between the psychiatrists above the level of agreement expected by chance.

Definition

Let $N$ be the total number of subjects, let $n$ be the number of ratings per subject, and let $k$ be the number of categories into which assignments are made. The subjects are indexed by $i = 1, \ldots, N$ and the categories are indexed by $j = 1, \ldots, k$. Let $n_{ij}$ represent the number of raters who assigned the $i$-th subject to the $j$-th category.

First calculate $p_j$, the proportion of all assignments which were to the $j$-th category:

$$p_j = \frac{1}{N n} \sum_{i=1}^{N} n_{ij} \qquad (2)$$

Now calculate $P_i$, the extent to which raters agree for the $i$-th subject (i.e., compute how many rater–rater pairs are in agreement, relative to the number of all possible rater–rater pairs):

$$P_i = \frac{1}{n(n-1)} \sum_{j=1}^{k} n_{ij}(n_{ij} - 1) = \frac{1}{n(n-1)} \left[ \left( \sum_{j=1}^{k} n_{ij}^2 \right) - n \right] \qquad (3)$$

Note that $P_i$ is bounded between 0, when ratings are assigned equally over all categories, and 1, when all ratings are assigned to a single category.

Now compute $\bar{P}$, the mean of the $P_i$'s, and $\bar{P}_e$, which go into the formula for $\kappa$:

$$\bar{P} = \frac{1}{N} \sum_{i=1}^{N} P_i \qquad (4)$$

$$\bar{P}_e = \sum_{j=1}^{k} p_j^2 \qquad (5)$$
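Equations (1)–(5) translate directly into a few lines of code. The sketch below is illustrative rather than taken from the cited sources: it assumes the ratings have already been tallied into an $N \times k$ count matrix with every row summing to $n$, and the function name fleiss_kappa and the use of NumPy are illustrative choices.

```python
# A minimal sketch of equations (1)-(5), assuming the ratings have already
# been tallied into an N x k matrix `counts` whose entry counts[i, j] is the
# number of raters who assigned subject i to category j (every row sums to n).
import numpy as np

def fleiss_kappa(counts):
    """Fleiss' kappa for an N x k matrix of category counts."""
    counts = np.asarray(counts, dtype=float)
    N, k = counts.shape
    n = counts[0].sum()                          # ratings per subject (assumed equal for all rows)

    p_j = counts.sum(axis=0) / (N * n)                       # eq. (2): proportion of assignments per category
    P_i = ((counts ** 2).sum(axis=1) - n) / (n * (n - 1))    # eq. (3): per-subject agreement
    P_bar = P_i.mean()                                       # eq. (4): mean observed agreement
    P_e_bar = (p_j ** 2).sum()                               # eq. (5): expected chance agreement

    return (P_bar - P_e_bar) / (1 - P_e_bar)                 # eq. (1)
```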

Worked example

Table of values for computing the worked example

Subject    1      2      3      4      5      P_i
1          0      0      0      0      14     1.000
2          0      2      6      4      2      0.253
3          0      0      3      5      6      0.308
4          0      3      9      2      0      0.440
5          2      2      8      1      1      0.330
6          7      7      0      0      0      0.462
7          3      2      6      3      0      0.242
8          2      5      3      2      2      0.176
9          6      5      2      1      0      0.286
10         0      2      2      3      7      0.286
Total      20     28     39     21     32
p_j        0.143  0.200  0.279  0.150  0.229

In this example, for each of ten "subjects" ($N = 10$) fourteen raters ($n = 14$), sampled from a larger group, assign one of five categories ($k = 5$). The categories are presented in the columns, while the subjects are presented in the rows. Each cell lists the number of raters who assigned the indicated (row) subject to the indicated (column) category.

In the table above, $N = 10$, $n = 14$, and $k = 5$. The value $p_j$ is the proportion of all assignments that were made to the $j$-th category. For example, taking the first column,

$$p_1 = \frac{0 + 0 + 0 + 0 + 2 + 7 + 3 + 2 + 6 + 0}{140} = 0.143$$

and taking the second row,

$$P_2 = \frac{1}{14(14-1)} \left[ \left( 0^2 + 2^2 + 6^2 + 4^2 + 2^2 \right) - 14 \right] = 0.253$$

In order to calculate $\bar{P}$, we need to know the sum of the $P_i$,

$$\sum_{i} P_i = 1.000 + 0.253 + \cdots + 0.286 = 3.780$$

Over the whole sheet,

$$\bar{P} = \frac{3.780}{10} = 0.378$$

$$\bar{P}_e = 0.143^2 + 0.200^2 + 0.279^2 + 0.150^2 + 0.229^2 = 0.213$$

$$\kappa = \frac{0.378 - 0.213}{1 - 0.213} = 0.210$$
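As a check, the same result falls out of the sketch given in the Definition section when it is applied to the rows of the table above; the matrix below simply restates the table, and the variable name ratings is an illustrative choice.

```python
# The worked-example table as a 10 x 5 count matrix (rows are subjects,
# columns are categories), fed to the fleiss_kappa sketch defined earlier.
ratings = [
    [0, 0, 0, 0, 14],
    [0, 2, 6, 4, 2],
    [0, 0, 3, 5, 6],
    [0, 3, 9, 2, 0],
    [2, 2, 8, 1, 1],
    [7, 7, 0, 0, 0],
    [3, 2, 6, 3, 0],
    [2, 5, 3, 2, 2],
    [6, 5, 2, 1, 0],
    [0, 2, 2, 3, 7],
]

print(round(fleiss_kappa(ratings), 3))  # 0.21, matching the hand calculation above
```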

Interpretation

Landis & Koch (1977) gave the following table for interpreting values for a 2-annotator 2-class example. [6] This table is however by no means universally accepted. They supplied no evidence to support it, basing it instead on personal opinion. It has been noted that these guidelines may be more harmful than helpful, [7] as the number of categories and subjects will affect the magnitude of the value. For example, the kappa is higher when there are fewer categories. [8]

Subjective example: only for two annotators, on two classes. [6]

Condition        Interpretation
$\kappa$ < 0     Poor agreement
0.01 – 0.20      Slight agreement
0.21 – 0.40      Fair agreement
0.41 – 0.60      Moderate agreement
0.61 – 0.80      Substantial agreement
0.81 – 1.00      Almost perfect agreement

Tests of significance

Statistical packages can calculate a standard score (Z-score) for Cohen's kappa or Fleiss' kappa, which can be converted into a p-value. However, even when the p-value reaches the threshold of statistical significance (typically less than 0.05), it only indicates that the agreement between raters is significantly better than would be expected by chance. The p-value does not tell you, by itself, whether the agreement is good enough to have high predictive value.
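Packages compute that z-score analytically from the variance of kappa under the null hypothesis. As a rough, self-contained illustration of the same idea, the sketch below instead estimates a one-sided p-value by Monte Carlo simulation: it repeatedly generates rating tables under the null hypothesis that every rating is an independent draw from the pooled category proportions, and counts how often chance alone yields a kappa at least as large as the observed one. It reuses the fleiss_kappa function and the ratings matrix from the earlier sketches; this simulation is an illustrative alternative, not the analytic test implemented in standard software.

```python
# Monte Carlo illustration (not the analytic z-test): under the null, each of
# the n ratings for a subject is an independent draw from the pooled category
# proportions p_j.
import numpy as np

def kappa_null_p_value(counts, n_sim=10_000, seed=0):
    """One-sided Monte Carlo p-value for the observed kappa against chance agreement."""
    rng = np.random.default_rng(seed)
    counts = np.asarray(counts, dtype=float)
    N, k = counts.shape
    n = int(counts[0].sum())
    p_j = counts.sum(axis=0) / counts.sum()           # pooled marginal category proportions

    observed = fleiss_kappa(counts)
    null_kappas = np.empty(n_sim)
    for s in range(n_sim):
        simulated = rng.multinomial(n, p_j, size=N)   # N subjects, n random ratings each
        null_kappas[s] = fleiss_kappa(simulated)
    return observed, (null_kappas >= observed).mean()

kappa, p = kappa_null_p_value(ratings)
print(f"kappa = {kappa:.3f}, simulated one-sided p = {p:.4f}")
```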

See also

Cohen's kappa
Scott's pi
Krippendorff's alpha
Intraclass correlation
Kendall's W
Inter-rater reliability

References

  1. "Kappa statistics for Attribute Agreement Analysis", Minitab Inc., retrieved January 22, 2019.
  2. Scott, W. (1955), "Reliability of content analysis: The case of nominal scale coding", Public Opinion Quarterly, 19 (3): 321–325, doi:10.1086/266577, JSTOR 2746450.
  3. Fleiss, J. L. (1971), "Measuring nominal scale agreement among many raters", Psychological Bulletin, 76 (5): 378–382, doi:10.1037/h0031619.
  4. Powers, David M. W. (2012), "The Problem with Kappa", Conference of the European Chapter of the Association for Computational Linguistics (EACL 2012), Joint ROBUS-UNSUP Workshop, Association for Computational Linguistics.
  5. Hallgren, Kevin A. (2012), "Computing Inter-Rater Reliability for Observational Data: An Overview and Tutorial", Tutorials in Quantitative Methods for Psychology, 8 (1): 23–34, doi:10.20982/tqmp.08.1.p023, PMID 22833776.
  6. Landis, J. R.; Koch, G. G. (1977), "The measurement of observer agreement for categorical data", Biometrics, 33 (1): 159–174, doi:10.2307/2529310, JSTOR 2529310, PMID 843571.
  7. Gwet, K. L. (2014), Handbook of Inter-Rater Reliability (4th ed.), Gaithersburg, MD: Advanced Analytics, LLC, Chapter 6, ISBN 978-0970806284.
  8. Sim, J.; Wright, C. C. (2005), "The Kappa Statistic in Reliability Studies: Use, Interpretation, and Sample Size Requirements", Physical Therapy, 85 (3): 257–268, doi:10.1093/ptj/85.3.257.

Further reading