Pairwise comparison generally is any process of comparing entities in pairs to judge which entity of each pair is preferred, which has a greater amount of some quantitative property, or whether the two entities are identical. The method of pairwise comparison is used in the scientific study of preferences, attitudes, voting systems, social choice, public choice, requirements engineering and multiagent AI systems. In psychology literature, it is often referred to as paired comparison.
Prominent psychometrician L. L. Thurstone first introduced a scientific approach to using pairwise comparisons for measurement in 1927, which he referred to as the law of comparative judgment. Thurstone linked this approach to psychophysical theory developed by Ernst Heinrich Weber and Gustav Fechner. Thurstone demonstrated that the method can be used to order items along a dimension such as preference or importance using an interval-type scale.
Mathematician Ernst Zermelo (1929) first described a model for pairwise comparisons for chess ranking in incomplete tournaments. Although long uncredited, it serves as the basis for methods such as the Elo rating system and is equivalent to the Bradley–Terry model proposed in 1952.
If an individual or organization expresses a preference between two mutually distinct alternatives, this preference can be expressed as a pairwise comparison. If the two alternatives are x and y, the following are the possible pairwise comparisons:
The agent prefers x over y: "x > y" or "xPy"
The agent prefers y over x: "y > x" or "yPx"
The agent is indifferent between both alternatives: "x = y" or "xIy"
In modern psychometric theory, probabilistic models, which include Thurstone's approach (also called the law of comparative judgment), the Bradley–Terry–Luce (BTL) model, and general stochastic transitivity models, [1] are more aptly regarded as measurement models. The BTL model is often applied to pairwise comparison data to scale preferences. It is identical to Thurstone's model if the simple logistic function is used; Thurstone used the normal distribution in applications of the model. Given an arbitrary scale factor, the simple logistic function differs from the cumulative normal ogive by less than 0.01 across the whole range.
In the BTL model, the probability that object j is judged to have more of an attribute than object i is:

P(X_ji = 1) = σ(δ_j − δ_i) = e^(δ_j) / (e^(δ_i) + e^(δ_j)),

where δ_i is the scale location of object i and σ is the logistic function (the inverse of the logit). For example, the scale location might represent the perceived quality of a product, or the perceived weight of an object.
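The BTL choice probability can be computed directly from the scale locations. A minimal sketch; the scale values below are hypothetical, chosen only for illustration:

```python
import math

def btl_probability(delta_j: float, delta_i: float) -> float:
    """BTL model: probability that object j is judged to have more of the
    attribute than object i, i.e. sigma(delta_j - delta_i)."""
    return 1.0 / (1.0 + math.exp(-(delta_j - delta_i)))

# Hypothetical scale locations for the perceived quality of two products.
quality = {"A": 1.5, "B": 0.5}
p_a_over_b = btl_probability(quality["A"], quality["B"])  # sigma(1.0), about 0.73
```

Note that the probabilities for the two orderings of a pair sum to one, as the model requires.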
The BTL model, the Thurstonian model, and the Rasch model for measurement are all closely related and belong to the same class of stochastic transitivity models.
Thurstone used the method of pairwise comparisons as an approach to measuring perceived intensity of physical stimuli, attitudes, preferences, choices, and values. He also studied implications of the theory he developed for opinion polls and political voting (Thurstone, 1959).
Irish research startup OpinionX launched a probabilistic pairwise comparison tool in 2020 which uses a Glicko-style Bayesian rating system along with a weighted selection algorithm to select a subset of statements from the overall list for each participant to vote on. [2]
For a given decision agent, if the information, objective, and alternatives used by the agent remain constant, then it is generally assumed that pairwise comparisons over those alternatives by the decision agent are transitive. Most agree upon what transitivity is, though there is debate about the transitivity of indifference. The rules of transitivity are as follows for a given decision agent.

If xPy and yPz, then xPz
If xPy and yIz, then xPz
If xIy and yPz, then xPz
If xIy and yIz, then xIz
This corresponds to (xPy or xIy) being a total preorder, P being the corresponding strict weak order, and I being the corresponding equivalence relation.
Probabilistic models also give rise to stochastic variants of transitivity, all of which can be verified to satisfy (non-stochastic) transitivity within the bounds of errors of estimates of scale locations of entities. Thus, decisions need not be deterministically transitive in order to apply probabilistic models. However, transitivity will generally hold for a large number of comparisons if models such as the BTL can be effectively applied.
Using a transitivity test [3] one can investigate whether a data set of pairwise comparisons contains a higher degree of transitivity than expected by chance.
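One ingredient of such a test is counting circular triads (a beats b, b beats c, c beats a) in a complete set of comparisons; the observed count can then be compared with its distribution under random responding. A minimal sketch, with hypothetical data:

```python
from itertools import combinations

def circular_triads(prefs, items):
    """Count intransitive (circular) triads in a complete set of pairwise
    comparisons. `prefs` is a set of ordered pairs (x, y) meaning x was
    preferred to y; exactly one of (x, y), (y, x) must be present per pair."""
    count = 0
    for a, b, c in combinations(items, 3):
        wins = {a: 0, b: 0, c: 0}
        for x, y in ((a, b), (a, c), (b, c)):
            winner = x if (x, y) in prefs else y
            wins[winner] += 1
        # In a circular triad each of the three items wins exactly once.
        if set(wins.values()) == {1}:
            count += 1
    return count
```

For example, the cyclic data {A beats B, B beats C, C beats A} contain one circular triad, while a fully transitive ordering contains none.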
Some contend that indifference is not transitive. Consider the following example. Suppose you like apples and you prefer apples that are larger. Now suppose there exists an apple A, an apple B, and an apple C which have identical intrinsic characteristics except for the following. Suppose B is larger than A, but it is not discernible without an extremely sensitive scale. Further suppose C is larger than B, but this also is not discernible without an extremely sensitive scale. However, the difference in sizes between apples A and C is large enough that you can discern that C is larger than A without a sensitive scale. In psychophysical terms, the size difference between A and C is above the just noticeable difference ('jnd') while the size differences between A and B and B and C are below the jnd.
You are confronted with the three apples in pairs without the benefit of a sensitive scale. Therefore, when presented A and B alone, you are indifferent between apple A and apple B; and you are indifferent between apple B and apple C when presented B and C alone. However, when the pair A and C are shown, you prefer C over A.
If pairwise comparisons are in fact transitive with respect to the four rules mentioned above, then pairwise comparisons for a list of alternatives (A1, A2, A3, ..., An−1, and An) can take the form:

A1 ≽ A2 ≽ A3 ≽ ... ≽ An−1 ≽ An,

where each ≽ stands for either strict preference (>) or indifference (=).
For example, if there are three alternatives a, b, and c, then the possible preference orders are:

a > b > c
a > c > b
b > a > c
b > c > a
c > a > b
c > b > a
a > b = c
b > a = c
c > a = b
a = b > c
a = c > b
b = c > a
a = b = c
If the number of alternatives is n, and indifference is not allowed, then the number of possible preference orders for any given n-value is n!. If indifference is allowed, then the number of possible preference orders is the number of total preorders. It can be expressed as a function of n:

∑_{k=0}^{n} k! · S2(n, k),
where S2(n, k) is the Stirling number of the second kind.
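This count can be verified numerically. A short sketch using the explicit inclusion-exclusion formula for the Stirling numbers of the second kind:

```python
from math import comb, factorial

def stirling2(n: int, k: int) -> int:
    """Stirling number of the second kind via inclusion-exclusion."""
    return sum((-1) ** j * comb(k, j) * (k - j) ** n for j in range(k + 1)) // factorial(k)

def total_preorders(n: int) -> int:
    """Number of possible preference orders over n alternatives when
    indifference is allowed: sum over k of k! * S2(n, k)."""
    return sum(factorial(k) * stirling2(n, k) for k in range(n + 1))
```

For n = 1, 2, 3, 4 this gives 1, 3, 13, 75, matching n! = 6 non-indifferent orders plus 7 orders involving indifference in the three-alternative example above.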
One important application of pairwise comparisons is the widely used Analytic Hierarchy Process, a structured technique for helping people deal with complex decisions. It uses pairwise comparisons of tangible and intangible factors to construct ratio scales that are useful in making important decisions. [4] [5]
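As an illustration of the core computation only (the full AHP also involves structuring a hierarchy and checking judgment consistency), priority weights can be approximated as the normalized principal eigenvector of a reciprocal comparison matrix, here via power iteration; the judgments below are hypothetical:

```python
def ahp_weights(matrix, iters=100):
    """Approximate the principal eigenvector of a reciprocal pairwise
    comparison matrix by power iteration; normalized to sum to 1, it
    gives the priority weights."""
    n = len(matrix)
    w = [1.0 / n] * n
    for _ in range(iters):
        w = [sum(matrix[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(w)
        w = [x / s for x in w]
    return w

# Hypothetical judgments: A is 3x as important as B, 5x as important as C,
# and B is 2x as important as C (off-diagonal entries are reciprocal pairs).
m = [[1,     3,   5],
     [1 / 3, 1,   2],
     [1 / 5, 1 / 2, 1]]
weights = ahp_weights(m)  # roughly [0.65, 0.23, 0.12]
```

The reciprocal structure (m[j][i] = 1/m[i][j]) encodes that each pairwise judgment is made once and its inverse is implied.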
Another important application is the Potentially All Pairwise RanKings of all possible Alternatives (PAPRIKA) method. [6] The method involves the decision-maker repeatedly pairwise comparing and ranking alternatives defined on two criteria or attributes at a time and involving a trade-off, and then, if the decision-maker chooses to continue, pairwise comparisons of alternatives defined on successively more criteria. From the pairwise rankings, the relative importance of the criteria to the decision-maker, represented as weights, is determined.
Arrow's impossibility theorem, the general possibility theorem or Arrow's paradox is an impossibility theorem in social choice theory that states that when voters have three or more distinct alternatives (options), no ranked voting electoral system can convert the ranked preferences of individuals into a community-wide ranking while also meeting the specified set of criteria: unrestricted domain, non-dictatorship, Pareto efficiency, and independence of irrelevant alternatives. The theorem is often cited in discussions of voting theory as it is further interpreted by the Gibbard–Satterthwaite theorem. The theorem is named after economist and Nobel laureate Kenneth Arrow, who demonstrated the theorem in his doctoral thesis and popularized it in his 1951 book Social Choice and Individual Values. The original paper was titled "A Difficulty in the Concept of Social Welfare".
The independence of irrelevant alternatives (IIA), also known as binary independence or the independence axiom, is an axiom of decision theory and the social sciences that describes a necessary condition for rational behavior. The axiom says that adding "pointless" (rejected) options should not affect behavior. This is sometimes explained with a short story by philosopher Sidney Morgenbesser:
Morgenbesser, ordering dessert, is told by a waitress that he can choose between blueberry or apple pie. He orders apple. Soon the waitress comes back and explains cherry pie is also an option. Morgenbesser replies "In that case, I'll have blueberry."
In mathematical optimization and decision theory, a loss function or cost function is a function that maps an event or values of one or more variables onto a real number intuitively representing some "cost" associated with the event. An optimization problem seeks to minimize a loss function. An objective function is either a loss function or its opposite, in which case it is to be maximized. The loss function could include terms from several levels of the hierarchy.
The Schulze method is an electoral system developed in 1997 by Markus Schulze that selects a single winner using votes that express preferences. The method can also be used to create a sorted list of winners. The Schulze method is also known as Schwartz Sequential dropping (SSD), cloneproof Schwartz sequential dropping (CSSD), the beatpath method, beatpath winner, path voting, and path winner. The Schulze method is a Condorcet method, which means that if there is a candidate who is preferred by a majority over every other candidate in pairwise comparisons, then this candidate will be the winner when the Schulze method is applied.
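A compact sketch of the pairwise stage of the Schulze method, assuming every ballot ranks all candidates (the ballot data in the test below are hypothetical):

```python
def schulze_winner(ballots, candidates):
    """Schulze (beatpath) winners from ranked ballots.
    Each ballot is a list of candidates, most preferred first, and is
    assumed to rank every candidate."""
    # d[x][y]: number of voters who prefer x to y.
    d = {x: {y: 0 for y in candidates} for x in candidates}
    for ballot in ballots:
        rank = {c: ballot.index(c) for c in candidates}
        for x in candidates:
            for y in candidates:
                if x != y and rank[x] < rank[y]:
                    d[x][y] += 1
    # p[x][y]: strength of the strongest path from x to y
    # (Floyd-Warshall-style relaxation over intermediate candidates).
    p = {x: {y: d[x][y] if d[x][y] > d[y][x] else 0 for y in candidates}
         for x in candidates}
    for i in candidates:
        for j in candidates:
            if j == i:
                continue
            for k in candidates:
                if k in (i, j):
                    continue
                p[j][k] = max(p[j][k], min(p[j][i], p[i][k]))
    # A winner's beatpath to every rival is at least as strong as the reverse.
    return [x for x in candidates
            if all(p[x][y] >= p[y][x] for y in candidates if y != x)]
```

Because the winner condition is evaluated over strongest paths rather than direct defeats, the method elects the Condorcet winner whenever one exists.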
In economics, an ordinal utility function is a function representing the preferences of an agent on an ordinal scale. Ordinal utility theory claims that it is only meaningful to ask which option is better than the other, but it is meaningless to ask how much better it is or how good it is. All of the theory of consumer decision-making under conditions of certainty can be, and typically is, expressed in terms of ordinal utility.
Revealed preference theory, pioneered by economist Paul Anthony Samuelson in 1938, is a method of analyzing choices made by individuals, mostly used for comparing the influence of policies on consumer behavior. Revealed preference models assume that the preferences of consumers can be revealed by their purchasing habits.
The Rasch model, named after Georg Rasch, is a psychometric model for analyzing categorical data, such as answers to questions on a reading assessment or questionnaire responses, as a function of the trade-off between the respondent's abilities, attitudes, or personality traits, and the item difficulty. For example, they may be used to estimate a student's reading ability or the extremity of a person's attitude to capital punishment from responses on a questionnaire. In addition to psychometrics and educational research, the Rasch model and its extensions are used in other areas, including the health profession, agriculture, and market research.
The law of comparative judgment was conceived by L. L. Thurstone. In modern-day terminology, it is more aptly described as a model that is used to obtain measurements from any process of pairwise comparison. Examples of such processes are the comparisons of perceived intensity of physical stimuli, such as the weights of objects, and comparisons of the extremity of an attitude expressed within statements, such as statements about capital punishment. The measurements represent how we perceive entities, rather than measurements of actual physical properties. This kind of measurement is the focus of psychometrics and psychophysics.
In economics, and in other social sciences, preference refers to an order by which an agent, while in search of an "optimal choice", ranks alternatives based on their respective utility. Preferences are evaluations that concern matters of value, in relation to practical reasoning. Individual preferences are determined by taste, need, ..., as opposed to price, availability or personal income. Classical economics assumes that people act in their best (rational) interest. In this context, rationality would dictate that, when given a choice, an individual will select an option that maximizes their self-interest. But preferences are not always transitive, both because real humans are far from always being rational and because in some situations preferences can form cycles, in which case there exists no well-defined optimal choice. An example of this is Efron dice.
In multiple criteria decision aiding (MCDA), multicriteria classification involves problems where a finite set of alternative actions should be assigned into a predefined set of preferentially ordered categories (classes). For example, credit analysts classify loan applications into risk categories, customers rate products and classify them into attractiveness groups, candidates for a job position are evaluated and their applications are approved or rejected, technical systems are prioritized for inspection on the basis of their failure risk, clinicians classify patients according to the severity of their disease, etc.
A Thurstonian model is a stochastic transitivity model with latent variables for describing the mapping of some continuous scale onto discrete, possibly ordered categories of response. In the model, each of these categories of response corresponds to a latent variable whose value is drawn from a normal distribution, independently of the other response variables and with constant variance. Developments over the last two decades, however, have led to Thurstonian models that allow unequal variance and nonzero covariance terms. Thurstonian models have been used as an alternative to generalized linear models in analysis of sensory discrimination tasks. They have also been used to model long-term memory in ranking tasks of ordered alternatives, such as the order of the amendments to the US Constitution. Their main advantage over other models of ranking tasks is that they account for non-independence of alternatives. Ennis provides a comprehensive account of the derivation of Thurstonian models for a wide variety of behavioral tasks including preferential choice, ratings, triads, tetrads, dual pair, same-different and degree of difference, ranks, first-last choice, and applicability scoring. In Chapter 7 of this book, a closed form expression, derived in 1988, is given for a Euclidean-Gaussian similarity model that provides a solution to the well-known problem that many Thurstonian models are computationally complex, often involving multiple integration. In Chapter 10, a simple form for ranking tasks is presented that only involves the product of univariate normal distribution functions and includes rank-induced dependency parameters. A theorem is proven that shows that the particular form of the dependency parameters provides the only way that this simplification is possible.
Chapter 6 links discrimination, identification and preferential choice through a common multivariate model in the form of weighted sums of central F distribution functions and allows a general variance-covariance matrix for the items.
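In the simplest special case, Thurstone's Case V (equal unit variances, zero covariances), the pairwise choice probability reduces to a closed form: the normal CDF of the scale difference divided by √2. A minimal sketch of that case:

```python
import math

def normal_cdf(z: float) -> float:
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def thurstone_case5(mu_i: float, mu_j: float) -> float:
    """Thurstone Case V: each item's momentary value is drawn from a normal
    distribution with unit variance and zero covariance, so the difference
    of two values has variance 2."""
    return normal_cdf((mu_i - mu_j) / math.sqrt(2.0))
```

Replacing the normal CDF with the logistic function recovers the BTL form discussed earlier, which is why the two models are nearly indistinguishable in practice.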
Stochastic transitivity models are stochastic versions of the transitivity property of binary relations studied in mathematics. Several models of stochastic transitivity exist and have been used to describe the probabilities involved in experiments of paired comparisons, specifically in scenarios where transitivity is expected but empirical observations of the binary relation are probabilistic. For example, players' skills in a sport might be expected to be transitive, i.e. "if player A is better than B and B is better than C, then player A must be better than C"; however, in any given match, a weaker player might still end up winning with a positive probability. Tightly matched players might have a higher chance of producing this inversion, while players with large differences in skill might see such inversions only rarely. Stochastic transitivity models formalize such relations between the probabilities and the underlying transitive relation.
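The weakest such condition, weak stochastic transitivity, can be checked directly on a table of estimated win probabilities. A minimal sketch; the probabilities below are hypothetical:

```python
from itertools import permutations

def weak_stochastic_transitivity(p, items):
    """Check weak stochastic transitivity: whenever P(a>b) >= 1/2 and
    P(b>c) >= 1/2, require P(a>c) >= 1/2. `p[(x, y)]` is the estimated
    probability that x beats y, defined for every ordered pair."""
    return all(p[(a, c)] >= 0.5
               for a, b, c in permutations(items, 3)
               if p[(a, b)] >= 0.5 and p[(b, c)] >= 0.5)

# Hypothetical win probabilities consistent with the skill ordering A > B > C.
p = {("A", "B"): 0.7, ("B", "A"): 0.3,
     ("B", "C"): 0.6, ("C", "B"): 0.4,
     ("A", "C"): 0.8, ("C", "A"): 0.2}
ok = weak_stochastic_transitivity(p, ["A", "B", "C"])  # True
```

A cyclic table (e.g. A beats B, B beats C, and C beats A, each with probability 0.7) fails the check, signalling that no underlying transitive skill ordering is consistent with the data.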