MaxDiff is a long-established theory in mathematical psychology with very specific assumptions about how people make choices: it assumes that respondents evaluate all possible pairs of items within the displayed set and choose the pair that reflects the maximum difference in preference or importance. [1] It may be thought of as a variation of the method of paired comparisons. Consider a set in which a respondent evaluates four items: A, B, C and D. If the respondent says that A is best and D is worst, these two responses inform us on five of the six possible implied paired comparisons: A > B, A > C, A > D, B > D and C > D.
The only paired comparison that cannot be inferred is B vs. C. Thus, in a choice among four items such as the one above, MaxDiff questioning informs on five of the six implied paired comparisons; in a choice among five items, it informs on seven of the ten implied paired comparisons.
The proportion of relations between items that becomes known can be expressed mathematically as (2N − 3) / (N(N − 1)/2), where N is the total number of items: a single best-worst answer implies 2N − 3 of the N(N − 1)/2 possible paired comparisons. The formula makes it clear that the coverage of this method drastically decreases as N grows, as the sketch below illustrates.
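A minimal Python sketch of this arithmetic (illustrative only; the function name is ours):

```python
# Fraction of implied paired comparisons recovered from a single
# best-worst answer over a set of N displayed items.
def implied_fraction(n: int) -> float:
    known = 2 * n - 3            # pairs implied by one best and one worst choice
    total = n * (n - 1) // 2     # all possible unordered pairs
    return known / total

for n in range(4, 9):
    print(n, f"{implied_fraction(n):.2f}")
# 4 -> 0.83, 5 -> 0.70, 6 -> 0.60, 7 -> 0.52, 8 -> 0.46
```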
In 1938 Richardson [2] introduced a choice method in which subjects reported the most alike pair of a triad and the most different pair. The component of this method involving the most different pair may be properly called "MaxDiff" in contrast to a "most-least" or "best-worst" method where both the most different pair and the direction of difference are obtained. Ennis, Mullen and Frijters (1988) [3] derived a unidimensional Thurstonian scaling model for Richardson's method of triads so that the results could be scaled under normality assumptions about the item percepts.
MaxDiff may involve multidimensional percepts, unlike most-least models that assume a unidimensional representation. MaxDiff and most-least methods belong to a class of methods that do not require the estimation of a cognitive parameter, as occurs in the analysis of ratings data. This is one of the reasons for their popularity in applications. Other methods in this class include the 2- and 3-alternative forced-choice methods, the triangular method (a special case of Richardson's method), the duo-trio method, and the specified and unspecified methods of tetrads. All of these methods have well-developed Thurstonian scaling models, as discussed in Ennis (2016), [4] which also includes a Thurstonian model for first-last or most-least choice and for ranks with rank-induced dependencies. There are a number of possible processes through which subjects may make a most-least decision, including paired comparisons and ranking, but it is typically not known how the decision is reached.
MaxDiff and best–worst scaling (BWS or "MaxDiff surveys") have erroneously been considered synonyms. [5] Respondents can produce best-worst data in any of a number of ways, with a MaxDiff process being but one. Instead of evaluating all possible pairs (the MaxDiff model), they might choose the best from n items and the worst from the remaining n − 1, or vice versa (sequential models). Or they may use another method entirely. Thus it should be clear that MaxDiff is a subset of BWS: MaxDiff is BWS, but BWS is not necessarily MaxDiff. Indeed, MaxDiff might not be considered an attractive model on psychological and intuitive grounds: as the number of items increases, the number of possible pairs increases quadratically, with n items producing n(n − 1) ordered pairs (where best-worst order matters). Assuming respondents evaluate all possible pairs is a strong assumption. Early work did use the term MaxDiff to refer to BWS, but with Marley's return to the field, [6] correct academic terminology has been disseminated in some parts of the world.
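To make the strength of that assumption concrete, here is a hedged sketch of the literal MaxDiff process (the latent utilities are made-up numbers): every ordered (best, worst) pair is scored and the pair with the maximum utility difference is chosen.

```python
from itertools import permutations

# Hypothetical latent utilities for four displayed items (illustrative values).
utilities = {"A": 2.0, "B": 1.2, "C": 0.9, "D": -0.5}

# The literal MaxDiff model: evaluate all n(n-1) ordered (best, worst)
# pairs and choose the one with the largest utility difference.
pairs = list(permutations(utilities, 2))
best, worst = max(pairs, key=lambda p: utilities[p[0]] - utilities[p[1]])
print(len(pairs), best, worst)   # 12 ordered pairs; A chosen best, D worst
```

A sequential model, by contrast, would need only n − 1 comparisons to pick the best item and n − 2 more to pick the worst from the remainder.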
A longest common subsequence (LCS) is the longest subsequence common to all sequences in a set of sequences. It differs from the longest common substring: unlike substrings, subsequences are not required to occupy consecutive positions within the original sequences. The problem of computing longest common subsequences is a classic computer science problem, the basis of data comparison programs such as the diff utility, and has applications in computational linguistics and bioinformatics. It is also widely used by revision control systems such as Git for reconciling multiple changes made to a revision-controlled collection of files.
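As a concrete illustration (a minimal sketch, not how diff itself is implemented), the textbook dynamic-programming solution for two sequences:

```python
# Classic dynamic-programming solution for the longest common
# subsequence of two sequences; O(len(a) * len(b)) time and space.
def lcs(a: str, b: str) -> str:
    m, n = len(a), len(b)
    dp = [[""] * (n + 1) for _ in range(m + 1)]   # dp[i][j] = LCS of a[:i], b[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + a[i - 1]
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1], key=len)
    return dp[m][n]

print(lcs("XMJYAUZ", "MZJAWXU"))  # -> "MJAU"
```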
In the social sciences, scaling is the process of measuring or ordering entities with respect to quantitative attributes or traits. For example, a scaling technique might involve estimating individuals' levels of extraversion, or the perceived quality of products. Certain methods of scaling permit estimation of magnitudes on a continuum, while other methods provide only for relative ordering of the entities.
Questionnaire construction refers to the design of a questionnaire to gather statistically useful information about a given topic. When properly constructed and responsibly administered, questionnaires can provide valuable data about any given subject.
Conjoint analysis is a survey-based statistical technique used in market research that helps determine how people value different attributes that make up an individual product or service.
In psychometrics, item response theory (IRT) is a paradigm for the design, analysis, and scoring of tests, questionnaires, and similar instruments measuring abilities, attitudes, or other variables. It is a theory of testing based on the relationship between individuals' performances on a test item and the test takers' levels of performance on an overall measure of the ability that item was designed to measure. Several different statistical models are used to represent both item and test taker characteristics. Unlike simpler alternatives for creating scales and evaluating questionnaire responses, it does not assume that each item is equally difficult. This distinguishes IRT from, for instance, Likert scaling, in which "All items are assumed to be replications of each other or in other words items are considered to be parallel instruments". By contrast, item response theory treats the difficulty of each item as information to be incorporated in scaling items.
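For instance, under the widely used two-parameter logistic (2PL) model (a sketch; the parameter values here are invented), item difficulty enters the response probability directly:

```python
import math

# Two-parameter logistic (2PL) item response function: the probability
# that a test taker with ability theta answers an item correctly, given
# the item's discrimination a and difficulty b.
def p_correct(theta: float, a: float, b: float) -> float:
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# An easy item (b = -1) and a hard item (b = +1), same discrimination:
for b in (-1.0, 1.0):
    print(b, round(p_correct(theta=0.0, a=1.5, b=b), 2))
# -1.0 -> 0.82, 1.0 -> 0.18: the items are clearly not equally difficult
```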
A Likert scale is a psychometric scale named after its inventor, American social psychologist Rensis Likert, which is commonly used in research questionnaires. It is the most widely used approach to scaling responses in survey research, such that the term is often used interchangeably with rating scale, although there are other types of rating scales.
In computer science, a selection algorithm is an algorithm for finding the kth smallest value in a collection of ordered values, such as numbers. The value that it finds is called the kth order statistic. Selection includes as special cases the problems of finding the minimum, median, and maximum element in the collection. Selection algorithms include quickselect and the median of medians algorithm. When applied to a collection of n values, these algorithms take linear time, O(n), as expressed using big O notation. For data that is already structured, faster algorithms may be possible; as an extreme case, selection in an already-sorted array takes time O(1).
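A minimal quickselect sketch (illustrative and randomized, not a production routine):

```python
import random

# Quickselect: find the k-th smallest element (0-indexed) in expected
# linear time by partitioning around a randomly chosen pivot.
def quickselect(values, k):
    pivot = random.choice(values)
    less = [v for v in values if v < pivot]
    equal = [v for v in values if v == pivot]
    greater = [v for v in values if v > pivot]
    if k < len(less):
        return quickselect(less, k)
    if k < len(less) + len(equal):
        return pivot
    return quickselect(greater, k - len(less) - len(equal))

data = [9, 1, 7, 3, 8, 5]
print(quickselect(data, len(data) // 2))  # 3rd order statistic (0-indexed) -> 7
```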
A personality test is a method of assessing human personality constructs. Most personality assessment instruments are in fact introspective self-report questionnaire measures or reports from life records (L-data) such as rating scales. Attempts to construct actual performance tests of personality have been very limited, even though Raymond Cattell and his colleague Frank Warburton compiled a list of over 2,000 separate objective tests that could be used in constructing objective personality tests. One exception, however, was the Objective-Analytic Test Battery, a performance test designed to quantitatively measure 10 factor-analytically discerned personality trait dimensions. A major problem with both L-data and Q-data methods is that, because of item transparency, rating scales and self-report questionnaires are highly susceptible to motivational and response distortion, ranging from lack of adequate self-insight to outright dissimulation, depending on the reason/motivation for the assessment being undertaken.
In the analysis of multivariate observations designed to assess subjects with respect to an attribute, a Guttman scale is a single (unidimensional) ordinal scale for the assessment of the attribute, from which the original observations may be reproduced. The discovery of a Guttman scale in data depends on their multivariate distribution's conforming to a particular structure. Hence, a Guttman scale is a hypothesis about the structure of the data, formulated with respect to a specified attribute and a specified population, and cannot be constructed for any given set of observations. Contrary to a widespread belief, a Guttman scale is not limited to dichotomous variables and does not necessarily determine an order among the variables. But if the variables are all dichotomous, they are indeed ordered by their sensitivity in recording the assessed attribute, as illustrated by the sketch below.
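For the dichotomous case, a small illustrative check (a sketch of the prefix-pattern characterization; the data are made up):

```python
# Check whether dichotomous response data form a perfect Guttman scale:
# after ordering items from most to least frequently endorsed, every
# respondent's positive responses must form a prefix of that order.
def is_guttman(responses):
    n_items = len(responses[0])
    order = sorted(range(n_items), key=lambda j: -sum(r[j] for r in responses))
    for r in responses:
        pattern = [r[j] for j in order]
        if any(a < b for a, b in zip(pattern, pattern[1:])):
            return False   # a harder item endorsed without an easier one
    return True

# Three respondents, three hierarchically ordered items (made-up data):
print(is_guttman([(1, 1, 1), (1, 1, 0), (1, 0, 0)]))  # -> True
```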
Pairwise comparison generally is any process of comparing entities in pairs to judge which of the two is preferred or has a greater amount of some quantitative property, or whether the two entities are identical. The method of pairwise comparison is used in the scientific study of preferences, attitudes, voting systems, social choice, public choice, requirements engineering and multiagent AI systems. In psychology literature, it is often referred to as paired comparison.
Acquiescence bias, also known as agreement bias, is a category of response bias common to survey research in which respondents have a tendency to select a positive response option or indicate a positive connotation disproportionately more frequently. Respondents do so without considering the content of the question or their 'true' preference. Acquiescence is sometimes referred to as "yea-saying" and is the tendency of a respondent to agree with a statement when in doubt. Questions affected by acquiescence bias take the following format: a stimulus in the form of a statement is presented, followed by 'agree/disagree,' 'yes/no' or 'true/false' response options. For example, a respondent might be presented with the statement "gardening makes me feel happy," and would then be expected to select either 'agree' or 'disagree.' Such question formats are favoured by both survey designers and respondents because they are straightforward to produce and respond to. The bias is particularly prevalent in the case of surveys or questionnaires that employ truisms as the stimuli, such as: "It is better to give than to receive" or "Never a lender nor a borrower be". Acquiescence bias can introduce systematic errors that affect the validity of research by confounding attitudes and behaviours with the general tendency to agree, which can result in misguided inference. Research suggests that the proportion of respondents who carry out this behaviour is between 10% and 20%.
Choice modelling attempts to model the decision process of an individual or segment via revealed preferences or stated preferences made in a particular context or contexts. Typically, it attempts to use discrete choices in order to infer positions of the items on some relevant latent scale. Indeed, many alternative models exist in econometrics, marketing, sociometrics and other fields, including utility maximization, optimization applied to consumer theory, and a plethora of other identification strategies which may be more or less accurate depending on the data, sample, hypothesis and the particular decision being modelled. In addition, choice modelling is regarded as the most suitable method for estimating consumers' willingness to pay for quality improvements in multiple dimensions.
The theory of conjoint measurement is a general, formal theory of continuous quantity. It was independently discovered by the French economist Gérard Debreu (1960) and by the American mathematical psychologist R. Duncan Luce and statistician John Tukey.
Anton K. Formann was an Austrian research psychologist, statistician, and psychometrician. He is renowned for his contributions to item response theory, latent class analysis, the measurement of change, mixture models, categorical data analysis, and quantitative methods for research synthesis (meta-analysis).
A Thurstonian model is a stochastic transitivity model with latent variables for describing the mapping of some continuous scale onto discrete, possibly ordered categories of response. In the model, each of these categories of response corresponds to a latent variable whose value is drawn from a normal distribution, independently of the other response variables and with constant variance. Developments over the last two decades, however, have led to Thurstonian models that allow unequal variance and nonzero covariance terms. Thurstonian models have been used as an alternative to generalized linear models in the analysis of sensory discrimination tasks. They have also been used to model long-term memory in ranking tasks of ordered alternatives, such as the order of the amendments to the US Constitution. Their main advantage over other models of ranking tasks is that they account for non-independence of alternatives. Ennis provides a comprehensive account of the derivation of Thurstonian models for a wide variety of behavioral tasks, including preferential choice, ratings, triads, tetrads, dual pair, same-different and degree of difference, ranks, first-last choice, and applicability scoring. In Chapter 7 of that book, a closed-form expression, derived in 1988, is given for a Euclidean-Gaussian similarity model that provides a solution to the well-known problem that many Thurstonian models are computationally complex, often involving multiple integration. In Chapter 10, a simple form for ranking tasks is presented that involves only the product of univariate normal distribution functions and includes rank-induced dependency parameters. A theorem is proven showing that this particular form of the dependency parameters is the only way the simplification is possible. Chapter 6 links discrimination, identification and preferential choice through a common multivariate model in the form of weighted sums of central F distribution functions, and allows a general variance-covariance matrix for the items.
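A minimal Monte-Carlo sketch of the simplest setup (Thurstone's Case V: equal variances, zero covariances; the scale values are made up):

```python
import random

# Thurstone Case V sketch: each item's momentary percept is drawn from
# a normal distribution around its mean scale value; a paired
# comparison simply picks the item with the larger percept.
means = {"A": 1.0, "B": 0.4}   # hypothetical scale values

def choice_proportion(i, j, trials=100_000):
    wins = sum(
        random.gauss(means[i], 1.0) > random.gauss(means[j], 1.0)
        for _ in range(trials)
    )
    return wins / trials

# Approximates Phi((mu_A - mu_B) / sqrt(2)) ~= 0.66 for these values.
print(round(choice_proportion("A", "B"), 2))
```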
Best–worst scaling (BWS) techniques involve choice modelling and were invented by Jordan Louviere in 1987 while on the faculty at the University of Alberta. In general with BWS, survey respondents are shown a subset of items from a master list and are asked to indicate the best and worst items. The task is repeated a number of times, varying the particular subset of items in a systematic way, typically according to a statistical design. Analysis is typically conducted, as with discrete choice experiments (DCEs) more generally, assuming that respondents make choices according to a random utility model (RUM). RUMs assume that an estimate of how much a respondent prefers item A over item B is provided by how often item A is chosen over item B in repeated choices. Thus, choice frequencies estimate the utilities on the relevant latent scale. BWS essentially aims to provide more choice information at the lower end of this scale without having to ask additional questions that are specific to lower-ranked items.
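As a hedged illustration of the counting intuition (the responses below are made up; real analyses typically fit a RUM such as a multinomial logit), simple best-minus-worst scores give a quick estimate of position on the latent scale:

```python
from collections import Counter

# Simplest BWS scoring: best-minus-worst counts per item, a rough
# stand-in for position on the latent utility scale.
responses = [  # (best, worst) chosen from each displayed subset
    ("A", "D"), ("A", "C"), ("B", "D"), ("A", "D"), ("B", "C"),
]
scores = Counter()
for best, worst in responses:
    scores[best] += 1
    scores[worst] -= 1

print(scores.most_common())  # [('A', 3), ('B', 2), ('C', -2), ('D', -3)]
```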
Delroy L. Paulhus is a personality psychology researcher and professor. He received his doctorate from Columbia University and has worked at the University of California, Berkeley and the University of California, Davis. Currently, Paulhus is a professor of psychology at the University of British Columbia in Vancouver, Canada, where he teaches undergraduate and graduate courses. He is best known as the co-creator of the dark triad, along with fellow researcher Kevin Williams.
The Mokken scale is a psychometric method of data reduction. A Mokken scale is a unidimensional scale that consists of hierarchically-ordered items that measure the same underlying, latent concept. This method is named after the political scientist Rob Mokken who suggested it in 1971.
The theory of basic human values is a theory of cross-cultural psychology and universal values that was developed by Shalom H. Schwartz. The theory extends previous cross-cultural communication frameworks such as Hofstede's cultural dimensions theory. Schwartz identifies ten basic human values, each distinguished by their underlying motivation or goal, and he explains how people in all cultures recognize them. There are two major methods for measuring these ten basic values: the Schwartz Value Survey and the Portrait Values Questionnaire.
Balanced number partitioning is a variant of multiway number partitioning in which there are constraints on the number of items allocated to each set. The input to the problem is a set of n items of different sizes, and two integers m, k. The output is a partition of the items into m subsets, such that the number of items in each subset is at most k. Subject to this, it is required that the sums of sizes in the m subsets are as similar as possible.
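A greedy heuristic sketch (not an optimal algorithm; the data are illustrative) shows the cardinality constraint in action:

```python
# Greedy heuristic for balanced number partitioning: place each item
# (largest first) into the subset with the smallest current sum that
# still has room under the cardinality cap k.
def balanced_partition(sizes, m, k):
    subsets = [[] for _ in range(m)]
    for size in sorted(sizes, reverse=True):
        open_subsets = [s for s in subsets if len(s) < k]
        min(open_subsets, key=sum).append(size)
    return subsets

# Six items into m = 2 subsets of at most k = 3 items each:
print(balanced_partition([8, 7, 6, 5, 4, 1], m=2, k=3))
# -> [[8, 5, 4], [7, 6, 1]], sums 17 and 14 (optimal here would be 16 and 15)
```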