In the analysis of multivariate observations designed to assess subjects with respect to an attribute, a Guttman scale (named after Louis Guttman) is a single (unidimensional) ordinal scale for the assessment of the attribute, from which the original observations may be reproduced. The discovery of a Guttman scale in data depends on the multivariate distribution's conforming to a particular structure (see below). Hence, a Guttman scale is a hypothesis about the structure of the data, formulated with respect to a specified attribute and a specified population; it cannot be constructed for an arbitrary set of observations. Contrary to a widespread belief, a Guttman scale is not limited to dichotomous variables and does not necessarily determine an order among the variables. But if the variables are all dichotomous, they are indeed ordered by their sensitivity in recording the assessed attribute, as illustrated by Example 1.
Example 1: Dichotomous variables
A Guttman scale may be hypothesized for the following five questions that concern the attribute "acceptance of social contact with immigrants" (based on the Bogardus social distance scale), presented to a suitable population:
A positive response by a particular respondent to any question in this list suggests positive responses by that respondent to all preceding questions in the list. Hence one could expect to obtain only the responses listed in the shaded part (columns 1–5) of Table 1.
Every row in the shaded part of Table 1 (columns 1–5) is the response profile of any number (≥ 0) of respondents. Every profile in this table indicates acceptance of immigrants in all senses indicated by the previous profile, plus an additional sense in which immigrants are accepted. If, in a large number of observations, only the profiles listed in Table 1 are observed, then the Guttman scale hypothesis is supported, and the values of the scale (last column of Table 1) have the following properties:
A Guttman scale, if supported by the data, is useful for efficiently assessing subjects (respondents, test-takers, or any collection of investigated objects) on a one-dimensional scale with respect to the specified attribute. Typically, Guttman scales are found for attributes that are narrowly defined.
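To make the cumulative pattern of Example 1 concrete, the following sketch (in Python; the function name and layout are illustrative, not from the original study) enumerates the only response profiles a perfect Guttman scale permits for five dichotomous items, together with their scale scores:

```python
# A sketch (not from the article): the only profiles a perfect Guttman
# scale permits for five dichotomous items, where a positive answer (1)
# to any item implies positive answers to all preceding items.

def perfect_profiles(n_items):
    """All permitted profiles: k leading 1s followed by 0s, k = 0..n."""
    return [tuple(1 if j < k else 0 for j in range(n_items))
            for k in range(n_items + 1)]

for profile in perfect_profiles(5):
    print(profile, "-> scale score:", sum(profile))
# Of the 2**5 = 32 conceivable profiles, only these 6 conform to the scale.
```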
While other scaling techniques (e.g., the Likert scale) produce a single scale by summing respondents' scores, a procedure that assumes, often without justification, that all observed variables have equal weights, the Guttman scale avoids weighting the observed variables altogether, thus taking the data as they are. If a Guttman scale is confirmed, the measurement of the attribute is intrinsically one-dimensional; the unidimensionality is not forced by summation or averaging. This feature makes it appropriate for the construction of replicable scientific theories and meaningful measurements, as explicated in facet theory.
Given a data set of N subjects observed with respect to n ordinal variables, each having some finite number (≥ 2) of numerical categories ordered by increasing strength of a pre-specified attribute, let $a_{ij}$ be the score obtained by subject i on variable j, and define the list of scores that subject i obtained on the n variables, $a_i = (a_{i1}, \ldots, a_{in})$, to be the profile of subject i. (The number of categories may differ between variables, and the order of the variables within the profiles is immaterial but must be fixed.)
Define:
Two profiles $a_s$ and $a_t$ are equal, denoted $a_s = a_t$, iff $a_{sj} = a_{tj}$ for all $j = 1, \ldots, n$.
Profile $a_s$ is greater than profile $a_t$, denoted $a_s > a_t$, iff $a_{sj} \geq a_{tj}$ for all $j = 1, \ldots, n$ and $a_{sj'} > a_{tj'}$ for at least one variable $j'$.
Profiles $a_s$ and $a_t$ are comparable, denoted $a_s S a_t$, iff $a_s = a_t$, or $a_s > a_t$, or $a_t > a_s$.
Profiles $a_s$ and $a_t$ are incomparable, denoted $a_s \$ a_t$, iff they are not comparable (that is, $a_{sj'} > a_{tj'}$ for at least one variable $j'$, and $a_{tj''} > a_{sj''}$ for at least one other variable $j''$).
For data sets in which the categories of all variables are numerically ordered in the same direction (from low to high or from high to low) with respect to a given attribute, a Guttman scale is defined simply as follows:
Definition: A Guttman scale is a data set in which all pairs of profiles are comparable.
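The definitions above translate directly into code. The following is a minimal sketch (function names are my own) of componentwise comparability and of the resulting test for a Guttman scale:

```python
from itertools import combinations

def comparable(a, b):
    """True iff a = b, a > b, or b > a in the componentwise order."""
    return (all(x >= y for x, y in zip(a, b))
            or all(x <= y for x, y in zip(a, b)))

def is_guttman_scale(profiles):
    """True iff every pair of profiles in the data set is comparable."""
    return all(comparable(a, b) for a, b in combinations(profiles, 2))

print(is_guttman_scale([(1, 1, 1), (2, 1, 1), (2, 2, 1)]))  # True
print(is_guttman_scale([(1, 2, 1), (2, 1, 1)]))             # False (incomparable pair)
```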
Consider the following four variables that assess arithmetic skills among a population P of pupils:
V1: Can pupil (p) perform addition of numbers? No=1; Yes, but only of two-digit numbers=2; Yes=3.
V2: Does pupil (p) know the (1-10) multiplication table? No=1; Yes=2.
V3: Can pupil (p) perform multiplication of numbers? No=1; Yes, but only of two-digit numbers=2; Yes=3.
V4: Can pupil (p) perform long division? No=1; Yes=2.
Data collected for the above four variables among a population of school children may be hypothesized to exhibit the Guttman scale shown below in Table 2:
Table 2. Data of the four ordinal arithmetic skill variables are hypothesized to form a Guttman scale
V1 | V2 | V3 | V4 | Possible scale score
1 | 1 | 1 | 1 | 4
2 | 1 | 1 | 1 | 5
2 | 2 | 1 | 1 | 6
3 | 2 | 1 | 1 | 7
3 | 2 | 2 | 1 | 8
3 | 2 | 3 | 1 | 9
3 | 2 | 3 | 2 | 10
The set of profiles hypothesized to occur (the shaded part of Table 2) illustrates the defining feature of a Guttman scale, namely that every pair of profiles is comparable. Here too, if the hypothesis is confirmed, a single scale score reproduces a subject's responses on all the observed variables.
Any ordered set of numbers could serve as the scale. In this illustration the sum of the profile scores is used. According to facet theory, such a summation is justified only for data that conform to a Guttman scale.
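As a check, the profiles hypothesized in Table 2 are indeed pairwise comparable, and summing each profile reproduces the scale scores in the last column. A small self-contained sketch (names are my own):

```python
from itertools import combinations

def comparable(a, b):
    return (all(x >= y for x, y in zip(a, b))
            or all(x <= y for x, y in zip(a, b)))

# The seven profiles hypothesized in Table 2:
table2 = [(1, 1, 1, 1), (2, 1, 1, 1), (2, 2, 1, 1), (3, 2, 1, 1),
          (3, 2, 2, 1), (3, 2, 3, 1), (3, 2, 3, 2)]

assert all(comparable(a, b) for a, b in combinations(table2, 2))
print([sum(p) for p in table2])  # [4, 5, 6, 7, 8, 9, 10], matching Table 2
```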
In practice, perfect ("deterministic") Guttman scales are rare, but approximate ones have been found in specific populations with respect to attributes such as religious practices, narrowly defined domains of knowledge, specific skills, and ownership of household appliances.[1] When data do not conform to a Guttman scale, they may represent a Guttman scale with noise (and be treated stochastically[1]), or they may have a more complex structure that requires multiple scaling to identify the scales intrinsic to them.
The extent to which a data set conforms to a Guttman scale can be estimated from the coefficient of reproducibility,[2][3] of which there are several versions, depending on statistical assumptions and limitations. Guttman's original coefficient of reproducibility, CR, is simply 1 minus the ratio of the number of errors to the number of entries in the data set. To ensure that there is a range of responses (which is not the case if all respondents endorse only one item), the coefficient of scalability is used.[4]
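Below is a minimal sketch of CR for dichotomous items, assuming the items are already ordered by difficulty and counting a profile's errors as the minimum number of responses that must change to reach a perfect scale pattern (this error-counting rule is only one of the several versions mentioned above, and the data are made up):

```python
def errors(profile):
    """Minimum changes needed to reach some perfect pattern (k 1s then 0s)."""
    n = len(profile)
    ideals = [[1] * k + [0] * (n - k) for k in range(n + 1)]
    return min(sum(p != q for p, q in zip(profile, ideal)) for ideal in ideals)

def coefficient_of_reproducibility(profiles):
    """CR = 1 - (total errors) / (total entries)."""
    n_entries = sum(len(p) for p in profiles)
    return 1 - sum(errors(p) for p in profiles) / n_entries

data = [(1, 1, 0, 0, 0), (0, 1, 0, 1, 0), (1, 1, 1, 1, 1)]  # made-up responses
print(coefficient_of_reproducibility(data))  # 1 - 2/15 = 0.866...
```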
Guttman scaling contains the beginnings of item response theory, which, in contrast to classical test theory, acknowledges that the items of a questionnaire do not all have the same level of difficulty. Non-deterministic (i.e., stochastic) models have since been developed, such as the Mokken scale and the Rasch model. The Guttman scale has also been generalized into the theory and procedures of "multiple scaling", which identifies the minimum number of scales needed for satisfactory reproducibility.
As a procedure that ties substantive content to logical aspects of data, the Guttman scale heralded the advent of facet theory, developed by Louis Guttman and his associates.
Guttman's original definition of a scale[3] also allows for the exploratory scaling analysis of qualitative variables (nominal variables, or ordinal variables that do not necessarily belong to a pre-specified common attribute). This definition of the Guttman scale relies on the prior definition of a simple function.
For a totally ordered set X, say $1, 2, \ldots, m$, and another finite set Y with k elements, $k \leq m$, a function from X to Y is a simple function if X can be partitioned into k intervals that are in one-to-one correspondence with the values of Y.
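A small illustration with a hypothetical checker (not from the source): a function is simple exactly when each of its values occupies a single consecutive interval of X, that is, when the function never returns to a value after leaving it.

```python
def is_simple(values):
    """values[i] = f(i+1) for X = 1..m; simple iff each value forms one run."""
    seen = []
    for v in values:
        if seen and v == seen[-1]:
            continue                # still inside the current interval
        if v in seen:               # value recurs after an intervening value
            return False
        seen.append(v)
    return True

print(is_simple(['a', 'a', 'b', 'c', 'c']))  # True: intervals {1,2}, {3}, {4,5}
print(is_simple(['a', 'b', 'a']))            # False: 'a' recurs
```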
A Guttman scale may then be defined for a data set of n variables, the jth variable having $k_j$ (qualitative, not necessarily ordered) categories, as follows:
Definition: A Guttman scale is a data set for which there exists an ordinal variable X with a finite number m of categories, say $1, \ldots, m$, with $m \geq \max_j(k_j)$, and a permutation of the subjects' profiles, such that each variable in the data set is a simple function of X.
Despite its seeming elegance and appeal for exploratory research, this definition has not been sufficiently studied or applied.