Commensurability in economics arises whenever there is a common measure through which the value of two entities can be compared.
Commensurability has two versions, strong and weak, corresponding to cardinal and ordinal measures respectively:
In mathematics, cardinal numbers, or cardinals for short, are a generalization of the natural numbers used to measure the cardinality (size) of sets. The cardinality of a finite set is a natural number: the number of elements in the set. The transfinite cardinal numbers describe the sizes of infinite sets.
In set theory, an ordinal number, or ordinal, is one generalization of the concept of a natural number that is used to describe a way to arrange a collection of objects in order, one after another. Any finite collection of objects can be put in order just by the process of counting: labeling the objects with distinct natural numbers. Ordinal numbers are thus the "labels" needed to arrange collections of objects in order.
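The cardinal/ordinal distinction can be sketched in a few lines of Python; the fruit collection here is purely illustrative and not from the source:

```python
# Illustrative only: a cardinal number measures *how many* items there are,
# while ordinal numbers label *which position* each item occupies.
fruits = ["kiwi", "apple", "mango"]

# Cardinal: the size (cardinality) of the collection.
cardinality = len(fruits)  # 3

# Ordinal: labels assigned by counting through the collection in order.
ordering = {position: item for position, item in enumerate(sorted(fruits), start=1)}
# {1: 'apple', 2: 'kiwi', 3: 'mango'}
```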
While weak commensurability is a form of strong comparability, it is distinct from weak comparability, where the fact that a comparison is valid in one context does not imply that it is so in all contexts. Issues of comparability are also different from indeterminacy: it may not be possible in certain circumstances to make a measurement, even though, if such data were available, it would be valid to compare measurements.[1]
Commensurability is a key factor in the socialist calculation debate.
The socialist calculation debate was a discourse on the subject of how a socialist economy would perform economic calculation given the absence of the law of value, money, financial prices for capital goods, and private ownership of the means of production. More specifically, the debate was centered on the application of economic planning for the allocation of the means of production as a substitute for capital markets, and whether or not such an arrangement would be superior to capitalism in terms of efficiency and productivity.
Gödel's completeness theorem is a fundamental theorem in mathematical logic that establishes a correspondence between semantic truth and syntactic provability in first-order logic. It makes a close link between model theory that deals with what is true in different models, and proof theory that studies what can be formally proven in particular formal systems.
Measurement is the assignment of a number to a characteristic of an object or event, which can be compared with other objects or events. The scope and application of measurement are dependent on the context and discipline. In the natural sciences and engineering, measurements do not apply to nominal properties of objects or events, which is consistent with the guidelines of the International vocabulary of metrology published by the International Bureau of Weights and Measures. However, in other fields such as statistics as well as the social and behavioral sciences, measurements can have multiple levels, which would include nominal, ordinal, interval and ratio scales.
In quantum mechanics, counterfactual definiteness (CFD) is the ability to speak "meaningfully" of the definiteness of the results of measurements that have not been performed. The term "counterfactual definiteness" is used in discussions of physics calculations, especially those related to the phenomenon called quantum entanglement and those related to the Bell inequalities. In such discussions "meaningfully" means the ability to treat these unmeasured results on an equal footing with measured results in statistical calculations. It is this aspect of counterfactual definiteness that is of direct relevance to physics and mathematical models of physical systems and not philosophical concerns regarding the meaning of unmeasured results.
Precision is a description of random errors, a measure of statistical variability.
Uncertainty refers to epistemic situations involving imperfect or unknown information. It applies to predictions of future events, to physical measurements that are already made, or to the unknown. Uncertainty arises in partially observable and/or stochastic environments, as well as due to ignorance, indolence, or both. It arises in any number of fields, including insurance, philosophy, physics, statistics, economics, finance, psychology, sociology, engineering, metrology, meteorology, ecology and information science.
Purchasing power parity (PPP) is a way of measuring economic variables in different countries so that irrelevant exchange rate variations do not distort comparisons. Purchasing power exchange rates are such that it would cost exactly the same number of, for example, US dollars to buy euros and then buy a basket of goods in the market as it would cost to purchase the same goods directly with dollars. The purchasing power exchange rate used in this conversion equals the ratio of the currencies' respective purchasing powers.
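The conversion described above amounts to a simple ratio; here is a minimal sketch in Python, with made-up basket prices:

```python
# Hypothetical prices: the same basket of goods costs 120 US dollars
# in the United States and 100 euros in the euro area.
basket_price_usd = 120.0
basket_price_eur = 100.0

# The PPP exchange rate is the ratio of the currencies' purchasing powers:
# the number of dollars per euro that equalises the cost of the basket.
ppp_rate_usd_per_eur = basket_price_usd / basket_price_eur  # 1.2

# Buying euros at the PPP rate and then buying the basket costs exactly
# as many dollars as buying the basket directly with dollars.
cost_via_euros = basket_price_eur * ppp_rate_usd_per_eur  # 120.0
```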
In mathematics, a ratio is a relationship between two numbers indicating how many times the first number contains the second. For example, if a bowl of fruit contains eight oranges and six lemons, then the ratio of oranges to lemons is eight to six (that is, 8:6, which is equivalent to 4:3). Similarly, the ratio of lemons to oranges is 6:8 and the ratio of oranges to the total amount of fruit is 8:14.
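The fruit-bowl ratios can be checked with Python's standard-library Fraction, which reduces each ratio to lowest terms:

```python
from fractions import Fraction

oranges, lemons = 8, 6

# Ratio of oranges to lemons: 8:6, which reduces to 4:3.
oranges_to_lemons = Fraction(oranges, lemons)

# Ratio of oranges to the total amount of fruit: 8:14, i.e. 4:7.
oranges_to_total = Fraction(oranges, oranges + lemons)
```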
Reliability in statistics and psychometrics is the overall consistency of a measure. A measure is said to have a high reliability if it produces similar results under consistent conditions. "It is the characteristic of a set of test scores that relates to the amount of random error from the measurement process that might be embedded in the scores. Scores that are highly reliable are accurate, reproducible, and consistent from one testing occasion to another. That is, if the testing process were repeated with a group of test takers, essentially the same results would be obtained. Various kinds of reliability coefficients, with values ranging between 0.00 and 1.00, are usually used to indicate the amount of error in the scores." For example, measurements of people's height and weight are often extremely reliable.
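One common reliability coefficient of the kind described is the correlation between scores from two testing occasions (test-retest reliability); the following sketch uses invented scores and a hand-rolled Pearson correlation:

```python
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation, used here as a test-retest reliability coefficient."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical scores for five test takers on two testing occasions.
occasion_1 = [10, 12, 14, 16, 18]
occasion_2 = [11, 12, 15, 15, 19]

# A coefficient near 1.00 indicates little random error in the scores.
reliability = pearson(occasion_1, occasion_2)
```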
In the social sciences, scaling is the process of measuring or ordering entities with respect to quantitative attributes or traits. For example, a scaling technique might involve estimating individuals' levels of extraversion, or the perceived quality of products. Certain methods of scaling permit estimation of magnitudes on a continuum, while other methods provide only for relative ordering of the entities.
Experimental psychology refers to work done by those who apply experimental methods to psychological study and the processes that underlie it. Experimental psychologists employ human participants and animal subjects to study a great many topics, including sensation and perception, memory, cognition, learning, motivation, emotion, developmental processes, social psychology, and the neural substrates of all of these.
Income inequality metrics or income distribution metrics are used by social scientists to measure the distribution of income and economic inequality among the participants in a particular economy, such as that of a specific country or of the world in general. While different theories may try to explain how income inequality comes about, income inequality metrics simply provide a system of measurement used to determine the dispersion of incomes. The concept of inequality is distinct from poverty and fairness.
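One widely used metric of this kind is the Gini coefficient; the source does not single out any particular metric, so the sketch below, with hypothetical incomes, is purely illustrative:

```python
def gini(incomes):
    """Gini coefficient: 0.0 means perfect equality, values near 1.0 mean
    income is concentrated in few hands. Uses the sorted-order identity for
    the sum of absolute pairwise differences."""
    xs = sorted(incomes)
    n = len(xs)
    total = sum(xs)
    weighted = sum((2 * i - n + 1) * x for i, x in enumerate(xs))
    return weighted / (n * total)

# Hypothetical economies of four participants each.
equal = gini([50, 50, 50, 50])    # 0.0: everyone earns the same
unequal = gini([0, 0, 0, 100])    # 0.75: one person holds all income
```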
Quantity is a property that can exist as a multitude or magnitude. Quantities can be compared in terms of "more", "less", or "equal", or by assigning a numerical value in terms of a unit of measurement. Quantity is among the basic classes of things along with quality, substance, change, and relation. Some quantities are such by their inner nature, while others are functioning as states of things such as heavy and light, long and short, broad and narrow, small and great, or much and little.
Commensurability is a concept in the philosophy of science whereby scientific theories are commensurable if scientists can discuss them using a shared nomenclature that allows direct comparison of theories to determine which theory is more valid or useful. On the other hand, theories are incommensurable if they are embedded in starkly contrasting conceptual frameworks whose languages do not overlap sufficiently to permit scientists to directly compare the theories or to cite empirical evidence favoring one theory over the other. Discussed by Ludwik Fleck in the 1930s, and popularized by Thomas Kuhn in the 1960s, the problem of incommensurability results in scientists talking past each other, as it were, while comparison of theories is muddled by confusions about terms, contexts and consequences.
Observational equivalence is the property of two or more underlying entities being indistinguishable on the basis of their observable implications. Thus, for example, two scientific theories are observationally equivalent if all of their empirically testable predictions are identical, in which case empirical evidence cannot be used to distinguish which is closer to being correct; indeed, it may be that they are actually two different perspectives on one underlying theory.
In machine learning and statistics, classification is the problem of identifying to which of a set of categories (sub-populations) a new observation belongs, on the basis of a training set of data containing observations whose category membership is known. Examples are assigning a given email to the "spam" or "non-spam" class, and assigning a diagnosis to a given patient based on observed characteristics of the patient. Classification is an example of pattern recognition.
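The spam example can be made concrete with a deliberately tiny classifier; this 1-nearest-neighbour sketch on shared words is illustrative only, not a production method:

```python
def classify(message, training_set):
    """Assign a new observation the class of the most similar training
    example, where similarity is the number of shared words."""
    words = set(message.lower().split())
    best_label, best_overlap = None, -1
    for text, label in training_set:
        overlap = len(words & set(text.lower().split()))
        if overlap > best_overlap:
            best_label, best_overlap = label, overlap
    return best_label

# Training set: observations whose category membership is known.
training = [
    ("win a free prize now", "spam"),
    ("claim your free money", "spam"),
    ("meeting agenda for monday", "non-spam"),
    ("project report attached", "non-spam"),
]

label_1 = classify("free prize money", training)        # "spam"
label_2 = classify("monday project meeting", training)  # "non-spam"
```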
Two concepts or things are commensurable if they are measurable or comparable by a common standard.
In quantum mechanics, the Kochen–Specker (KS) theorem, also known as the Bell–Kochen–Specker theorem, is a "no-go" theorem proved by John S. Bell in 1966 and by Simon B. Kochen and Ernst Specker in 1967. It places certain constraints on the permissible types of hidden-variable theories, which try to explain the predictions of quantum mechanics in a context-independent way. The version of the theorem proved by Kochen and Specker also gave an explicit example for this constraint in terms of a finite number of state vectors. The theorem is a complement to Bell's theorem.
The rational planning model is a model of the planning process involving a number of rational actions or steps; Taylor (1998) outlines five such steps.
The SCOP formalism or State Context Property formalism is an abstract mathematical formalism for describing states of a system that generalizes both quantum and classical descriptions. The formalism describes entities, which may exist in different states, which in turn have various properties. In addition there is a set of "contexts" by which an entity may be observed. The formalism has primarily found use outside of physics as a theory of concepts, in particular in the field of quantum cognition, which develops quantum-like models of cognitive phenomena that may seem paradoxical or irrational when viewed from a perspective of classical states and logic.
In computer programming, programming languages are often colloquially classified as to whether the language's type system makes it strongly typed or weakly typed. Generally, a strongly typed language has stricter typing rules at compile time, which implies that errors and exceptions are more likely to happen during compilation. Most of these rules affect variable assignment, return values and function calling. On the other hand, a weakly typed language has looser typing rules and may produce unpredictable results or may perform implicit type conversion at runtime. A different but related concept is latent typing.
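Python, for example, is usually described as strongly typed: mixing incompatible types raises an error rather than triggering an implicit conversion, whereas a weakly typed language such as JavaScript evaluates "1" + 1 to "11":

```python
# Python refuses to add a string and an integer implicitly...
try:
    result = "1" + 1          # raises TypeError: no implicit conversion
except TypeError:
    result = "1" + str(1)     # ...the conversion must be made explicit
# result == "11"
```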