Predispositioning theory, in the field of decision theory and systems theory, is a theory focusing on the stages between a complete order and a complete disorder.
Predispositioning theory was founded by Aron Katsenelinboigen (1927–2005), a professor at the Wharton School who studied indeterministic systems such as chess, business, and economics, and who made a significant contribution to the elaboration of styles and methods of decision-making.
According to Katsenelinboigen, a system develops gradually, going through several stages, starting with incomplete and inconsistent linkages between its elements and ending with complete and consistent ones.
"Mess. The zero phase can be called a mess because it contains no linkages between the system's elements. Such a definition of mess as ‘a disorderly, un-tidy, or dirty state of things’ we find in Webster's New World Dictionary. (...)
Chaos. Mess should not be confused with the next phase, chaos, as this term is understood today. Arguably, chaos is the first phase of indeterminism that displays sufficient order to talk of the general problem of system development. The chaos phase is characterized by some ordering of accumulated statistical data and the emergence of the basic rules of interactions of inputs and outputs (not counting boundary conditions). Even such a seemingly limited ordering makes it possible to fix systemic regularities of the sort shown by Feigenbaum numbers and strange attractors.
(...) Different types of orderings in the chaos phase may be brought together under the notion of directing, for they point to a possible general direction of system development and even its extreme states. But even if a general path is known, enormous difficulties remain in linking algorithmically the present state with the final one and in operationalizing the algorithms. These objectives are realized in the next two large phases that I call predispositioning and programming. (...)

Programming. When linkages between states are established through reactive procedures, either by table functions or analytically, it is often assumed that each state is represented only by essentials. For instance, the production function in economics ties together inputs and outputs in physical terms. When a system is represented as an equilibrium or an optimization model, the original and conjugated parameters are stated explicitly; in economics, they are products (resources) and prices, respectively. Deterministic economic models have been extensively formalized; they assume full knowledge of inputs, outputs, and existing technologies. (...)

Predispositioning (...) exhibits less complete linkages between system's elements than programming but more complete than chaos." [1] : 19–20
Methods such as programming and randomness are well known and well developed, whereas the methodology of the intermediate stages lying between complete chaos and complete order, as well as their philosophical conceptualization, had never been discussed explicitly, and no methods for their measurement had been elaborated. According to Katsenelinboigen, the operative sub-methods for dealing with a system are programming, predispositioning, and randomness, corresponding to three stages of system development. Programming is the formation of complete and consistent linkages between all the stages of the system's development. Predispositioning is the formation of semi-efficient linkages between the stages of the system's development; in other words, it is the method responsible for the creation of a predisposition.
Randomness is the formation of inconsistent linkages between the stages of the system's development. In this context, for instance, Darwinism emphasizes the exclusive role of chance occurrences in a system's development, since it gives top priority to randomness as a method. Conversely, creationism states that the system develops in a comprehensive fashion, i.e., that programming is the only method involved in its development. As Katsenelinboigen notes, both schools neglect the fact that the process of a system's development involves a variety of methods governing different stages, depending on the system's goals and conditions.
Predispositioning as a method, and a predisposition as an intermediate stage, had not previously been discussed by scholars, though there were some interesting intuitive attempts to deal with the formation of a predisposition. The game of chess proved one of the most productive fields for the study of predispositioning as a method: its focus on positional style produced a host of innovative strategies and tactics, which Katsenelinboigen analyzed, systematized, and made the basis of his theory.
To sum up, the main focus of predispositioning theory is the intermediate stage of system development, the stage Katsenelinboigen proposed to call a "predisposition". This stage is distinguished by semi-complete and semi-consistent linkages between its elements. The most vital question in dealing with such semi-complete and semi-consistent stages is how to evaluate them. To this end, Katsenelinboigen elaborated his structure of values, using the game of chess as a model.
According to Katsenelinboigen's predispositioning theory, chess pieces are evaluated from two basic points of view: their weight in a given position on the chessboard and their weight independent of any particular position. Based on the degree of conditionality, the values range from fully unconditional through semi-unconditional and semiconditional to fully conditional.

The position-independent weights are defined by Katsenelinboigen as semi-unconditional values, formed by the sole condition of the rules of piece interaction. The semi-unconditional values of the pieces (queen 9, rook 5, bishop 3, knight 3, and pawn 1) arise from the rules of interaction of a piece with the opponent's king. All other conditions, such as the starting conditions, the final goal, and a program that links the initial condition to the final state, are not taken into account. The degree of conditionality is increased by applying preconditions, and the presence of all four preconditions fully forms conditional values.
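The semi-unconditional valuation described above can be illustrated with a short sketch. Only the piece weights (queen 9, rook 5, bishop 3, knight 3, pawn 1) come from the passage; the function and sample material are hypothetical:

```python
# Illustrative sketch: a position-independent (semi-unconditional) material
# count using the piece weights quoted in the text. Names are hypothetical,
# not part of predispositioning theory.

SEMI_UNCONDITIONAL = {"queen": 9, "rook": 5, "bishop": 3, "knight": 3, "pawn": 1}

def material_score(pieces):
    """Sum the semi-unconditional values of a side's pieces,
    ignoring where they stand on the board."""
    return sum(SEMI_UNCONDITIONAL[p] for p in pieces)

white = ["queen", "rook", "rook", "bishop", "pawn", "pawn"]
black = ["queen", "rook", "knight", "pawn", "pawn", "pawn"]

# A positive balance favours White under this position-independent valuation.
balance = material_score(white) - material_score(black)
print(balance)  # 24 - 20 = 4
```

Such a count captures only the rules-of-interaction condition; the remaining preconditions (starting conditions, final goal, linking program) are deliberately ignored, which is exactly what makes the values semi-unconditional.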
Katsenelinboigen outlines two extreme cases of the spectrum of values—fully conditional and fully unconditional—and says that, in actuality, they are ineffectual in evaluating the material and so are sometimes replaced by semiconditional or semiunconditional valuations, which are distinguished by their differing degrees of conditionality. He defines fully conditional values as those based on complete and consistent linkages among all four preconditions. [2] : 144–145
The conditional values are formed by four basic conditions: the rules of interaction between the pieces, the starting conditions, the final goal, and a program linking the initial conditions to the final state.
The degree of unconditionality is dictated by the need to evaluate things under uncertainty, when the future is unknown and the conditions cannot be specified.
Applying his concept of values to social systems, Katsenelinboigen shows how the degree of unconditionality shapes morality and law. According to him, the moral values represented in the Torah as the Ten Commandments are analogous to semi-unconditional values in chess, for they are based exclusively on the rules of interaction.
"The difference between these two approaches is clearly manifested in the various translations of the Torah. For instance, The Holy Scriptures (1955), a new translation based on the masoretic text (a vast body of the textual criticism of the Hebrew Bible), translates the commandment as ‘Thou shalt not commit murder.’ In The Holy Bible, commonly known as the authorized (King James) version (The Gideons International, 1983), this commandment is translated as ‘Thou shalt not kill.’ (...) The difference between unconditional and semi-unconditional evaluations will become more prominent if we use the same example of ‘Thou shalt not kill and ‘Thou shalt not murder’ to illustrate the conduct of man in accordance with his precepts. In an extreme case, one who follows ‘Thou shalt not kill’ will allow himself to be killed before he kills another. These views are held by one of the Hindu sects in Sri Lanka (the former Ceylon). To the best of my knowledge, the former prime minister of Ceylon, Solomon Bandaranaike (1899-1959), belonged to this sect. He did not allow himself to kill an attacker and was murdered. As he lay bleeding to death, he did crawl over to the murderer and knock the pistol from his hand before it could be used against his wife, Sirimavo Bandaranaike. She later became the prime minister of Ceylon-Sri Lanka." [1] : 135–36
But how does one ascribe weights to particular parameters, establish the degree of conditionality, and so on? How does the evaluative process proceed in indeterministic systems?
Katsenelinboigen states that the evaluative category for indeterministic systems is based on subjectivity. "This pioneering approach to the evaluative process is the subject of Katsenelinboigen’s work on indeterministic systems. The roots of one’s subjective evaluation lie in the fact that the executor cannot be separated from the evaluator, who evaluates the system in accordance with his or her own particular ability to develop it. This can be observed in chess, in which the same position is evaluated differently by different chess players, or in literature with regard to hermeneutics." [2] : 36
Katsenelinboigen explains why the subjectivity of the managerial decision is inevitable.
To sum up, subjectivity becomes an important factor in evaluating a predisposition. The roots of one's subjective evaluation lie in the fact that the executor cannot be separated from the evaluator who evaluates the system in accordance with his own particular ability to develop it.
The structure of values plays an essential part in the calculus of predispositions.
Calculus of predispositions, a basic part of predispositioning theory, belongs to the indeterministic procedures. "The key component of any indeterministic procedure is the evaluation of a position. Since it is impossible to devise a deterministic chain linking the intermediate state with the outcome of the game, the most complex component of any indeterministic method is assessing these intermediate stages. It is precisely the function of predispositions to assess the impact of an intermediate state upon the future course of development." [1] : 33

According to Katsenelinboigen, the calculus of predispositions is another method of computing probability. Both methods may lead to the same results and thus can be interchangeable. However, they cannot always be interchanged, since computation via frequencies requires the availability of statistics, the possibility of gathering the data, and knowledge of the extent to which the system's constituent elements can be interlinked. Moreover, no statistics can be obtained on unique events, and in such cases the calculus of predispositions becomes the only option.

The procedure of calculating predispositions involves two steps: dissection of the system into its constituent elements, and integration of the analyzed parts into a new whole. According to Katsenelinboigen, a system is structured by two basic types of parameters, material and positional. The material parameters constitute the skeleton of the system; the relationships between them form the positional parameters. The calculus of predispositions primarily deals with the positional parameters.
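The contrast drawn above—frequency-based computation requires statistics, while unique events leave only a predisposition-style assessment of the position—can be sketched as follows. All names, and the idea of passing the evaluator's score as a fallback, are illustrative assumptions, not part of the theory:

```python
# Hedged sketch of the two routes to a probability-like estimate described
# in the text: a frequency estimate when observed outcomes exist, and a
# (subjective) predisposition-based score when the event is unique.

def frequency_probability(outcomes, event):
    """Classical frequency estimate: the share of observed outcomes
    equal to `event`. Requires statistics to exist."""
    if not outcomes:
        raise ValueError("no statistics available")
    return outcomes.count(event) / len(outcomes)

def assess(outcomes, event, predisposition_score):
    """Use the frequency estimate when data are available; otherwise fall
    back on the evaluator's predisposition-based assessment."""
    if outcomes:
        return frequency_probability(outcomes, event)
    return predisposition_score

history = ["win", "loss", "win", "draw", "win"]
print(assess(history, "win", 0.5))  # 0.6, from the observed frequencies
print(assess([], "win", 0.7))       # 0.7, a unique event: only the
                                    # predisposition-based score remains
```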
"In order to quantify the evaluation of a position we need new techniques, which I have grouped under the heading of calculus of predispositions. This calculus is based on a weight function, which represents a variation on the well-known criterion of optimality for local extremum. This criterion incorporates material parameters and their conditional valuations. The following key elements distinguish the modified weight function from the criterion of optimality:
To conclude, there are basic differences between the frequency-based and the predisposition-based methods of computing probability.
According to Katsenelinboigen, the two methods of computing probability may complement each other if, for instance, they are applied to a multilevel system with an increasing complexity of its composition at higher levels.