Generic views

The principle of generic views in the study of cognition stipulates that an observer's interpretation of a distal phenomenon should not require that the observer occupy a special position in relation to, or relationship with, the phenomenon in question. The principle is a fairly general account of the inductive bias that allows an observer to reconstruct distal phenomena from an impoverished proximal datum. It has been advanced particularly in vision research as an account of how, for example, three-dimensional structure is extracted from an inadequate two-dimensional projection.

The principle of generic views has been discussed by Richards[1] and Hoffman,[2][n 1] and has been given a sophisticated Bayesian formalization by Freeman.[citation needed]

Relation to Bayesian inference

Another expression of the generic views principle is that the inference of distal structure should remain substantially the same if the "position" of the observer were moderately altered (perturbed). If the inference would have been qualitatively or categorically different under a perturbation of the observer, then it does not satisfy the generic views assumption and should be rejected. (What constitutes a qualitative or categorical difference is an interesting point of detail.) On this view, it can be argued that the principle of generic views is nothing more than maximum a posteriori (MAP) inference that takes the observation process itself into account. That is, we infer the distal phenomenon that possesses the highest probability of having generated the observations in question, where this probability incorporates (in addition to relevant priors) both the likelihood of the distal phenomenon generating certain observable signals and the likelihood of the observer transducing those signals in a manner consistent with the observations. On such an analysis (and with various assumptions invoked), one obtains behavior approximating the generic views principle.
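
The analysis above can be made concrete with a small numerical sketch. The following Python fragment (the scene hypotheses, tolerance, and priors are illustrative assumptions, not drawn from the literature) approximates the viewpoint-integrated likelihood of two competing interpretations of a straight-line image and selects the MAP interpretation; the "accidental" interpretation loses because it requires a knife-edge viewpoint:

```python
import numpy as np

# Two hypothetical distal hypotheses for an observed straight-line image.
# H_generic: the scene really contains a straight edge; it projects to a line
#            from essentially any viewpoint.
# H_accidental: the scene contains a bent wire that projects to a line only
#               from one knife-edge viewpoint.
# We sample viewpoints and ask how often each hypothesis still predicts the
# observed image, approximating P(image | H) = ∫ P(image | H, view) P(view) dview.

rng = np.random.default_rng(0)
n_views = 100_000
views = rng.uniform(-np.pi / 2, np.pi / 2, n_views)  # viewing-angle prior

tolerance = 0.01  # radians within which the bent wire still projects to a line
lik_generic = np.mean(np.abs(views) < np.pi / 2)     # ≈ 1.0
lik_accidental = np.mean(np.abs(views) < tolerance)  # ≈ tolerance / (π/2)

prior_generic, prior_accidental = 0.5, 0.5           # flat prior over scenes
post = np.array([prior_generic * lik_generic,
                 prior_accidental * lik_accidental])
post /= post.sum()

print(f"P(straight edge | image)   ≈ {post[0]:.4f}")
print(f"P(accidental view | image) ≈ {post[1]:.4f}")
# MAP inference selects the straight edge: the interpretation that survives
# perturbation of the observer, as the generic views principle demands.
```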

Notes

  1. Suppose in reality there’s a resource, like water, and you can quantify how much of it there is in an objective order—very little water, medium amount of water, a lot of water. Now suppose your fitness function is linear, so a little water gives you a little fitness, medium water gives you medium fitness, and lots of water gives you lots of fitness—in that case, the organism that sees the truth about the water in the world can win, but only because the fitness function happens to align with the true structure in reality. Generically, in the real world, that will never be the case. Something much more natural is a bell curve—say, too little water you die of thirst, but too much water you drown, and only somewhere in between is good for survival. Now the fitness function doesn’t match the structure in the real world. And that’s enough to send truth to extinction. For example, an organism tuned to fitness might see small and large quantities of some resource as, say, red, to indicate low fitness, whereas they might see intermediate quantities as green, to indicate high fitness. Its perceptions will be tuned to fitness, but not to truth. It won’t see any distinction between small and large—it only sees red—even though such a distinction exists in reality.—Donald D. Hoffman to Amanda Gefter
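
A minimal numerical sketch of the scenario Hoffman describes follows; the bell-curve parameters, the 0.5 threshold, and the water quantities are illustrative assumptions:

```python
import numpy as np

# Water quantity has an objective order, but fitness follows a bell curve, so
# a fitness-tuned perceptual mapping (red/green) collapses the distinction
# between too little and too much.

def fitness(w, mu=50.0, sigma=15.0):
    """Bell-curve fitness: thirst at low w, drowning at high w."""
    return np.exp(-((w - mu) ** 2) / (2 * sigma ** 2))

quantities = [5.0, 50.0, 95.0]            # little, medium, lots of water

truth_percept = ["low", "medium", "high"]  # tuned to objective quantity
fitness_percept = ["green" if fitness(w) > 0.5 else "red" for w in quantities]

for w, t, f in zip(quantities, truth_percept, fitness_percept):
    print(f"water={w:5.1f}  truth-tuned sees: {t:6s}  fitness-tuned sees: {f}")
# 5.0 and 95.0 both look 'red' to the fitness-tuned observer, even though
# they differ maximally in objective quantity.
```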

Related Research Articles

Bayesian probability is an interpretation of the concept of probability, in which, instead of frequency or propensity of some phenomenon, probability is interpreted as reasonable expectation representing a state of knowledge or as quantification of a personal belief.

Frequentist probability

Frequentist probability or frequentism is an interpretation of probability; it defines an event's probability as the limit of its relative frequency in many trials. Probabilities can be found by a repeatable objective process. This interpretation supports the statistical needs of many experimental scientists and pollsters. It does not support all needs, however; gamblers typically require estimates of the odds without experiments.
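
A minimal simulation of this definition (the event and its true rate are illustrative assumptions) shows the relative frequency settling toward the underlying probability as trials accumulate:

```python
import numpy as np

# Frequentist definition: an event's probability is the limit of its relative
# frequency over many repeated trials of one repeatable objective process.

rng = np.random.default_rng(1)
p_true = 0.3                              # assumed true chance per trial
trials = rng.random(1_000_000) < p_true   # simulated repeatable process

for n in (10, 1_000, 100_000, 1_000_000):
    print(f"after {n:>9,} trials: relative frequency = {trials[:n].mean():.4f}")
# The relative frequency approaches p_true = 0.3 as the number of trials grows.
```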

The word probability has been used in a variety of ways since it was first applied to the mathematical study of games of chance. Does probability measure the real, physical tendency of something to occur or is it a measure of how strongly one believes it will occur, or does it draw on both these elements? In answering such questions, mathematicians interpret the probability values of probability theory.

Statistical inference

Statistical inference is the process of using data analysis to deduce properties of an underlying distribution of probability. Inferential statistical analysis infers properties of a population, for example by testing hypotheses and deriving estimates. It is assumed that the observed data set is sampled from a larger population.

Abductive reasoning

Abductive reasoning is a form of logical inference formulated and advanced by American philosopher Charles Sanders Peirce beginning in the last third of the 19th century. It starts with an observation or set of observations and then seeks to find the simplest and most likely conclusion from the observations. This process, unlike deductive reasoning, yields a plausible conclusion but does not positively verify it. Abductive conclusions are thus qualified as having a remnant of uncertainty or doubt, which is expressed in retreat terms such as "best available" or "most likely". One can understand abductive reasoning as inference to the best explanation, although not all usages of the terms abduction and inference to the best explanation are exactly equivalent.

Inferences are steps in reasoning, moving from premises to logical consequences; etymologically, the word infer means to "carry forward". Inference is traditionally divided into deduction and induction, a distinction that in Europe dates at least to Aristotle. Deduction is inference deriving logical conclusions from premises known or assumed to be true, with the laws of valid inference being studied in logic. Induction is inference from particular premises to a universal conclusion. A third type of inference is sometimes distinguished, notably by Charles Sanders Peirce, contradistinguishing abduction from induction.

Inductive reasoning is a method of reasoning in which the premises are viewed as supplying some evidence, but not full assurance, of the truth of the conclusion. It is also described as a method where one's experiences and observations, including what are learned from others, are synthesized to come up with a general truth. Many dictionaries define inductive reasoning as the derivation of general principles from specific observations, although there are many inductive arguments that do not have that form.

In Bayesian statistical inference, a prior probability distribution, often simply called the prior, of an uncertain quantity is the probability distribution that would express one's beliefs about this quantity before some evidence is taken into account. For example, the prior could be the probability distribution representing the relative proportions of voters who will vote for a particular politician in a future election. The unknown quantity may be a parameter of the model or a latent variable rather than an observable variable.
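
The voting example lends itself to a short sketch. Assuming (hypothetically) a Beta prior over the unknown vote share and a binomial poll, the conjugate update is closed-form:

```python
from scipy import stats

# Beta prior over the proportion of voters supporting the candidate; the
# prior strength and poll numbers below are illustrative assumptions.
prior = stats.beta(a=2, b=2)              # weak prior belief centred on 0.5

yes, no = 56, 44                          # hypothetical poll: 56 of 100 in favour
posterior = stats.beta(a=2 + yes, b=2 + no)  # conjugate Beta-binomial update

print(f"prior mean:     {prior.mean():.3f}")
print(f"posterior mean: {posterior.mean():.3f}")
lo, hi = posterior.interval(0.95)
print(f"95% credible interval: ({lo:.3f}, {hi:.3f})")
```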

Subjectivism is the doctrine that "our own mental activity is the only unquestionable fact of our experience", rather than anything shared or communal, and that there is no external or objective truth.

Gerd Gigerenzer

Gerd Gigerenzer is a German psychologist who has studied the use of bounded rationality and heuristics in decision making. Gigerenzer is director emeritus of the Center for Adaptive Behavior and Cognition (ABC) at the Max Planck Institute for Human Development and director of the Harding Center for Risk Literacy, both in Berlin, Germany.

Mental model

A mental model is an explanation of someone's thought process about how something works in the real world. It is a representation of the surrounding world, the relationships between its various parts and a person's intuitive perception about his or her own acts and their consequences. Mental models can help shape behaviour and set an approach to solving problems and doing tasks.

The aim of a probabilistic logic is to combine the capacity of probability theory to handle uncertainty with the capacity of deductive logic to exploit structure of formal argument. The result is a richer and more expressive formalism with a broad range of possible application areas. Probabilistic logics attempt to find a natural extension of traditional logic truth tables: the results they define are derived through probabilistic expressions instead. A difficulty with probabilistic logics is that they tend to multiply the computational complexities of their probabilistic and logical components. Other difficulties include the possibility of counter-intuitive results, such as those of Dempster-Shafer theory in evidence-based subjective logic. The need to deal with a broad variety of contexts and issues has led to many different proposals.

The foundations of statistics concern the epistemological debate in statistics over how one should conduct inductive inference from data. Among the issues considered in statistical inference are the question of Bayesian inference versus frequentist inference, the distinction between Fisher's "significance testing" and Neyman–Pearson "hypothesis testing", and whether the likelihood principle should be followed. Some of these issues have been debated for up to 200 years without resolution.

In philosophy, objectivity is the concept of truth independent from individual subjectivity. A proposition is considered to have objective truth when its truth conditions are met without bias caused by a sentient subject. Scientific objectivity refers to the ability to judge without partiality or external influence. Objectivity in the moral framework calls for moral codes to be assessed based on the well-being of the people in the society that follow it. Moral objectivity also calls for moral codes to be compared to one another through a set of universal facts and not through subjectivity.

Bayesian approaches to brain function investigate the capacity of the nervous system to operate in situations of uncertainty in a fashion that is close to the optimal prescribed by Bayesian statistics. This term is used in behavioural sciences and neuroscience and studies associated with this term often strive to explain the brain's cognitive abilities based on statistical principles. It is frequently assumed that the nervous system maintains internal probabilistic models that are updated by neural processing of sensory information using methods approximating those of Bayesian probability.
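
A standard illustration of this idea is optimal cue combination, sketched below; the cue means and noise levels are illustrative assumptions. With Gaussian noise, the Bayesian estimate is a reliability-weighted average of the cues:

```python
# Optimal (Bayesian) combination of two noisy sensory cues to the same
# quantity: weights are proportional to each cue's inverse variance.

mu_vision, sigma_vision = 10.0, 2.0   # visual estimate of an object's size
mu_touch, sigma_touch = 12.0, 1.0     # haptic estimate, more reliable here

w_vision = sigma_vision**-2 / (sigma_vision**-2 + sigma_touch**-2)
w_touch = 1.0 - w_vision

mu_combined = w_vision * mu_vision + w_touch * mu_touch
sigma_combined = (sigma_vision**-2 + sigma_touch**-2) ** -0.5

print(f"combined estimate: {mu_combined:.2f} ± {sigma_combined:.2f}")
# 11.60 ± 0.89: biased toward the more reliable cue and less uncertain than
# either cue alone, matching behavioural findings in cue-combination studies.
```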

Ideal observer analysis is a method for investigating how information is processed in a perceptual system. It is also a basic principle that guides modern research in perception.
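
As a minimal sketch, an ideal observer for a two-alternative discrimination between known, equal-variance Gaussian distributions responds by the likelihood-ratio rule, which here reduces to a midpoint criterion (all parameters are illustrative assumptions):

```python
import numpy as np

# On each trial a sample comes from one of two known Gaussian distributions;
# the ideal observer responds with the alternative under which the sample is
# more likely. Its accuracy is the ceiling against which real observers are
# compared.

rng = np.random.default_rng(2)
mu_a, mu_b, sigma = 0.0, 1.0, 1.0
n = 100_000

truth = rng.integers(0, 2, n)                       # 0 = A, 1 = B
x = rng.normal(np.where(truth == 0, mu_a, mu_b), sigma)

# Equal variances: the likelihood-ratio rule is a criterion midway between means.
response = (x > (mu_a + mu_b) / 2).astype(int)

print(f"ideal observer accuracy: {np.mean(response == truth):.3f}")  # ≈ 0.69
```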

The mind projection fallacy is an informal fallacy first described by physicist and Bayesian philosopher E. T. Jaynes. It occurs when someone thinks that the way they see the world reflects the way the world really is, going as far as assuming the real existence of imagined objects. That is, someone's subjective judgments are "projected" to be inherent properties of an object, rather than being related to personal perception. One consequence is that others may be assumed to share the same perception, or that they are irrational or misinformed if they do not.

Quantum Bayesianism

In physics and the philosophy of physics, quantum Bayesianism is an interpretation of quantum mechanics that takes an agent's actions and experiences as the central concerns of the theory. QBism deals with common questions in the interpretation of quantum theory about the nature of wavefunction superposition, quantum measurement, and entanglement. According to QBism, many, but not all, aspects of the quantum formalism are subjective in nature. For example, in this interpretation, a quantum state is not an element of reality—instead it represents the degrees of belief an agent has about the possible outcomes of measurements. For this reason, some philosophers of science have deemed QBism a form of anti-realism. The originators of the interpretation disagree with this characterization, proposing instead that the theory more properly aligns with a kind of realism they call "participatory realism", wherein reality consists of more than can be captured by any putative third-person account of it.

Donald D. Hoffman

Donald David Hoffman is an American cognitive psychologist and popular science author. He is a professor in the Department of Cognitive Sciences at the University of California, Irvine, with joint appointments in the Department of Philosophy, the Department of Logic and Philosophy of Science, and the School of Computer Science.

Intuitive statistics, or folk statistics, refers to the cognitive phenomenon where organisms use data to make generalizations and predictions about the world. This can be a small amount of sample data or training instances, which in turn contribute to inductive inferences about either population-level properties, future data, or both. Inferences can involve revising hypotheses, or beliefs, in light of probabilistic data that inform and motivate future predictions. The informal tendency for cognitive animals to intuitively generate statistical inferences, when formalized with certain axioms of probability theory, constitutes statistics as an academic discipline.

References

  1. Knill, D. C., & Richards, W., eds., Perception as Bayesian Inference (Cambridge: Cambridge University Press, 1996), p. 478.
  2. Gefter, A., "The Case Against Reality", The Atlantic, Apr. 25, 2016.
