Computational epistemology

Computational epistemology is a subdiscipline of formal epistemology that studies the intrinsic complexity of inductive problems for ideal and computationally bounded agents. In short, computational epistemology is to induction what recursion theory is to deduction.

Themes

A central theme of computational epistemology is the analysis of inductive problems, each of which is specified by four components:

  1. a set of relevant possibilities (possible worlds), each of which specifies a potentially infinite sequence of inputs to the scientist's method;
  2. a question whose potential answers partition the relevant possibilities (in the set-theoretic sense);
  3. a convergent success criterion; and
  4. a set of admissible methods.
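The four components above can be sketched as a data structure. This is a minimal illustration, not any standard library: all names are hypothetical, and worlds are represented by finite prefixes of their input streams for simplicity.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple

Prefix = Tuple[int, ...]  # a finite prefix of a world's input stream


@dataclass
class InductiveProblem:
    worlds: List[Prefix]                   # the relevant possibilities
    question: Callable[[Prefix], str]      # maps each world to its answer cell
    success: str                           # convergent success criterion
    methods: List[Callable[[Prefix], str]]  # the admissible methods


def partition(problem: InductiveProblem) -> Dict[str, List[Prefix]]:
    """The question partitions the relevant possibilities into answer cells."""
    cells: Dict[str, List[Prefix]] = {}
    for w in problem.worlds:
        cells.setdefault(problem.question(w), []).append(w)
    return cells
```

Grouping worlds by their correct answer in this way makes the "partition" in component 2 concrete: every world falls into exactly one answer cell.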

Quotations

On the definition of computational epistemology:

"Computational epistemology is an interdisciplinary field that concerns itself with the relationships and constraints between reality, measure, data, information, knowledge, and wisdom" (Rugai, 2013)

On making inductive problems easier to solve:

"Eliminating relevant possibilities, weakening the convergence criterion, coarsening the question, or augmenting the collection of potential strategies all tend to make a problem easier to solve" (Kelly, 2000a)

On the divergence of computational epistemology from Bayesian confirmation theory and the like:

"Whenever you are inclined to explain a feature of science in terms of probability and confirmation, take a moment to see how the issue would look in terms of complexity and success" (Kelly, 2000a)

Computational epistemology in a nutshell:

"Formal learning theory is very simple in outline. An inductive problem specifies a range of epistemically possible worlds over which to succeed and determines what sort of output would be correct, where correctness may embody both content and truth (or some analogous virtue like empirical adequacy). Each possible world produces an input stream which the inductive method processes sequentially, generating its own output stream, which may terminate (ending with a mark indicating this fact) or go on forever. A notion of success specifies how the method should converge to a correct output in each possible world. A method solves the problem (in a given sense) just in case the method succeeds (in the appropriate sense) in each of the possible worlds specified by the problem. We say that such a method is reliable since it succeeds over all the epistemically possible worlds. Of two non-solutions, one is as reliable as the other just in case it succeeds in all the worlds the other one succeeds in. That's all there is to it!" (Kelly et al., 1997)
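As a toy instance of this outline, consider worlds indexed by a position k: world k emits a 1 at position k and 0 everywhere else, while one further world emits 0 forever. A minimal sketch (hypothetical names, standard library only) of a method that succeeds in the limit in every such world:

```python
from itertools import islice
from typing import Iterator, List, Optional


def world(k: Optional[int]) -> Iterator[int]:
    """World k emits a 1 at position k, zeros elsewhere; k=None emits only zeros."""
    i = 0
    while True:
        yield 1 if i == k else 0
        i += 1


def method(data: List[int]) -> str:
    """Conjecture an answer from the finite data seen so far."""
    return f"one at {data.index(1)}" if 1 in data else "no one"


def run(stream: Iterator[int], steps: int) -> List[str]:
    """Feed the method an input stream; collect its output stream."""
    data: List[int] = []
    outputs: List[str] = []
    for datum in islice(stream, steps):
        data.append(datum)
        outputs.append(method(data))
    return outputs
```

In world 3 the method's conjectures stabilize to the correct answer "one at 3" once the 1 appears, and in the all-zeros world its standing conjecture "no one" is already correct, so the method converges to a correct output in every possible world: it is reliable in the sense just defined.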

On the proper role of methodology:

"It is for empirical science to investigate the details of the mechanisms whereby we track, and for methodologists to devise and refine even better (inferential) mechanisms and methods" (Nozick, 1981)

References