Formal epistemology

Formal epistemology uses formal methods from decision theory, logic, probability theory, and computability theory to model and reason about issues of epistemological interest. Work in this area spans several academic fields, including philosophy, computer science, economics, and statistics. The focus of formal epistemology has tended to differ somewhat from that of traditional epistemology, with topics like uncertainty, induction, and belief revision garnering more attention than the analysis of knowledge, skepticism, and issues of justification.

Decision theory is the study of the reasoning underlying an agent's choices. It can be broken into two branches: normative decision theory, which gives advice on how to make the best decisions given a set of uncertain beliefs and a set of values, and descriptive decision theory, which analyzes how existing, possibly irrational agents actually make decisions.
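
As a minimal illustration of the normative branch, the following Python sketch selects the act with the highest expected utility; the acts, states, probabilities, and utilities are assumptions invented for the example.

```python
# Normative decision theory in miniature: choose the act that maximizes
# expected utility. All acts, states, beliefs, and utilities below are
# illustrative assumptions.

beliefs = {"rain": 0.3, "sun": 0.7}  # uncertain beliefs: P(state)

utilities = {  # values: utility of each (act, state) outcome
    ("take umbrella", "rain"): 5, ("take umbrella", "sun"): 3,
    ("leave umbrella", "rain"): -4, ("leave umbrella", "sun"): 6,
}

def expected_utility(act):
    """Probability-weighted average utility of performing an act."""
    return sum(p * utilities[(act, state)] for state, p in beliefs.items())

acts = {act for act, _ in utilities}
best = max(acts, key=expected_utility)
print(best, round(expected_utility(best), 2))  # take umbrella 3.6
```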

Logic is the systematic study of the form of valid inference, and the most general laws of truth. A valid inference is one where there is a specific relation of logical support between the assumptions of the inference and its conclusion. In ordinary discourse, inferences may be signified by words such as therefore, hence, ergo, and so on.
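
For propositional logic, the relation of logical support can be checked mechanically: an inference is valid exactly when no truth assignment makes every premise true and the conclusion false. The sketch below does this by brute force over truth tables; encoding formulas as Python functions is an illustrative choice made for the example.

```python
from itertools import product

def is_valid(premises, conclusion, variables):
    """Valid iff no assignment makes all premises true and conclusion false."""
    for values in product([True, False], repeat=len(variables)):
        v = dict(zip(variables, values))
        if all(p(v) for p in premises) and not conclusion(v):
            return False  # found a counterexample
    return True

# Modus ponens: from P and (P -> Q), infer Q. Valid.
mp = [lambda v: v["P"], lambda v: (not v["P"]) or v["Q"]]
print(is_valid(mp, lambda v: v["Q"], ["P", "Q"]))  # True

# Affirming the consequent: from Q and (P -> Q), infer P. Invalid.
ac = [lambda v: v["Q"], lambda v: (not v["P"]) or v["Q"]]
print(is_valid(ac, lambda v: v["P"], ["P", "Q"]))  # False
```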

Probability theory is the branch of mathematics concerned with probability. Although there are several different probability interpretations, probability theory treats the concept in a rigorous mathematical manner by expressing it through a set of axioms. Typically these axioms formalise probability in terms of a probability space, which assigns a measure taking values between 0 and 1, termed the probability measure, to a set of outcomes called the sample space. Any specified subset of these outcomes is called an event.
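
A finite case makes the axioms concrete. The sketch below builds a probability space for a fair die (an assumed example) and checks non-negativity, normalization, and finite additivity for disjoint events.

```python
# A finite probability space: outcomes, events as subsets, and a
# measure P built from per-outcome weights. The fair die is assumed.

sample_space = {1, 2, 3, 4, 5, 6}
weights = {outcome: 1 / 6 for outcome in sample_space}

def P(event):
    """Probability measure: total weight of the outcomes in the event."""
    return sum(weights[outcome] for outcome in event)

assert all(P({w}) >= 0 for w in sample_space)  # non-negativity
assert abs(P(sample_space) - 1) < 1e-9         # normalization
a, b = {4, 6}, {1, 2}                          # disjoint events
assert abs(P(a | b) - (P(a) + P(b))) < 1e-9    # additivity

print(round(P({2, 4, 6}), 3))  # P(even) = 0.5
```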

History

Though formally oriented epistemologists have been laboring since the emergence of formal logic and probability theory (if not earlier), only recently have they been organized under a common disciplinary title. This gain in popularity may be attributed to the organization of yearly Formal Epistemology Workshops by Branden Fitelson and Sahotra Sarkar, starting in 2004, and the PHILOG-conferences starting in 2002 (The Network for Philosophical Logic and Its Applications) organized by Vincent F. Hendricks. Carnegie Mellon University's Philosophy Department hosts an annual summer school in logic and formal epistemology. In 2010, the department founded the Center for Formal Epistemology.

Branden Fitelson is an American philosopher and Distinguished Professor of Philosophy at Northeastern University. He is known for his expertise on formal epistemology and philosophy of science.

Sahotra Sarkar is a philosopher of science, at the University of Texas at Austin.

Vincent Fella Rune Møller Hendricks is a Danish philosopher and logician. He holds two doctoral degrees in philosophy and is Professor of Formal Philosophy and Director of the Center for Information and Bubble Studies (CIBS) at the University of Copenhagen, Denmark. He was previously Professor of Formal Philosophy at Roskilde University, Denmark. He is a member of the Institut International de Philosophie (IIP).

Topics

Some of the topics that come under the heading of formal epistemology include:

Belief revision is the process of changing beliefs to take into account a new piece of information. The logical formalization of belief revision is researched in philosophy, in databases, and in artificial intelligence for the design of rational agents.
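
A toy sketch conveys the core move, in the spirit of the Levi identity from the AGM literature: to revise by new information, first retract whatever conflicts with it, then add it. The code below handles only atomic beliefs and their negations, a deliberate simplification of full revision over logically closed theories.

```python
# Belief revision over literals, sketched via the Levi identity:
# revise(B, p) = expand(contract(B, not p), p). Only atoms and their
# negations are handled; real AGM revision works on closed theories.

def negate(literal):
    return literal[1:] if literal.startswith("~") else "~" + literal

def revise(beliefs, new_info):
    contracted = beliefs - {negate(new_info)}  # give up the conflict
    return contracted | {new_info}             # then accept the news

beliefs = {"bird(tweety)", "flies(tweety)"}
beliefs = revise(beliefs, "~flies(tweety)")
print(sorted(beliefs))  # ['bird(tweety)', '~flies(tweety)']
```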

Game theory is the study of mathematical models of strategic interaction between rational decision-makers. It has applications in all fields of social science, as well as in logic and computer science. Originally, it addressed zero-sum games, in which one person's gains result in losses for the other participants. Today, game theory applies to a wide range of behavioral relations, and is now an umbrella term for the science of logical decision making in humans, animals, and computers.
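
As a small worked example, the sketch below finds the pure-strategy Nash equilibria of a two-player game by checking best responses. The Prisoner's Dilemma payoffs are the standard textbook values, used here purely as an illustration.

```python
from itertools import product

payoffs = {  # (row move, column move) -> (row payoff, column payoff)
    ("cooperate", "cooperate"): (-1, -1),
    ("cooperate", "defect"):    (-3,  0),
    ("defect",    "cooperate"): ( 0, -3),
    ("defect",    "defect"):    (-2, -2),
}
moves = ["cooperate", "defect"]

def is_equilibrium(r, c):
    """Neither player can gain by unilaterally changing their move."""
    row_ok = all(payoffs[(r, c)][0] >= payoffs[(alt, c)][0] for alt in moves)
    col_ok = all(payoffs[(r, c)][1] >= payoffs[(r, alt)][1] for alt in moves)
    return row_ok and col_ok

print([rc for rc in product(moves, moves) if is_equilibrium(*rc)])
# [('defect', 'defect')] -- mutual defection, even though mutual
# cooperation is better for both, which is the dilemma.
```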

Algorithmic learning theory is a mathematical framework for analyzing machine learning problems and algorithms. Synonyms include formal learning theory and algorithmic inductive inference. Algorithmic learning theory is different from statistical learning theory in that it does not make use of statistical assumptions and analysis. Both algorithmic and statistical learning theory are concerned with machine learning and can thus be viewed as branches of computational learning theory.
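
Its central success criterion, identification in the limit, can be shown with a toy learner: after finitely many examples the learner must settle on a correct hypothesis and never change it again. The target class below (sets of multiples of some k) and the gcd-based conjecture rule are assumptions chosen for simplicity.

```python
from functools import reduce
from math import gcd

# Toy identification in the limit. Target class (assumed): languages
# {k, 2k, 3k, ...}. Conjecturing the gcd of all data seen so far
# converges to the true k on any complete presentation and then never
# changes -- the defining success criterion of the framework.

def learner(stream):
    data = []
    for example in stream:
        data.append(example)
        yield reduce(gcd, data)  # current conjecture for k

presentation = [12, 18, 30, 6, 24]   # examples from multiples of 6
print(list(learner(presentation)))   # [12, 6, 6, 6, 6]
```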

List of contemporary formal epistemologists

Luc Bovens is a Belgian professor of philosophy at the University of North Carolina at Chapel Hill. Bovens is a former editor of Economics and Philosophy. His main areas of research are moral and political philosophy, philosophy of economics, philosophy of public policy, Bayesian epistemology, rational choice theory, and voting theory. He has also published work on abortion and natural family planning methods of contraception that has drawn controversy from the anti-abortion movement.

Joseph Yehuda Halpern is a professor of computer science at Cornell University. Most of his research is on reasoning about knowledge and uncertainty.

Sven Ove Hansson is a professor of philosophy and chair of the Department of Philosophy and History of Technology at the Royal Institute of Technology (KTH) in Stockholm, Sweden. He is an author and scientific skeptic, with a special interest in environmental risk assessment, as well as in decision theory and belief revision.

See also

Computability theory, also known as recursion theory, is a branch of mathematical logic, of computer science, and of the theory of computation that originated in the 1930s with the study of computable functions and Turing degrees. The field has since expanded to include the study of generalized computability and definability. In these areas, recursion theory overlaps with proof theory and effective descriptive set theory.

In computer science, computational learning theory is a subfield of artificial intelligence devoted to studying the design and analysis of machine learning algorithms.

Related Research Articles

Bayesian probability is an interpretation of the concept of probability, in which, instead of frequency or propensity of some phenomenon, probability is interpreted as reasonable expectation representing a state of knowledge or as quantification of a personal belief.

Bayesian inference is a method of statistical inference in which Bayes' theorem is used to update the probability for a hypothesis as more evidence or information becomes available. Bayesian inference is an important technique in statistics, and especially in mathematical statistics. Bayesian updating is particularly important in the dynamic analysis of a sequence of data. Bayesian inference has found application in a wide range of activities, including science, engineering, philosophy, medicine, sport, and law. In the philosophy of decision theory, Bayesian inference is closely related to subjective probability, often called "Bayesian probability".
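
A single update step applies Bayes' theorem, P(H|E) = P(E|H)P(H)/P(E). The numbers in the sketch below describe an assumed diagnostic test, chosen to show how a strong test can still yield a modest posterior when the prior is low.

```python
# One Bayesian update via Bayes' theorem. All numbers are assumptions
# for a hypothetical diagnostic test.

prior = 0.01            # P(H): base rate of the condition
sensitivity = 0.95      # P(E | H): test detects a true case
false_positive = 0.05   # P(E | not H): test errs on a healthy case

# Total probability of the evidence: P(E) = P(E|H)P(H) + P(E|~H)P(~H).
evidence = sensitivity * prior + false_positive * (1 - prior)
posterior = sensitivity * prior / evidence  # P(H | E)
print(round(posterior, 3))  # 0.161 -- far below the 0.95 sensitivity
```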

Inferences are steps in reasoning, moving from premises to logical consequences; etymologically, the word infer means to "carry forward". Inference is traditionally divided into deduction and induction, a distinction that in Europe dates at least to Aristotle. Deduction is inference deriving logical conclusions from premises known or assumed to be true, with the laws of valid inference being studied in logic. Induction is inference from particular premises to a universal conclusion. A third type of inference is sometimes distinguished, notably by Charles Sanders Peirce, who contrasted abduction with induction; abduction is inference to the best explanation.

Inductive reasoning is a method of reasoning in which the premises are viewed as supplying some evidence for the truth of the conclusion; this contrasts with deductive reasoning. While the conclusion of a deductive argument is certain given its premises, the conclusion of an inductive argument is at best probable, based upon the evidence given.

Ray Solomonoff's theory of universal inductive inference is a theory of prediction based on logical observations, such as predicting the next symbol based upon a given series of symbols. The only assumption that the theory makes is that the environment follows some unknown but computable probability distribution. It is a mathematical formalization of Occam's razor and the Principle of Multiple Explanations.
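
A toy rendering of the prior weights each hypothesis by 2 raised to the negative of its description length, so simpler explanations dominate without any being ruled out. Real Solomonoff induction sums over all programs of a universal Turing machine; the hypotheses and bit lengths below are invented for illustration.

```python
# Occam's razor as a prior: weight 2**(-description length). The
# hypotheses and their lengths in bits are illustrative assumptions.

hypotheses = {
    "constant zeros":      3,   # bits to describe the rule (assumed)
    "alternating 0 and 1": 5,
    "binary digits of pi": 12,
}

weights = {h: 2.0 ** -bits for h, bits in hypotheses.items()}
total = sum(weights.values())
for h, w in sorted(weights.items(), key=lambda kv: -kv[1]):
    print(f"{h}: {w / total:.3f}")  # normalized prior, simplest first
```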

Computational epistemology is a subdiscipline of formal epistemology that studies the intrinsic complexity of inductive problems for ideal and computationally bounded agents. In short, computational epistemology is to induction what recursion theory is to deduction.

In logic, defeasible reasoning is a kind of reasoning that is rationally compelling, though not deductively valid.

The aim of a probabilistic logic is to combine the capacity of probability theory to handle uncertainty with the capacity of deductive logic to exploit the structure of formal argument. The result is a richer and more expressive formalism with a broad range of possible application areas. Probabilistic logics attempt to find a natural extension of traditional logic truth tables: the results they define are derived through probabilistic expressions instead. A difficulty with probabilistic logics is that they tend to multiply the computational complexities of their probabilistic and logical components. Other difficulties include the possibility of counter-intuitive results, such as those of Dempster-Shafer theory in evidence-based subjective logic. The need to deal with a broad variety of contexts and issues has led to many different proposals.
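
One concrete way to see the difficulty: probabilities do not compose truth-functionally. Knowing P(A) and P(B) fixes P(A and B) only within an interval, the Fréchet bounds, as the sketch below computes.

```python
# Unlike a truth table, P(A) and P(B) alone do not determine
# P(A and B); they only bound it (the Frechet bounds). Extra
# structure, such as an independence assumption, is needed to
# pin down a single value.

def conjunction_bounds(p_a, p_b):
    lower = max(0.0, p_a + p_b - 1.0)  # least possible overlap
    upper = min(p_a, p_b)              # greatest possible overlap
    return lower, upper

print(conjunction_bounds(0.7, 0.8))   # (0.5, 0.7)
print(round(0.7 * 0.8, 2))  # 0.56: the value if A, B are independent
```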

Logic is the formal science of using reason and is considered a branch of both philosophy and mathematics. Logic investigates and classifies the structure of statements and arguments, both through the study of formal systems of inference and the study of arguments in natural language. The scope of logic can therefore be very large, ranging from core topics such as the study of fallacies and paradoxes to specialized analyses of reasoning involving probability and causality. One of the aims of logic is to distinguish correct from incorrect inferences. Logicians study the criteria for the evaluation of arguments.

The outline of epistemology provides an overview of and topical guide to epistemology.

Buddhist logico-epistemology is a term used in Western scholarship for pramāṇa-vāda and Hetu-vidya. Pramāṇa-vāda is an epistemological study of the nature of knowledge; Hetu-vidya is a system of logic. These models developed in India during the 5th through 7th centuries.

Epistemology or theory of knowledge is the branch of philosophy concerned with the nature and scope (limitations) of knowledge. It addresses the questions "What is knowledge?", "How is knowledge acquired?", "What do people know?", "How do we know what we know?", and "Why do we know what we know?". Much of the debate in this field has focused on analyzing the nature of knowledge and how it relates to similar notions such as truth, belief, and justification. It also deals with the means of production of knowledge, as well as skepticism about different knowledge claims.

Gregory Wheeler is an American logician, philosopher, and computer scientist, who specializes in formal epistemology. Much of his work has focused on imprecise probability. He is currently Professor of Philosophy and Computer Science at the Frankfurt School of Finance and Management, and has held positions at LMU Munich, Carnegie Mellon University, the Max Planck Institute for Human Development in Berlin, and the New University of Lisbon. He is a member of the PROGIC steering committee, the editorial boards of Synthese, and Minds and Machines, and was the editor-in-chief of Minds and Machines from 2011 to 2016. He obtained a Ph.D. in philosophy and computer science from the University of Rochester under Henry Kyburg.

The psychology of reasoning is the study of how people reason, often broadly defined as the process of drawing conclusions to inform how people solve problems and make decisions. It overlaps with psychology, philosophy, linguistics, cognitive science, artificial intelligence, logic, and probability theory.

John L. Pollock (1940–2009) was an American philosopher known for influential work in epistemology, philosophical logic, cognitive science, and artificial intelligence.

Clark N. Glymour is the Alumni University Professor in the Department of Philosophy at Carnegie Mellon University. He is also a senior research scientist at the Florida Institute for Human and Machine Cognition.

Wolfgang Konrad Spohn is a German philosopher. He is professor of philosophy and philosophy of science at the University of Konstanz.

Timothy Joel McGrew is Professor of Philosophy, and Chair of the Department of Philosophy at Western Michigan University. His research interests include Epistemology, the History and Philosophy of Science, and Philosophy of Religion. He is a specialist in the philosophical applications of probability theory.
