In philosophy, unknowability is the possibility of inherently inaccessible knowledge. It addresses the epistemology of that which cannot be known. Some related concepts include the halting problem, the limits of knowledge, the unknown unknowns, and chaos theory.
Nicholas Rescher provides the most recent focused scholarship on this area in Unknowability: An Inquiry into the Limits of Knowledge, [1] where he offers three high-level categories: logical unknowability, conceptual unknowability, and in-principle unknowability.
Speculation about what is knowable and unknowable has been part of the philosophical tradition since the inception of philosophy. In particular, Baruch Spinoza's Theory of Attributes [2] argues that a human's finite mind cannot understand infinite substance; accordingly, infinite substance, as it is in itself, is in-principle unknowable to the finite mind.
Immanuel Kant brought focus to the theory of unknowability through his use of the concept of the noumenon. He postulated that, while we can know that the noumenal exists, it is not itself sensible and must therefore remain unknowable.
Modern inquiry encompasses undecidable problems and questions such as the halting problem, which by their very nature cannot possibly be answered. This area of study has a long and somewhat diffuse history, as the challenge arises in many areas of scholarly and practical investigation.
Rescher organizes unknowability into three major categories: logical unknowability, conceptual unknowability, and in-principle unknowability.
In-principle unknowability may also arise because answering a question would require more energy and matter than is available in the universe, or for fundamental reasons associated with the quantum nature of matter. In the physics of special and general relativity, the light cone marks the boundary of physically knowable events. [3] [4]
The halting problem – namely, the problem of determining whether an arbitrary computer program will ever finish running – is a prominent example of unknowability associated with the established mathematical field of computability theory. In 1936, Alan Turing proved that the halting problem is undecidable: there is no algorithm that can take a program as input and determine whether it will halt. In 1970, Yuri Matiyasevich proved that the Diophantine problem (closely related to Hilbert's tenth problem) is also undecidable, by showing that the halting problem can be reduced to it. [5] This means that there is no algorithm that can take a Diophantine equation as input and always determine whether it has a solution in integers.
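The standard proof is a diagonal argument. The sketch below, in Python, assumes the very decider whose existence is in question – the function halts here is hypothetical, not an implementable routine – and derives a contradiction from it:

```python
# Hypothetical decider assumed to exist; the argument shows it cannot.
def halts(program, input_data):
    """Assumed oracle: returns True iff program(input_data) terminates."""
    ...  # no such implementation can exist

def paradox(program):
    # Do the opposite of whatever halts() predicts for a program run on itself.
    if halts(program, program):
        while True:
            pass  # loop forever
    else:
        return  # halt immediately

# Feeding paradox to itself yields a contradiction:
#  - if halts(paradox, paradox) is True, then paradox(paradox) loops forever;
#  - if it is False, then paradox(paradox) halts.
# Either way halts() answered incorrectly, so no such function exists.
```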
The undecidability of the halting problem and the Diophantine problem has a number of implications for mathematics and computer science. For example, it means that there is no general algorithm for deciding whether a given mathematical statement is true or false. It also means that there is no general algorithm for finding solutions to Diophantine equations.
In principle, many problems can be shown to be undecidable by reduction from the halting problem; see the list of undecidable problems.
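As an illustration of such a reduction, the following sketch (a common textbook construction; run_program and prints_done are hypothetical names, not real library functions) shows that a decider for "does this program ever print DONE?" would yield a decider for halting, so the printing question must be undecidable as well:

```python
def reduce_halting_to_printing(program_source: str, input_data: str) -> str:
    """Return the source of a wrapper program that prints "DONE"
    if and only if program_source halts on input_data."""
    return (
        f"run_program({program_source!r}, {input_data!r})\n"  # hypothetical runner
        'print("DONE")\n'
    )

# If a decider prints_done(src) could tell whether a program ever prints
# "DONE", then prints_done(reduce_halting_to_printing(p, x)) would decide
# whether p halts on x -- contradicting the undecidability of the halting
# problem. Hence the printing question is undecidable too.
```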
Gödel's incompleteness theorems demonstrate the in-principle unknowability of methods to prove the consistency and completeness of foundational mathematical systems.
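Stated schematically (a standard modern formulation; T ranges over consistent, effectively axiomatized theories interpreting enough arithmetic):

```latex
% First incompleteness theorem: some sentence G_T is neither provable
% nor refutable in T.
\exists\, G_T :\quad T \nvdash G_T \quad\text{and}\quad T \nvdash \lnot G_T
% Second incompleteness theorem: T cannot prove its own consistency.
T \nvdash \mathrm{Con}(T)
```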
There are various gradations of unknowability associated with different frameworks of discussion.
Treatment of knowledge has been wide and diverse. Wikipedia itself is an initiative to capture and record knowledge using contemporary technological tools. Earlier attempts to capture and record knowledge include writing deep tracts on specific topics, as well as the use of encyclopedias to organize and summarize entire fields or even the entirety of human knowledge.
An associated topic that comes up frequently is that of the limits of knowledge.
Examples of scholarly discussions involving the limits of knowledge include the works of Gregory Chaitin, who addresses unknowability throughout his writings.
Popular discussion of unknowability grew with the use of the phrase "there are unknown unknowns" by United States Secretary of Defense Donald Rumsfeld at a news briefing on February 12, 2002. In addition to unknown unknowns, there are known unknowns and unknown knowns. These category labels have also appeared in discussions of the identification of chemical substances. [10] [11] [12]
Chaos theory is a theory of dynamics which argues that, for sufficiently complex systems, even if the initial conditions are known fairly well, measurement errors and computational limitations render fully correct long-term prediction impossible, guaranteeing the ultimate unknowability of the behavior of physical systems.
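A minimal numerical sketch of this sensitivity uses the logistic map x_{n+1} = r·x_n·(1 − x_n) with r = 4; the starting value and the 10^−10 offset below are arbitrary illustrative choices:

```python
# Sensitive dependence on initial conditions in the logistic map (r = 4).
def logistic(x: float, r: float = 4.0) -> float:
    return r * x * (1.0 - x)

x, y = 0.3, 0.3 + 1e-10   # two almost identical initial conditions
for n in range(60):
    x, y = logistic(x), logistic(y)
    if n % 10 == 9:
        print(f"step {n+1:2d}: separation = {abs(x - y):.3e}")
# The separation grows roughly exponentially until it saturates near 1,
# so any initial measurement error eventually swamps the prediction.
```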
In the computer science subfield of algorithmic information theory, a Chaitin constant or halting probability is a real number that, informally speaking, represents the probability that a randomly constructed program will halt. These numbers are formed from a construction due to Gregory Chaitin.
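In symbols (the standard definition, where F is a fixed prefix-free universal machine, |p| is the bit-length of program p, and F(p)↓ means that F halts on input p):

```latex
% Halting probability: each halting program p contributes 2^{-|p|}.
\Omega_F \;=\; \sum_{p \,:\, F(p)\downarrow} 2^{-|p|}
```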
In mathematics and computer science, the Entscheidungsproblem is a challenge posed by David Hilbert and Wilhelm Ackermann in 1928. It asks for an algorithm that considers an inputted statement and answers "yes" or "no" according to whether it is universally valid, i.e., valid in every structure. Such an algorithm was proven to be impossible by Alonzo Church and Alan Turing in 1936.
Gregory John Chaitin is an Argentine-American mathematician and computer scientist. Beginning in the late 1960s, Chaitin made contributions to algorithmic information theory and metamathematics, in particular a computer-theoretic result equivalent to Gödel's incompleteness theorem. He is considered to be one of the founders of what is today known as algorithmic complexity together with Andrei Kolmogorov and Ray Solomonoff. Along with the works of e.g. Solomonoff, Kolmogorov, Martin-Löf, and Leonid Levin, algorithmic information theory became a foundational part of theoretical computer science, information theory, and mathematical logic. It is a common subject in several computer science curricula. Besides computer scientists, Chaitin's work draws attention of many philosophers and mathematicians to fundamental problems in mathematical creativity and digital philosophy.
A Turing machine is a mathematical model of computation describing an abstract machine that manipulates symbols on a strip of tape according to a table of rules. Despite the model's simplicity, it is capable of implementing any computer algorithm.
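As a concrete sketch, a table-driven Turing machine simulator fits in a few lines of Python; the example machine below, which increments a binary number, is an illustrative choice rather than anything canonical:

```python
# A minimal Turing machine simulator. The example machine increments a
# binary number written on the tape ('_' is the blank symbol).
def run_tm(rules, tape_str, state="right", blank="_", max_steps=1000):
    tape = dict(enumerate(tape_str))      # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, blank)
        state, write, move = rules[(state, symbol)]
        tape[head] = write
        head += move
    cells = range(min(tape), max(tape) + 1)
    return "".join(tape.get(i, blank) for i in cells).strip(blank)

# Rules: (state, read) -> (next_state, write, head_move)
increment_rules = {
    ("right", "0"): ("right", "0", 1),   # scan right to the end...
    ("right", "1"): ("right", "1", 1),
    ("right", "_"): ("carry", "_", -1),  # ...then turn around
    ("carry", "1"): ("carry", "0", -1),  # propagate the carry leftward
    ("carry", "0"): ("halt",  "1", 0),   # absorb the carry and stop
    ("carry", "_"): ("halt",  "1", 0),   # overflow: new leading 1
}

print(run_tm(increment_rules, "1011"))   # -> "1100"
```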
Gödel's incompleteness theorems are two theorems of mathematical logic that are concerned with the limits of provability in formal axiomatic theories. These results, published by Kurt Gödel in 1931, are important both in mathematical logic and in the philosophy of mathematics. The theorems are widely, but not universally, interpreted as showing that Hilbert's program to find a complete and consistent set of axioms for all mathematics is impossible.
In mathematics, a Diophantine equation is an equation of the form P(x1, ..., xj, y1, ..., yk) = 0 (usually abbreviated P(x, y) = 0) where P(x, y) is a polynomial with integer coefficients, where x1, ..., xj indicate parameters and y1, ..., yk indicate unknowns.
Hilbert's tenth problem is the tenth on the list of mathematical problems that the German mathematician David Hilbert posed in 1900. It is the challenge to provide a general algorithm that, for any given Diophantine equation, can decide whether the equation has a solution with all unknowns taking integer values.
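The asymmetry behind the problem can be made concrete: a search can confirm a solution when one exists, but its failure to terminate so far proves nothing. A minimal sketch (the enumeration order and growing search box are arbitrary illustrative choices):

```python
from itertools import count, product

def search_solution(poly, num_vars):
    """Enumerate integer tuples in growing boxes, returning the first
    zero of poly. Halts iff a solution exists -- a semi-decision
    procedure, not an algorithm for Hilbert's tenth problem."""
    for bound in count(0):
        for ys in product(range(-bound, bound + 1), repeat=num_vars):
            if poly(*ys) == 0:
                return ys

# Example: x^2 + y^2 - 25 = 0 has integer solutions, so the search halts.
print(search_solution(lambda x, y: x*x + y*y - 25, 2))  # -> (-4, -3)
```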
Foundations of mathematics are the logical and mathematical framework that allows the development of mathematics without generating self-contradictory theories, and, in particular, to have reliable concepts of theorems, proofs, algorithms, etc. This may also include the philosophical study of the relation of this framework with reality.
Hypercomputation or super-Turing computation is a set of hypothetical models of computation that can provide outputs that are not Turing-computable. For example, a machine that could solve the halting problem would be a hypercomputer; so too would one that could correctly evaluate every statement in Peano arithmetic.
Julia Hall Bowman Robinson was an American mathematician noted for her contributions to the fields of computability theory and computational complexity theory—most notably in decision problems. Her work on Hilbert's tenth problem played a crucial role in its ultimate resolution. Robinson was a 1983 MacArthur Fellow.
The Latin maxim ignoramus et ignorabimus, meaning "we do not know and will not know", represents the idea that scientific knowledge is limited. It was popularized by Emil du Bois-Reymond, a German physiologist, in his 1872 address "Über die Grenzen des Naturerkennens".
Algorithmic information theory (AIT) is a branch of theoretical computer science that concerns itself with the relationship between computation and the information content of computably generated objects, such as strings or any other data structure. Within algorithmic information theory, it is shown that computational incompressibility "mimics" the relations or inequalities found in information theory. According to Gregory Chaitin, it is "the result of putting Shannon's information theory and Turing's computability theory into a cocktail shaker and shaking vigorously."
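Kolmogorov complexity itself is uncomputable, but any compressor gives a computable upper bound on it, which is enough to illustrate the incompressibility idea. A minimal sketch using Python's standard zlib module (the choice of compressor is incidental):

```python
import os
import zlib

def compressed_size(data: bytes) -> int:
    """Computable upper bound on the algorithmic information in data."""
    return len(zlib.compress(data, level=9))

structured = b"abc" * 1000          # highly regular: a short program generates it
random_ish = os.urandom(3000)       # incompressible with overwhelming probability

print(compressed_size(structured))  # small: tens of bytes
print(compressed_size(random_ish))  # roughly 3000: no pattern to exploit
```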
Originally, fallibilism is the philosophical principle that propositions can be accepted even though they cannot be conclusively proven or justified, or that neither knowledge nor belief is certain. The term was coined in the late nineteenth century by the American philosopher Charles Sanders Peirce, as a response to foundationalism. Theorists, following Austrian-British philosopher Karl Popper, may also refer to fallibilism as the notion that knowledge might turn out to be false. Furthermore, fallibilism is said to imply corrigibilism, the principle that propositions are open to revision. Fallibilism is often juxtaposed with infallibilism.
In mathematics, an impossibility theorem is a theorem that demonstrates a problem or general set of problems cannot be solved. These are also known as proofs of impossibility, negative proofs, or negative results. Impossibility theorems often resolve decades or centuries of work spent looking for a solution by proving there is no solution. Proving that something is impossible is usually much harder than the opposite task, as it is often necessary to develop a proof that works in general, rather than to just show a particular example. Impossibility theorems are usually expressible as negative existential propositions or universal propositions in logic.
Francisco Antônio de Moraes Accioli Dória is a Brazilian mathematician, philosopher, and genealogist. Francisco Antônio Dória received his B.S. in Chemical Engineering from the Federal University of Rio de Janeiro (UFRJ), Brazil, in 1968 and then received his doctorate from the Brazilian Center for Research in Physics (CBPF), advised by Leopoldo Nachbin, in 1977. Dória worked for a while at the Physics Institute of UFRJ, and then left to become a Professor of the Foundations of Communications at the School of Communications, also at UFRJ. Dória held visiting positions at the University of Rochester (NY), Stanford University (CA), and the University of São Paulo (USP). His most prolific period stemmed from his collaboration with Newton da Costa, a Brazilian logician and one of the founders of paraconsistent logic, which began in 1985. He is currently Professor of Communications, Emeritus, at UFRJ and a member of the Brazilian Academy of Philosophy.
Epistemology or theory of knowledge is the branch of philosophy concerned with the nature and scope (limitations) of knowledge. It addresses the questions "What is knowledge?", "How is knowledge acquired?", "What do people know?", "How do we know what we know?", and "Why do we know what we know?". Much of the debate in this field has focused on analyzing the nature of knowledge and how it relates to similar notions such as truth, belief, and justification. It also deals with the means of production of knowledge, as well as skepticism about different knowledge claims.
In computability theory and computational complexity theory, an undecidable problem is a decision problem for which it is proved to be impossible to construct an algorithm that always leads to a correct yes-or-no answer. The halting problem is an example: it can be proven that there is no algorithm that correctly determines whether an arbitrary program eventually halts when run.
A timeline of mathematical logic; see also history of logic.
In computability theory, the halting problem is the problem of determining, from a description of an arbitrary computer program and an input, whether the program will finish running, or continue to run forever. The halting problem is undecidable, meaning that no general algorithm exists that solves the halting problem for all possible program–input pairs. The problem comes up often in discussions of computability since it demonstrates that some functions are mathematically definable but not computable.