Unknowability

In philosophy, unknowability is the possibility of inherently inaccessible knowledge. It addresses the epistemology of that which we cannot know. Related concepts include the halting problem, the limits of knowledge, the unknown unknowns, and chaos theory.

Nicholas Rescher provides the most recent focused scholarship in this area in Unknowability: An Inquiry into the Limits of Knowledge, [1] where he offers three high-level categories: logical unknowability, conceptual unknowability, and in-principle unknowability.

Background

Speculation about what is knowable and unknowable has been part of the philosophical tradition since the inception of philosophy. In particular, Baruch Spinoza's Theory of Attributes [2] argues that a human's finite mind cannot understand infinite substance; accordingly, infinite substance, as it is in itself, is in-principle unknowable to the finite mind.

Immanuel Kant brought focus to the theory of unknowability with his concept of the noumenon. He postulated that, while we can know that the noumenal exists, it is not itself sensible and must therefore remain unknowable.

Modern inquiry encompasses undecidable problems, such as the halting problem, which by their very nature cannot possibly be answered. This area of study has a long and somewhat diffuse history, as the challenge arises in many areas of scholarly and practical investigation.

Rescher's categories of unknowability

Rescher organizes unknowability into three major categories: logical unknowability, conceptual unknowability, and in-principle unknowability.

In-principle unknowability may be due to a need for more energy and matter than is available in the universe to answer a question, or to fundamental features of the quantum nature of matter. In the physics of special and general relativity, the light cone marks the boundary of physically knowable events. [3] [4]
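
The light-cone boundary can be made concrete: in units with c = 1, one event can influence, and hence be known at, another only if the Minkowski interval between them is non-negative and the first event is not later than the second. The following is a minimal illustrative sketch (the event coordinates and function names are ours, not from the cited sources):

```python
# Classifying event separations in special relativity (units with c = 1).
# An event e1 can in principle be known at event e2 only if e1 lies on
# or inside the past light cone of e2.

def interval_squared(e1, e2):
    """Minkowski interval s^2 = dt^2 - dx^2 - dy^2 - dz^2 for events (t, x, y, z)."""
    dt, dx, dy, dz = (b - a for a, b in zip(e1, e2))
    return dt**2 - (dx**2 + dy**2 + dz**2)

def knowable_at(e1, e2):
    """True if e1 can causally influence (be knowable at) e2."""
    return e2[0] >= e1[0] and interval_squared(e1, e2) >= 0

# A flash 5 light-seconds away is unknowable 3 seconds later ...
print(knowable_at((0, 5, 0, 0), (3, 0, 0, 0)))   # False: spacelike separation
# ... but knowable 6 seconds later, once light has had time to arrive.
print(knowable_at((0, 5, 0, 0), (6, 0, 0, 0)))   # True: timelike separation
```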

The halting problem

The halting problem – namely, the problem of determining whether an arbitrary computer program will ever finish running – is a prominent example of unknowability associated with the established mathematical field of computability theory. In 1936, Alan Turing proved that the halting problem is undecidable: no algorithm can take as input a program together with its input and determine whether that program will eventually halt. In 1970, Yuri Matiyasevich proved that the Diophantine problem (closely related to Hilbert's tenth problem) is also undecidable, by reducing the halting problem to it. [5] This means that there is no algorithm that can take as input a Diophantine equation and determine whether it has a solution in integers.
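
Turing's proof is a diagonal argument: assume a halting decider exists and derive a contradiction. The sketch below is illustrative only; `halts` stands for the assumed decider, which is precisely the function that cannot be implemented:

```python
# Sketch of Turing's diagonal argument (1936). Assume, for
# contradiction, a total function halts(program, data) that always
# answers correctly. The placeholder body marks the assumption.

def halts(program, data):
    """Hypothetical halting decider; no such algorithm can exist."""
    raise NotImplementedError("assumed for contradiction")

def paradox(program):
    """Halts iff `program` does NOT halt when fed its own source."""
    if halts(program, program):
        while True:       # run forever when `program` would halt
            pass
    # otherwise halt immediately

# paradox(paradox) halts if and only if it does not halt -- a
# contradiction, so the assumed decider `halts` cannot exist.
```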

The undecidability of the halting problem and the Diophantine problem has a number of implications for mathematics and computer science. For example, it means that there is no general algorithm for deciding whether a given mathematical statement is true or false. It also means that there is no general algorithm for finding solutions to Diophantine equations.

Many problems can be shown to be undecidable by a reduction from the halting problem; see the list of undecidable problems. A reduction of this kind is sketched below.
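
As a hedged illustration of such a reduction (the interpreter `run` and the target property are hypothetical, chosen only for this sketch), consider the question "does this program ever print anything?":

```python
# Sketch: reducing the halting problem to the question "does this
# program ever print anything?". `run` stands for a hypothetical
# interpreter for the programs being discussed.

def make_printer(program, data):
    """Return a program that prints iff `program` halts on `data`."""
    def wrapped():
        run(program, data)    # hypothetical call; may never return
        print("done")         # reached exactly when the run finishes
    return wrapped

# If some algorithm ever_prints(p) decided the printing question, then
# ever_prints(make_printer(program, data)) would decide halting --
# contradicting Turing's theorem. Hence "ever prints" is undecidable
# too (a special case of Rice's theorem).
```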

Gödel's incompleteness theorems demonstrate an in-principle unknowability: a sufficiently strong foundational mathematical system, if consistent, is incomplete and cannot prove its own consistency.

There are various gradations of unknowability associated with different frameworks of discussion.

Treatment of knowledge has been wide and diverse. Wikipedia itself is an initiative to capture and record knowledge using contemporary technological tools. Earlier attempts to capture and record knowledge include writing deep tracts on specific topics as well as using encyclopedias to organize and summarize entire fields or even the entirety of human knowledge.

Limits of knowledge

An associated topic that comes up frequently is that of the limits of knowledge.

Examples of scholarly discussions involving the limits of knowledge include John Horgan's The End of Science, [6] Morton Tavel's Contemporary Physics and the Limits of Knowledge, [7] and Christopher Cherniak's "Limits for knowledge". [8]

Gregory Chaitin discusses unknowability in many of his works.

Categories of unknowns

Popular discussion of unknowability grew with the use of the phrase "there are unknown unknowns" by United States Secretary of Defense Donald Rumsfeld at a news briefing on February 12, 2002. In addition to unknown unknowns, there are known unknowns and unknown knowns. These category labels have also appeared in discussions of the identification of chemical substances. [10] [11] [12]

Chaos theory

Chaos theory is a theory of dynamics which shows that, for sufficiently complex systems, even when the initial conditions are known fairly well, measurement errors and computational limitations make fully correct long-term prediction impossible, guaranteeing the ultimate unknowability of such systems' behavior.
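
A standard illustration (ours, not taken from the article's sources) is the logistic map x_{n+1} = r * x_n * (1 - x_n) at r = 4: two trajectories whose starting points differ by one part in a billion become completely uncorrelated within a few dozen steps.

```python
# Sensitive dependence on initial conditions in the logistic map
# x_{n+1} = r * x_n * (1 - x_n) with r = 4 (a fully chaotic regime).

def orbit(x0, steps, r=4.0):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = orbit(0.300000000, 60)    # "true" initial condition
b = orbit(0.300000001, 60)    # the same, measured with a 1e-9 error

for n in (0, 15, 30, 45, 60):
    print(f"n={n:2d}  |a-b| = {abs(a[n] - b[n]):.2e}")
# The separation grows roughly exponentially until it saturates; past
# n ~ 40 the two trajectories are unrelated, so the tiny measurement
# error destroys any long-term prediction.
```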

Related Research Articles

In mathematics and computer science, the Entscheidungsproblem is a challenge posed by David Hilbert and Wilhelm Ackermann in 1928. The problem asks for an algorithm that considers, as input, a statement and answers "yes" or "no" according to whether the statement is universally valid, i.e., valid in every structure satisfying the axioms.

<span class="mw-page-title-main">Gregory Chaitin</span> Argentine-American mathematician

Gregory John Chaitin is an Argentine-American mathematician and computer scientist. Beginning in the late 1960s, Chaitin made contributions to algorithmic information theory and metamathematics, in particular a computer-theoretic result equivalent to Gödel's incompleteness theorem. Together with Andrei Kolmogorov and Ray Solomonoff, he is considered one of the founders of what is today known as algorithmic (Kolmogorov) complexity. Through the work of Solomonoff, Kolmogorov, Per Martin-Löf, and Leonid Levin, among others, algorithmic information theory became a foundational part of theoretical computer science, information theory, and mathematical logic. It is a common subject in several computer science curricula. Beyond computer science, Chaitin's work draws the attention of many philosophers and mathematicians to fundamental problems in mathematical creativity and digital philosophy.

<span class="mw-page-title-main">Turing machine</span> Computation model defining an abstract machine

A Turing machine is a mathematical model of computation describing an abstract machine that manipulates symbols on a strip of tape according to a table of rules. Despite the model's simplicity, it is capable of implementing any computer algorithm.
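
A few lines of code make the model concrete. The simulator and the example machine below are our own illustration (the machine computes the unary successor of a number written as a block of 1s), not part of the article:

```python
# A minimal Turing machine simulator. `rules` maps (state, symbol) to
# (symbol_to_write, head_move, next_state).

def run_tm(rules, tape, state="start", blank=" ", halt="halt"):
    cells, head = dict(enumerate(tape)), 0
    while state != halt:
        write, move, state = rules[(state, cells.get(head, blank))]
        cells[head] = write
        head += 1 if move == "R" else -1
    span = range(min(cells), max(cells) + 1)
    return "".join(cells.get(i, blank) for i in span).strip()

# Example machine: unary successor -- skip the 1s, append one more.
successor = {
    ("start", "1"): ("1", "R", "start"),
    ("start", " "): ("1", "R", "halt"),
}

print(run_tm(successor, "111"))   # -> 1111
```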

Gödel's incompleteness theorems are two theorems of mathematical logic that are concerned with the limits of provability in formal axiomatic theories. These results, published by Kurt Gödel in 1931, are important both in mathematical logic and in the philosophy of mathematics. The theorems are widely, but not universally, interpreted as showing that Hilbert's program to find a complete and consistent set of axioms for all mathematics is impossible.

In mathematics, a Diophantine equation is an equation of the form P(x1, ..., xj, y1, ..., yk) = 0 (usually abbreviated P(x, y) = 0) where P(x, y) is a polynomial with integer coefficients, where x1, ..., xj indicate parameters and y1, ..., yk indicate unknowns.
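
Undecidability does not rule out searching for solutions; it rules out knowing when to stop. The sketch below (illustrative code; the example equations are ours) enumerates integer assignments in growing boxes. It finds a solution whenever one exists, but on an unsolvable equation it runs forever, and by Matiyasevich's theorem no general bound on the search is computable:

```python
from itertools import count, product

def diophantine_search(p, nvars):
    """Return integers y with p(*y) == 0; loops forever if none exist."""
    for bound in count(0):
        for y in product(range(-bound, bound + 1), repeat=nvars):
            # visit only assignments on the boundary of the new box
            if max(map(abs, y)) == bound and p(*y) == 0:
                return y

# x^2 + y^2 = 25 has integer solutions, found quickly:
print(diophantine_search(lambda x, y: x*x + y*y - 25, 2))   # (-5, 0)
# x^2 + y^2 = -1 has none; this call would never return:
# diophantine_search(lambda x, y: x*x + y*y + 1, 2)
```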

Hilbert's tenth problem is the tenth on the list of mathematical problems that the German mathematician David Hilbert posed in 1900. It is the challenge to provide a general algorithm that, for any given Diophantine equation, can decide whether the equation has a solution with all unknowns taking integer values.

<span class="mw-page-title-main">Hilbert's problems</span> 23 mathematical problems stated in 1900

Hilbert's problems are 23 problems in mathematics published by German mathematician David Hilbert in 1900. They were all unsolved at the time, and several proved to be very influential for 20th-century mathematics. Hilbert presented ten of the problems at the Paris conference of the International Congress of Mathematicians, speaking on August 8 at the Sorbonne. The complete list of 23 problems was published later, in English translation in 1902 by Mary Frances Winston Newson in the Bulletin of the American Mathematical Society. Earlier publications appeared in Archiv der Mathematik und Physik.

Foundations of mathematics is the study of the philosophical, logical, and/or algorithmic basis of mathematics or, in a broader sense, the mathematical investigation of what underlies philosophical theories concerning the nature of mathematics. In this latter sense, the distinction between foundations of mathematics and philosophy of mathematics turns out to be vague. Foundations of mathematics can be conceived as the study of basic mathematical concepts and how they form hierarchies of more complex structures and concepts, especially the fundamentally important structures that form the language of mathematics (also called metamathematical concepts), with an eye to the philosophical aspects and the unity of mathematics. The search for foundations of mathematics is a central question of the philosophy of mathematics; the abstract nature of mathematical objects presents special philosophical challenges.

Hypercomputation or super-Turing computation is a set of models of computation that can provide outputs that are not Turing-computable. For example, a machine that could solve the halting problem would be a hypercomputer; so too would one that can correctly evaluate every statement in Peano arithmetic.

<span class="mw-page-title-main">Julia Robinson</span> American mathematician (1919–1985)

Julia Hall Bowman Robinson was an American mathematician noted for her contributions to the fields of computability theory and computational complexity theory—most notably in decision problems. Her work on Hilbert's tenth problem played a crucial role in its ultimate resolution. Robinson was a 1983 MacArthur Fellow.

Ignoramus et ignorabimus (maxim about the limits of scientific knowledge)

The Latin maxim ignoramus et ignorabimus, meaning "we do not know and will not know", represents the idea that scientific knowledge is limited. It was popularized by Emil du Bois-Reymond, a German physiologist, in his 1872 address "Über die Grenzen des Naturerkennens".

Algorithmic information theory (AIT) is a branch of theoretical computer science that concerns itself with the relationship between computation and information of computably generated objects (as opposed to stochastically generated), such as strings or any other data structure. In other words, it is shown within algorithmic information theory that computational incompressibility "mimics" (except for a constant that only depends on the chosen universal programming language) the relations or inequalities found in information theory. According to Gregory Chaitin, it is "the result of putting Shannon's information theory and Turing's computability theory into a cocktail shaker and shaking vigorously."
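
Kolmogorov–Chaitin complexity is itself uncomputable, but the length of any lossless compression of a string is a computable upper bound on it (up to an additive constant), which gives a crude feel for incompressibility. A hedged sketch using Python's zlib:

```python
import random
import zlib

# The compressed length of a string upper-bounds its algorithmic
# information content (up to a constant depending on the compressor).
# Patterned data compresses far below its raw length; typical random
# data barely compresses at all -- it is incompressible in this sense.

def compressed_len(data: bytes) -> int:
    return len(zlib.compress(data, 9))

patterned = b"ab" * 500                                    # 1000 bytes
random.seed(0)
noisy = bytes(random.randrange(256) for _ in range(1000))  # 1000 bytes

print(compressed_len(patterned))   # small: the pattern is a short "program"
print(compressed_len(noisy))       # close to 1000: no shorter description
```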

<span class="mw-page-title-main">Fallibilism</span> Philosophical principle

Originally, fallibilism is the philosophical principle that propositions can be accepted even though they cannot be conclusively proven or justified, or that neither knowledge nor belief is certain. The term was coined in the late nineteenth century by the American philosopher Charles Sanders Peirce, as a response to foundationalism. Theorists, following Austrian-British philosopher Karl Popper, may also refer to fallibilism as the notion that knowledge might turn out to be false. Furthermore, fallibilism is said to imply corrigibilism, the principle that propositions are open to revision. Fallibilism is often juxtaposed with infallibilism.

In mathematics, a proof of impossibility is a proof that demonstrates that a particular problem cannot be solved as described in the claim, or that a particular set of problems cannot be solved in general. Such a case is also known as a negative proof, proof of an impossibility theorem, or negative result. Proofs of impossibility often are the resolutions to decades or centuries of work attempting to find a solution, eventually proving that there is no solution. Proving that something is impossible is usually much harder than the opposite task, as it is often necessary to develop a proof that works in general, rather than to just show a particular example. Impossibility theorems are usually expressible as negative existential propositions or universal propositions in logic.

The history of the Church–Turing thesis ("thesis") involves the history of the development of the study of the nature of functions whose values are effectively calculable; or, in more modern terms, functions whose values are algorithmically computable. It is an important topic in modern mathematical theory and computer science, particularly associated with the work of Alonzo Church and Alan Turing.

In computability theory and computational complexity theory, an undecidable problem is a decision problem for which it is proved to be impossible to construct an algorithm that always leads to a correct yes-or-no answer. The halting problem is an example: it can be proven that there is no algorithm that correctly determines whether an arbitrary program eventually halts when run.

A timeline of mathematical logic; see also history of logic.

In computability theory, the halting problem is the problem of determining, from a description of an arbitrary computer program and an input, whether the program will finish running, or continue to run forever. The halting problem is undecidable, meaning that no general algorithm exists that solves the halting problem for all possible program–input pairs.

References

  1. Rescher, Nicholas. Unknowability: An Inquiry into the Limits of Knowledge. Lexington Books, 2009. https://www.worldcat.org/title/298538038
  2. "Spinoza's Theory of Attributes". The Stanford Encyclopedia of Philosophy. Metaphysics Research Lab, Stanford University. 2018.
  3. Putnam, Hilary. "Time and Physical Geometry". The Journal of Philosophy 64 (8), April 27, 1967, pp. 240–247. https://www.jstor.org/stable/2024493 https://doi.org/10.2307/2024493
  4. Myers, John M.; Madjid, F. Hadi. "Logical synchronization: how evidence and hypotheses steer atomic clocks". Proc. SPIE 9123, Quantum Information and Computation XII, 91230T (22 May 2014). https://doi.org/10.1117/12.2054945
  5. Matiyasevich, Yuri V. Hilbert's Tenth Problem. MIT Press, 1993. https://www.worldcat.org/title/28424180
  6. Horgan, John. The End of Science: Facing the Limits of Knowledge in the Twilight of the Scientific Age. Addison-Wesley, 1996. https://www.worldcat.org/title/34076685
  7. Tavel, Morton. Contemporary Physics and the Limits of Knowledge. Rutgers University Press, 2002. https://www.worldcat.org/title/47838409
  8. Cherniak, Christopher. "Limits for knowledge". Philosophical Studies: An International Journal for Philosophy in the Analytic Tradition 49 (1), 1986, pp. 1–18. https://www.jstor.org/stable/4319805
  9. Hilbert, David (1902). "Mathematical Problems: Lecture Delivered before the International Congress of Mathematicians at Paris in 1900". Bulletin of the American Mathematical Society. 8: 437–479. doi:10.1090/S0002-9904-1902-00923-3. MR 1557926.
  10. Little, James L. (2011). "Identification of "known unknowns" utilizing accurate mass data and ChemSpider". Journal of the American Society for Mass Spectrometry. 23 (1): 179–185. doi:10.1007/s13361-011-0265-y. PMID 22069037.
  11. McEachran, Andrew D.; Sobus, Jon R.; Williams, Antony J. (2016). "Identifying known unknowns using the US EPA's CompTox Chemistry Dashboard". Analytical and Bioanalytical Chemistry. 409 (7): 1729–1735. doi:10.1007/s00216-016-0139-z. PMID 27987027. S2CID 31754962.
  12. Schymanski, Emma L.; Williams, Antony J. (2017). "Open Science for Identifying "Known Unknown" Chemicals". Environmental Science and Technology. 51 (10): 5357–5359. Bibcode:2017EnST...51.5357S. doi:10.1021/acs.est.7b01908. PMC 6260822. PMID 28475325.

Further reading