Mechanism is the belief that natural wholes (principally living things) are like complicated machines or artefacts, composed of parts lacking any intrinsic relationship to each other.
The doctrine of mechanism in philosophy comes in two flavors. Both are doctrines of metaphysics, but they differ in scope and ambition: the first is a global doctrine about nature; the second is a local doctrine about humans and their minds, which is hotly contested. For clarity, we might distinguish these two doctrines as universal mechanism and anthropic mechanism.
Philosophy is the study of general and fundamental questions about existence, knowledge, values, reason, mind, and language. Such questions are often posed as problems to be studied or resolved. The term was probably coined by Pythagoras. Philosophical methods include questioning, critical discussion, rational argument, and systematic presentation. Classic philosophical questions include: Is it possible to know anything and to prove it? What is most real? Philosophers also pose more practical and concrete questions such as: Is there a best way to live? Is it better to be just or unjust? Do humans have free will?
Metaphysics is the branch of philosophy that examines the fundamental nature of reality, including the relationship between mind and matter, between substance and attribute, and between potentiality and actuality. The word "metaphysics" comes from two Greek words that, together, literally mean "after or behind or among [the study of] the natural". It has been suggested that the term might have been coined by a first century CE editor who assembled various small selections of Aristotle’s works into the treatise we now know by the name Metaphysics.
There is no constant meaning in the history of philosophy for the word Mechanism. Originally, the term meant that cosmological theory which ascribes the motion and changes of the world to some external force. In this view material things are purely passive, while according to the opposite theory (i.e., Dynamism), they possess certain internal sources of energy which account for the activity of each and for its influence on the course of events. These meanings, however, soon underwent modification. The question as to whether motion is an inherent property of bodies, or has been communicated to them by some external agency, was very often ignored. With a large number of cosmologists the essential feature of Mechanism is the attempt to reduce all the qualities and activities of bodies to quantitative realities, i.e., to mass and motion. But a further modification soon followed. Living bodies, as is well known, present at first sight certain characteristic properties which have no counterpart in lifeless matter. Mechanism aims to go beyond these appearances. It seeks to explain all "vital" phenomena as physical and chemical facts; whether or not these facts are in turn reducible to mass and motion becomes a secondary question, although Mechanists are generally inclined to favour such reduction. The theory opposed to this biological mechanism is no longer Dynamism, but Vitalism or Neo-vitalism, which maintains that vital activities cannot be explained, and never will be explained, by the laws which govern lifeless matter.— "Mechanism" in Catholic Encyclopedia (1913)
The older doctrine, here called universal mechanism, has its roots in the ancient philosophies closely linked with materialism and reductionism, especially those of the atomists and, to a large extent, Stoic physics. They held that the universe is reducible to completely mechanical principles—that is, the motion and collision of matter. Later mechanists believed the achievements of the scientific revolution had shown that all phenomena could eventually be explained in terms of 'mechanical' laws: natural laws governing the motion and collision of matter that implied a thoroughgoing determinism. If all phenomena could be explained entirely through the motion of matter under the laws of classical physics, then, even more surely than the gears of a clock determine that it must strike 2:00 an hour after striking 1:00, all phenomena must be completely determined, whether past, present or future. (One of the philosophical implications of modern quantum mechanics is that this view of determinism is not defensible.)
Materialism is a form of philosophical monism which holds that matter is the fundamental substance in nature, and that all things, including mental states and consciousness, are results of material interactions. According to philosophical materialism, mind and consciousness are by-products or epiphenomena of material processes without which they cannot exist. This concept directly contrasts with idealism, where mind and consciousness are first-order realities to which matter is subject and material interactions are secondary.
Reductionism is any of several related philosophical ideas regarding the associations between phenomena which can be described in terms of other simpler or more fundamental phenomena.
Atomism is a natural philosophy that developed in several ancient traditions.
The French mechanist and determinist Pierre Simon de Laplace formulated the sweeping implications of this thesis by saying:
We may regard the present state of the universe as the effect of the past and the cause of the future. An intellect which at any given moment knew all of the forces that animate nature and the mutual positions of the beings that compose it, if this intellect were vast enough to submit the data to analysis, could condense into a single formula the movement of the greatest bodies of the universe and that of the lightest atom; for such an intellect nothing could be uncertain and the future just like the past would be present before its eyes.— Pierre Simon Laplace, A Philosophical Essay on Probabilities
One of the first and most famous expositions of universal mechanism is found in the opening passages of Leviathan by Thomas Hobbes (1651). What is less frequently appreciated is that René Descartes was a staunch mechanist, though today, in the philosophy of mind, he is remembered for introducing the mind–body problem in terms of dualism and physicalism.
Leviathan or The Matter, Forme and Power of a Common-Wealth Ecclesiasticall and Civil—commonly referred to as Leviathan—is a book written by Thomas Hobbes (1588–1679) and published in 1651. Its name derives from the biblical Leviathan. The work concerns the structure of society and legitimate government, and is regarded as one of the earliest and most influential examples of social contract theory. Leviathan ranks as a classic Western work on statecraft comparable to Machiavelli's The Prince. Written during the English Civil War (1642–1651), Leviathan argues for a social contract and rule by an absolute sovereign. Hobbes wrote that civil war and the brute situation of a state of nature could only be avoided by strong, undivided government.
Thomas Hobbes, in some older texts Thomas Hobbes of Malmesbury, was an English philosopher, considered to be one of the founders of modern political philosophy. Hobbes is best known for his 1651 book Leviathan, which expounded an influential formulation of social contract theory. In addition to political philosophy, Hobbes also contributed to a diverse array of other fields, including history, jurisprudence, geometry, the physics of gases, theology, ethics, and general philosophy.
René Descartes was a French philosopher, mathematician, and scientist. A native of the Kingdom of France, he spent about 20 years (1629–1649) of his life in the Dutch Republic after serving for a while in the Dutch States Army of Maurice of Nassau, Prince of Orange and the Stadtholder of the United Provinces. One of the most notable intellectual figures of the Dutch Golden Age, Descartes is also widely regarded as one of the founders of modern philosophy.
Descartes was a substance dualist, and argued that reality was composed of two radically different types of substance: extended matter, on the one hand, and immaterial mind, on the other. Descartes argued that one cannot explain the conscious mind in terms of the spatial dynamics of mechanistic bits of matter cannoning off each other. Nevertheless, his understanding of biology was thoroughly mechanistic in nature:
His scientific work was based on the traditional mechanistic understanding that animals and humans are completely mechanistic automata. Descartes' dualism was motivated by the seeming impossibility that mechanical dynamics could yield mental experiences.
Isaac Newton ushered in a much weaker acceptation of mechanism that tolerated the antithetical, and as yet inexplicable, action at a distance of gravity. However, his work seemed to successfully predict the motion of both celestial and terrestrial bodies according to that principle, and the generation of philosophers who were inspired by Newton's example carried the mechanist banner nonetheless. Chief among them were French philosophers such as Julien Offray de La Mettrie and Denis Diderot (see also: French materialism).
The thesis in anthropic mechanism is not that everything can be completely explained in mechanical terms (although some anthropic mechanists may also believe that), but rather that everything about human beings can be completely explained in mechanical terms, as surely as can everything about clocks or the internal combustion engine.
One of the chief obstacles that all mechanistic theories have faced is providing a mechanistic explanation of the human mind; Descartes, for one, endorsed dualism in spite of endorsing a completely mechanistic conception of the material world because he argued that mechanism and the notion of a mind were logically incompatible. Hobbes, on the other hand, conceived of the mind and the will as purely mechanistic, completely explicable in terms of the effects of perception and the pursuit of desire, which in turn he held to be completely explicable in terms of the materialistic operations of the nervous system. Following Hobbes, other mechanists argued for a thoroughly mechanistic explanation of the mind, with one of the most influential and controversial expositions of the doctrine being offered by Julien Offray de La Mettrie in his Man a Machine (1748).
The debate between anthropic mechanists and anti-mechanists is mainly occupied with two topics: the mind—consciousness, in particular—and free will. Anti-mechanists argue that anthropic mechanism is incompatible with our commonsense intuitions: in philosophy of mind they argue that if matter is devoid of mental properties, then the phenomenon of consciousness cannot be explained by mechanistic principles acting on matter. In metaphysics they argue that anthropic mechanism implies determinism about human action, which is incompatible with our experience of free will. Contemporary philosophers who have argued for this position include Norman Malcolm and David Chalmers.
Anthropic mechanists typically respond in one of two ways. In the first, they agree with anti-mechanists that mechanism conflicts with some of our commonsense intuitions, but go on to argue that our commonsense intuitions are simply mistaken and need to be revised. Down this path lies eliminative materialism in philosophy of mind, and hard determinism on the question of free will. This option is accepted by the eliminative materialist philosopher Paul Churchland. Some have questioned how eliminative materialism is compatible with the freedom of will apparently required for anyone (including its adherents) to make truth claims.

The second option, common amongst philosophers who adopt anthropic mechanism, is to argue that the arguments given for incompatibility are specious: whatever it is we mean by "consciousness" and "free will," they urge, it is fully compatible with a mechanistic understanding of the human mind and will. As a result, such philosophers tend to argue for one or another non-eliminativist physicalist theory of mind, and for compatibilism on the question of free will. Contemporary philosophers who have argued for this sort of account include J. J. C. Smart and Daniel Dennett.
Some scholars have debated over what, if anything, Gödel's incompleteness theorems imply about anthropic mechanism. Much of the debate centers on whether the human mind is equivalent to a Turing machine, or by the Church-Turing thesis, any finite machine at all. If it is, and if the machine is consistent, then Gödel's incompleteness theorems would apply to it.
Gödelian arguments claim that a system of human mathematicians (or some idealization of human mathematicians) is both consistent and powerful enough to recognize its own consistency. Since this is impossible for a Turing machine, the Gödelian concludes that human reasoning must be non-mechanical.
However, the modern consensus in the scientific and mathematical community is that actual human reasoning is inconsistent; that any consistent "idealized version" H of human reasoning would logically be forced to adopt a healthy but counter-intuitive open-minded skepticism about the consistency of H (otherwise H is provably inconsistent); and that Gödel's theorems do not lead to any valid argument against mechanism. This consensus that Gödelian anti-mechanist arguments are doomed to failure is laid out strongly in Artificial Intelligence: "any attempt to utilize (Gödel's incompleteness results) to attack the computationalist thesis is bound to be illegitimate, since these results are quite consistent with the computationalist thesis."
One of the earliest attempts to use incompleteness to reason about human intelligence was by Gödel himself in his 1951 Gibbs Lecture entitled "Some basic theorems on the foundations of mathematics and their philosophical implications". In this lecture, Gödel uses the incompleteness theorem to arrive at the following disjunction: (a) the human mind is not a consistent finite machine, or (b) there exist Diophantine equations for which it cannot decide whether solutions exist. Gödel finds (b) implausible, and thus seems to have believed the human mind was not equivalent to a finite machine, i.e., that its power exceeded that of any finite machine. He recognized that this was only a conjecture, since one could never disprove (b). Yet he considered the disjunctive conclusion to be a "certain fact".
In subsequent years, more direct anti-mechanist lines of reasoning were apparently floating around the intellectual atmosphere. In 1960, Hilary Putnam published a paper entitled "Minds and Machines," in which he points out the flaws of a typical anti-mechanist argument. Informally, this is the argument that the (alleged) difference between "what can be mechanically proven" and "what can be seen to be true by humans" shows that human intelligence is not mechanical in nature. Or, as Putnam puts it:
Let T be a Turing machine which "represents" me in the sense that T can prove just the mathematical statements I prove. Then using Gödel's technique I can discover a proposition that T cannot prove, and moreover I can prove this proposition. This refutes the assumption that T "represents" me, hence I am not a Turing machine.
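One way to lay out the skeleton of this reasoning (a reconstruction in standard notation, not Putnam's own formalism; here T is assumed to be a consistent, recursively axiomatizable theory and G_T its Gödel sentence):

```latex
% Skeleton of the anti-mechanist argument.
\begin{enumerate}
  \item $T$ proves exactly the mathematical statements I can prove.
  \item By G\"odel's first theorem, if $T$ is consistent then $T \nvdash G_T$.
  \item I can see that $G_T$ is true, and so I can prove $G_T$.
  \item Hence there is a statement I can prove that $T$ cannot,
        contradicting (1); so no such $T$ represents me.
\end{enumerate}
```

Note that step 3 is where a hidden assumption enters: seeing that G_T is true requires establishing the consistency of T.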
Hilary Putnam objects that this argument ignores the issue of consistency. Gödel's technique can only be applied to consistent systems. It is conceivable, argues Putnam, that the human mind is inconsistent. If one is to use Gödel's technique to prove the proposition that T cannot prove, one must first prove (the mathematical statement representing) the consistency of T, a daunting and perhaps impossible task. Later Putnam suggested that while Gödel's theorems cannot be applied to humans, since they make mistakes and are therefore inconsistent, they may be applied to the human faculty of science or mathematics in general. If we are to believe that it is consistent, then either we cannot prove its consistency, or it cannot be represented by a Turing machine.
J. R. Lucas in Minds, Machines and Gödel (1961), and later in his book The Freedom of the Will (1970), lays out an anti-mechanist argument closely following the one described by Putnam, including reasons why the human mind can be considered consistent. Lucas admits that, by Gödel's second theorem, a human mind cannot formally prove its own consistency, and even says (perhaps facetiously) that women and politicians are inconsistent. Nevertheless, he sets out arguments for why a male non-politician can be considered consistent. These arguments are philosophical in nature and are the subject of much debate; Lucas provides references to responses on his own website.
Judson Webb took another approach in his 1968 paper "Metamathematics and the Philosophy of Mind". Webb claims that previous attempts have glossed over whether one can truly see that the Gödelian statement p pertaining to oneself is true. Using a different formulation of Gödel's theorems, namely that of Raymond Smullyan and Emil Post, Webb shows that one can derive convincing arguments for oneself of both the truth and falsity of p. He furthermore argues that all arguments about the philosophical implications of Gödel's theorems are really arguments about whether the Church–Turing thesis is true.
Later, Roger Penrose entered the fray, providing somewhat novel anti-mechanist arguments in his books, The Emperor's New Mind (1989) [ENM] and Shadows of the Mind (1994) [SM]. These books have proved highly controversial. Martin Davis responded to ENM in his paper "Is Mathematical Insight Algorithmic?", where he argues that Penrose ignores the issue of consistency. Solomon Feferman gives a critical examination of SM in his paper "Penrose's Gödelian argument". The response of the scientific community to Penrose's arguments has been negative, with one group of scholars calling Penrose's repeated attempts to form a persuasive Gödelian argument "a kind of intellectual shell game, in which a precisely defined notion to which a mathematical result applies... is switched for a vaguer notion".
A Gödel-based anti-mechanism argument can be found in Douglas Hofstadter's book Gödel, Escher, Bach: An Eternal Golden Braid, though Hofstadter is himself a skeptic of such arguments:
Looked at this way, Gödel's proof suggests – though by no means does it prove! – that there could be some high-level way of viewing the mind/brain, involving concepts which do not appear on lower levels, and that this level might have explanatory power that does not exist – not even in principle – on lower levels. It would mean that some facts could be explained on the high level quite easily, but not on lower levels at all. No matter how long and cumbersome a low-level statement were made, it would not explain the phenomena in question. It is analogous to the fact that, if you make derivation after derivation in Peano arithmetic, no matter how long and cumbersome you make them, you will never come up with one for G – despite the fact that on a higher level, you can see that the Gödel sentence is true.
What might such high-level concepts be? It has been proposed for eons, by various holistically or "soulistically" inclined scientists and humanists that consciousness is a phenomenon that escapes explanation in terms of brain components; so here is a candidate at least. There is also the ever-puzzling notion of free will. So perhaps these qualities could be "emergent" in the sense of requiring explanations which cannot be furnished by the physiology alone.
(Gödel, Escher, Bach, p. 708).
In computability theory, the Church–Turing thesis is a hypothesis about the nature of computable functions. It states that a function on the natural numbers can be calculated by an effective method if and only if it is computable by a Turing machine. The thesis is named after the American mathematician Alonzo Church and the British mathematician Alan Turing. Before the precise definition of computable function, mathematicians often used the informal term effectively calculable to describe functions that are computable by paper-and-pencil methods. In the 1930s, several independent attempts were made to formalize the notion of computability.
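To make "computable by a Turing machine" concrete, here is a minimal sketch of a Turing-machine simulator in Python (an illustration only; the state names, tape encoding, and the example successor machine are our own choices, not part of any standard formulation):

```python
def run_turing_machine(transitions, tape, state="q0", blank="_"):
    """Run a single-tape Turing machine until it reaches the 'halt' state.

    transitions maps (state, symbol) -> (new_state, write_symbol, move),
    where move is +1 (right) or -1 (left). Returns the tape contents
    with surrounding blanks stripped.
    """
    tape = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    while state != "halt":
        symbol = tape.get(head, blank)
        state, write, move = transitions[(state, symbol)]
        tape[head] = write
        head += move
    return "".join(tape[i] for i in sorted(tape)).strip(blank)

# Successor on unary numerals: scan right past the 1s, append one more.
succ = {
    ("q0", "1"): ("q0", "1", +1),
    ("q0", "_"): ("halt", "1", +1),
}

print(run_turing_machine(succ, "111"))  # three becomes four: 1111
```

The thesis asserts that any function calculable by an effective paper-and-pencil method can, in principle, be computed by some machine of this kind.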
Sir Roger Penrose is an English mathematical physicist, mathematician and philosopher of science. He is Emeritus Rouse Ball Professor of Mathematics at the University of Oxford and an emeritus fellow of Wadham College, Oxford.
The philosophy of mathematics is the branch of philosophy that studies the assumptions, foundations, and implications of mathematics. It purports to provide a viewpoint of the nature and methodology of mathematics, and to understand the place of mathematics in people's lives. The logical and structural nature of mathematics itself makes this study both broad and unique among its philosophical counterparts.
Gödel's incompleteness theorems are two theorems of mathematical logic that demonstrate the inherent limitations of every formal axiomatic system capable of modelling basic arithmetic. These results, published by Kurt Gödel in 1931, are important both in mathematical logic and in the philosophy of mathematics. The theorems are widely, but not universally, interpreted as showing that Hilbert's program to find a complete and consistent set of axioms for all mathematics is impossible.
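Stated schematically (an informal paraphrase; the precise statements require care about what "effectively axiomatized" means and how the consistency predicate is formalized), for any consistent, effectively axiomatized theory T interpreting basic arithmetic:

```latex
% Informal schematic statements of the two theorems.
\textbf{First theorem.}\quad \exists\, G_T:\; T \nvdash G_T
  \;\text{and}\; T \nvdash \neg G_T.

\textbf{Second theorem.}\quad T \nvdash \mathrm{Con}(T),
  \;\text{where } \mathrm{Con}(T) \text{ formalizes the consistency of } T.
```

The second theorem is the one most relevant to the anti-mechanist debate: a consistent machine-like reasoner cannot establish its own consistency from within.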
Functionalism is a viewpoint of the theory of the mind. It states that mental states are constituted solely by their functional role, which means, their causal relations with other mental states, sensory inputs and behavioral outputs. Functionalism developed largely as an alternative to the identity theory of mind and behaviorism.
John Randolph Lucas is a British philosopher.
"Minds, Machines and Gödel" is J. R. Lucas's 1959 philosophical paper in which he argues that a human mathematician cannot be accurately represented by an algorithmic automaton. Appealing to Gödel's incompleteness theorem, he argues that for any such automaton, there would be some mathematical formula which it could not prove, but which the human mathematician could both see, and show, to be true.
George Stephen Boolos was an American philosopher and a mathematical logician who taught at the Massachusetts Institute of Technology.
The mechanical philosophy is a form of natural philosophy which compares the universe to a large-scale mechanism. The mechanical philosophy is associated with the scientific revolution of Early Modern Europe. One of the first expositions of universal mechanism is found in the opening passages of Leviathan by Hobbes published in 1651.
"Computing Machinery and Intelligence" is a seminal paper written by Alan Turing on the topic of artificial intelligence. The paper, published in 1950 in Mind, was the first to introduce his concept of what is now known as the Turing test to the general public.
The Emperor's New Mind: Concerning Computers, Minds and The Laws of Physics is a 1989 book by mathematical physicist Sir Roger Penrose.
Orchestrated objective reduction is a biological philosophy of mind that postulates that consciousness originates at the quantum level inside neurons, rather than the conventional view that it is a product of connections between neurons. The mechanism is held to be a quantum process called objective reduction that is orchestrated by cellular structures called microtubules. It is proposed that the theory may answer the hard problem of consciousness and provide a mechanism for free will. The hypothesis was first put forward in the early 1990s by theoretical physicist Roger Penrose and anaesthesiologist and psychologist Stuart Hameroff. The hypothesis combines approaches from molecular biology, neuroscience, quantum physics, pharmacology, philosophy, quantum information theory, and quantum gravity.
Certainty is perfect knowledge that has total security from error, or the mental state of being without doubt.
Artificial intelligence has close connections with philosophy because both share several concepts and these include intelligence, action, consciousness, epistemology, and even free will. Furthermore, the technology is concerned with the creation of artificial animals or artificial people so the discipline is of considerable interest to philosophers. These factors contributed to the emergence of the philosophy of artificial intelligence. Some scholars argue that the AI community's dismissal of philosophy is detrimental.
In philosophy, the computational theory of mind (CTM) refers to a family of views that hold that the human mind is an information processing system and that cognition and consciousness together are a form of computation. Warren McCulloch and Walter Pitts (1943) were the first to suggest that neural activity is computational. They argued that neural computations explain cognition. The theory was proposed in its modern form by Hilary Putnam in 1967, and developed by his PhD student, philosopher and cognitive scientist Jerry Fodor in the 1960s, 1970s and 1980s. Despite being vigorously disputed in analytic philosophy in the 1990s due to work by Putnam himself, John Searle, and others, the view is common in modern cognitive psychology and is presumed by many theorists of evolutionary psychology. In the 2000s and 2010s the view has resurfaced in analytic philosophy.
Shadows of the Mind: A Search for the Missing Science of Consciousness is a 1994 book by mathematical physicist Roger Penrose that serves as a followup to his 1989 book The Emperor's New Mind: Concerning Computers, Minds and The Laws of Physics.
The history of the Church–Turing thesis ("thesis") involves the history of the development of the study of the nature of functions whose values are effectively calculable; or, in more modern terms, functions whose values are algorithmically computable. It is an important topic in modern mathematical theory and computer science, particularly associated with the work of Alonzo Church and Alan Turing.
In the philosophy of mathematics, formalism is the view that holds that statements of mathematics and logic can be considered to be statements about the consequences of the manipulation of strings using established manipulation rules. A central idea of formalism "is that mathematics is not a body of propositions representing an abstract sector of reality, but is much more akin to a game, bringing with it no more commitment to an ontology of objects or properties than ludo or chess." According to formalism, the truths expressed in logic and mathematics are not about numbers, sets, or triangles or any other contensive subject matter — in fact, they aren't "about" anything at all. Rather, mathematical statements are syntactic forms whose shapes and locations have no meaning unless they are given an interpretation. In contrast to logicism or intuitionism, formalism's contours are less defined due to broad approaches that can be categorized as formalist.
In computability theory, the halting problem is the problem of determining, from a description of an arbitrary computer program and an input, whether the program will finish running, or continue to run forever.
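Turing's proof that no such decider can exist uses a diagonal construction, which can be sketched in a few lines of Python (a sketch only: `halts` below is a hypothetical stand-in, since by the theorem no correct implementation exists; here it simply guesses "does not halt" for everything):

```python
def halts(prog, arg):
    # Hypothetical total halting decider. No correct one can exist;
    # this stub always answers False ("does not halt") so the
    # contradiction below can actually be run.
    return False

def diagonal(prog):
    # Do the opposite of whatever halts() predicts about prog
    # when run on its own source/description.
    if halts(prog, prog):
        while True:
            pass          # predicted to halt, so loop forever
    return "halted"       # predicted to loop, so halt immediately

prediction = halts(diagonal, diagonal)  # the decider says: does not halt
result = diagonal(diagonal)             # ...but it plainly halts
```

Whatever fixed answer `halts` gives about `diagonal(diagonal)`, running `diagonal` refutes it: if it predicted True, `diagonal` would loop forever; here it predicts False and `diagonal` promptly halts. So no total, always-correct `halts` can exist.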
The Penrose–Lucas argument is a logical argument partially based on a theory developed by mathematician and logician Kurt Gödel. In 1931, he proved that no effectively generated theory capable of proving basic arithmetic can be both consistent and complete. Mathematician Roger Penrose modified the argument in his first book on consciousness, The Emperor's New Mind (1989), where he used it to provide the basis of the theory of orchestrated objective reduction.
These Gödelian anti-mechanist arguments are, however, problematic, and there is wide consensus that they fail.
...even if we grant that computers have limitations on what they can prove, there is no evidence that humans are immune from those limitations.