Brain in a vat


A brain in a vat that believes it is walking

In philosophy, the brain in a vat (BIV) is a scenario used in a variety of thought experiments intended to draw out certain features of human conceptions of knowledge, reality, truth, mind, consciousness, and meaning. The scenario originated with Gilbert Harman; [1] Hilary Putnam later turned it into a modernized version of René Descartes's evil demon thought experiment. Echoing many science fiction stories, the scenario imagines a mad scientist who removes a person's brain from the body, suspends it in a vat of life-sustaining liquid, and connects its neurons by wires to a supercomputer that provides it with electrical impulses identical to those a brain normally receives. [2] According to such stories, the computer would then be simulating reality (including appropriate responses to the brain's own output), and the "disembodied" brain would continue to have perfectly normal conscious experiences, just like those of a person with an embodied brain, without these being related to objects or events in the real world. According to Putnam, the thought of "being a brain-in-a-vat" (BIV) is either false or meaningless. Considered a cornerstone of semantic externalism, the argument has generated a significant literature. The Matrix franchise and other fictional works (see below) are considered to have been inspired by Putnam's argument. [3]


Intuitive version

Putnam's argument is based on the causal theory of reference, on which a word describing a spatio-temporal object is meaningful if and only if it stands in an information-carrying causal relation to whatever it denotes. Next, an "envatted" brain is one whose entire world consists of (say) electrical manipulations performed by a computer simulation to which it is connected. With this much in place, consider the sentence "I am a brain in a vat" (BIV). If you are not a brain in a vat, the sentence is false by definition. If you are a brain in a vat, the terms "brain" and "vat" fail to denote actual brains and actual vats with which you have had an information-carrying causal interaction since, again by definition, the only interaction available is with the computer simulation, which carries no information about actual brains or vats. By the causal theory of reference, such terms therefore do not carry referential meaning. Thus, the sentence "I am a brain in a vat" is either false or meaningless. [2]
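The structure of this dilemma can be made explicit. The following Lean sketch is illustrative only: the proposition names (biv, sentTrue, meaningful) and the theorem name are hypothetical, and the two hypotheses simply encode the two horns of the dilemma described above.

-- Illustrative sketch of the "false or meaningless" dilemma; all names are hypothetical.
-- biv        : I am a brain in a vat
-- sentTrue   : the sentence "I am a brain in a vat" is true
-- meaningful : the terms of that sentence successfully refer
variable (biv sentTrue meaningful : Prop)

theorem false_or_meaningless
    (hNotEnvatted : ¬biv → ¬sentTrue)    -- if I am not a BIV, the sentence is false
    (hEnvatted    : biv → ¬meaningful)   -- if I am a BIV, the terms fail to refer
    : ¬sentTrue ∨ ¬meaningful :=         -- the sentence is either false or meaningless
  (Classical.em biv).elim
    (fun h => Or.inr (hEnvatted h))
    (fun h => Or.inl (hNotEnvatted h))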

Uses

The simplest use of brain-in-a-vat scenarios is as an argument for philosophical skepticism [4] and solipsism. A simple version runs as follows: since the brain in a vat gives and receives exactly the same impulses as it would if it were in a skull, and since these are its only way of interacting with its environment, it is not possible to tell, from the perspective of that brain, whether it is in a skull or a vat. Yet in the first case, most of the person's beliefs may be true (if they believe, say, that they are walking down the street, or eating ice cream); in the latter case, their beliefs are false. The argument holds that, since one cannot know whether one is a brain in a vat, one cannot know whether most of one's beliefs are completely false. Since, in principle, it is impossible to rule out being a brain in a vat, there cannot be good grounds for believing any of the things one believes; a skeptical argument would contend that one certainly cannot know them, raising issues with the definition of knowledge. Other philosophers have drawn upon sensation and its relationship to meaning in order to question whether brains in vats are really deceived at all, [5] thus raising wider questions concerning perception, metaphysics, and the philosophy of language.

The brain in a vat is a contemporary version of the scenario found in the Hindu doctrine of Maya (illusion), Plato's Allegory of the Cave, Zhuangzi's dream of being a butterfly, and the evil demon in René Descartes' Meditations on First Philosophy.

More recently, some contemporary philosophers have argued that virtual reality will seriously affect human autonomy, as a form of brain in a vat. Another view is that VR will not destroy our cognitive structure or take away our connection with reality; on the contrary, it will allow us to entertain new propositions, new insights, and new perspectives on the world. [6]

Philosophical debates

While the disembodied brain (the brain in a vat) can be seen as a helpful thought experiment, there are several philosophical debates surrounding its plausibility. If these debates conclude that the thought experiment is implausible, a possible consequence is that we are no closer to knowledge, truth, consciousness, representation, etc. than we were prior to the experiment.

Argument from biology

A human brain in a jar

One argument against the BIV thought experiment derives from the idea that the BIV is not, and cannot be, biologically similar to an embodied brain (that is, a brain found in a person). Since the BIV is disembodied, it follows that it does not have a biology similar to that of an embodied brain: the BIV lacks the connections from the body to the brain, which renders it neither neuroanatomically nor neurophysiologically similar to an embodied brain. [7] [8] If this is the case, we cannot say that it is even possible for the BIV to have experiences similar to those of the embodied brain, since the two brains are not equivalent. However, it could be counter-argued that the hypothetical machine could be made to also replicate those types of inputs.

Argument from externalism

A second argument deals directly with the stimuli coming into the brain. This is often referred to as the account from externalism, or ultra-externalism. [9] In the BIV, the brain receives its stimuli from a machine. In an embodied brain, by contrast, the brain receives stimuli from the sense organs of the body (via touching, tasting, smelling, etc.), which in turn receive their input from the external environment. This argument often leads to the conclusion that there is a difference between what the BIV is representing and what the embodied brain is representing. The debate has been taken up, but remains unresolved, by several philosophers including Uriah Kriegel, [10] Colin McGinn, [11] and Robert D. Rupert, [12] and has ramifications for philosophy-of-mind discussions on (but not limited to) representation, consciousness, content, cognition, and embodied cognition. [13]

Argument from incoherence

A third argument against the BIV comes from the direction of incoherence, presented by the philosopher Hilary Putnam. He attempts to demonstrate this through a transcendental argument, in which he tries to show that the thought experiment is incoherent because it is self-refuting. [14] To do this, Putnam first established a relationship that he refers to as a "causal connection", sometimes also called "a causal constraint". [15] [2] This relationship is defined through a theory of reference which holds that reference cannot simply be assumed, and that words are not automatically or intrinsically connected with what they represent. This theory of reference would later become known as semantic externalism. The concept is illustrated by Putnam's scenario in which a monkey types out Hamlet by chance; the monkey is not thereby referring to the play, because it has no knowledge of Hamlet and therefore cannot refer back to it. [16] He then offers the "Twin Earth" example to demonstrate that two identical individuals, one on Earth and the other on a "twin Earth", may possess exactly the same mental states and thoughts, yet refer to two different things. [17] For instance, when people on Earth think of cats, the referent of their thoughts is the cats found on Earth, whereas their twins on twin Earth, though possessing the same thoughts, refer instead to twin Earth's cats. Bearing this in mind, Putnam writes that a "pure" brain in a vat, i.e. one that has never existed outside of the simulation, could not even truthfully say that it was a brain in a vat: when it says "brain" and "vat", it can only refer to objects within the simulation, not to things outside the simulation with which it has no causal connection. [15] [2] Therefore, what it says is demonstrably false. Alternatively, if the speaker is not actually a BIV, then the statement is also false. He concludes that the statement "I am a BIV" is necessarily false and self-refuting. [17] This argument has been explored at length in the philosophical literature since its publication. A potential loophole in Putnam's reference theory is that a brain on Earth that is "kidnapped", placed into a vat, and subjected to a simulation could still refer to real brains and vats, and could thus correctly say that it is a brain in a vat according to Putnamian reference theory. [18] However, the argument that the "pure" BIV's statement is false, and the theory of reference underpinning it, remain influential in the philosophy of mind, language, and metaphysics. [19] [20] Anthony L. Brueckner has formulated an extension of Putnam's argument that rules out this loophole by employing a disquotational principle; it is discussed in the following two sections.

Reconstructions of Putnam's argument

An issue that has arisen with Putnam's argument is that his premises imply only the metalinguistic statement that "my utterances of 'I am a BIV' are false", whereas a skeptic may demand that the object-language statement 'I am not a BIV' be proven. [21] [22] In order to address this issue, various philosophers have taken on the task of reconstructing Putnam's argument. Some philosophers, like Anthony L. Brueckner and Crispin Wright, have taken approaches that utilize disquotational principles, [21] [15] while others, like Ted A. Warfield, have taken approaches that focus on the concepts of self-knowledge and the a priori. [22]

The Disjunctive Argument

One of the earliest, and most influential, reconstructions of Putnam's transcendental argument was suggested by Anthony L. Brueckner. Brueckner's reconstruction runs as follows: [21]

(1) Either I am a BIV (speaking vat-English) or I am a non-BIV (speaking English).
(2) If I am a BIV (speaking vat-English), then my utterances of 'I am a BIV' are true iff I have sense impressions as of being a BIV.
(3) If I am a BIV (speaking vat-English), then I do not have sense impressions as of being a BIV.
(4) If I am a BIV (speaking vat-English), then my utterances of 'I am a BIV' are false. [(2), (3)]
(5) If I am a non-BIV (speaking English), then my utterances of 'I am a BIV' are true iff I am a BIV.
(6) If I am a non-BIV (speaking English), then my utterances of 'I am a BIV' are false. [(5)]
(7) My utterances of 'I am a BIV' are false. [(1), (4), (6)]

A key thing to note is that although these premises further refine Putnam's argument, they do not yet prove 'I am not a BIV': the premises imply the metalinguistic statement that "my utterances of 'I am a BIV' are false", but not the object-language statement 'I am not a BIV'. In order to reach the Putnamian conclusion, Brueckner further strengthens the argument by employing the disquotational principle "My utterances of 'I am not a BIV' are true iff I am not a BIV." This principle is justified because the metalanguage containing the tokens of the disquotational principle also contains the object-language tokens to which the utterances 'I am not a BIV' belong. [21]
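The propositional core of Brueckner's derivation can be checked mechanically. The following Lean sketch is illustrative only: the proposition names (BIV, U, S) and the theorem name are hypothetical, premise (1) is supplied as an instance of the excluded middle, and the proof simply verifies that (7) follows from premises (1), (2), (3), and (5), with (4) and (6) appearing as the two cases.

-- Illustrative formalization of Brueckner's disjunctive argument; all names are hypothetical.
-- BIV : I am a BIV (speaking vat-English)
-- U   : my utterances of 'I am a BIV' are true
-- S   : I have sense impressions as of being a BIV
variable (BIV U S : Prop)

theorem utterances_false
    (h1 : BIV ∨ ¬BIV)        -- (1) either I am a BIV or a non-BIV
    (h2 : BIV → (U ↔ S))     -- (2) if BIV, the utterances are true iff I have such sense impressions
    (h3 : BIV → ¬S)          -- (3) if BIV, I do not have such sense impressions
    (h5 : ¬BIV → (U ↔ BIV))  -- (5) if non-BIV, the utterances are true iff I really am a BIV
    : ¬U :=                  -- (7) my utterances of 'I am a BIV' are false
  fun hU =>
    h1.elim
      (fun hB  => (h3 hB) ((h2 hB).mp hU))   -- BIV case, via (4)
      (fun hNB => hNB ((h5 hNB).mp hU))      -- non-BIV case, via (6)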

A poster for the film The Brain That Wouldn't Die, 1962

In fiction

See also

Related Research Articles

The Chinese room argument holds that a digital computer executing a program cannot have a "mind", "understanding", or "consciousness", regardless of how intelligently or human-like the program may make the computer behave. The argument was presented by philosopher John Searle in his paper "Minds, Brains, and Programs", published in Behavioral and Brain Sciences in 1980. Similar arguments were presented by Gottfried Leibniz (1714), Anatoly Dneprov (1961), Lawrence Davis (1974) and Ned Block (1978). Searle's version has been widely discussed in the years since. The centerpiece of Searle's argument is a thought experiment known as the Chinese room.

Internalism and externalism are two opposing ways of explaining various subjects in several areas of philosophy. These include human motivation, knowledge, justification, meaning, and truth. The distinction arises in many areas of debate with similar but distinct meanings. The internal–external distinction, in turn, is used in philosophy to divide an ontology into two parts: an internal part consisting of questions raised within a given framework, and an external part concerning questions about the framework itself.

Hilary Putnam: American mathematician and philosopher (1926–2016)

Hilary Whitehall Putnam was an American philosopher, mathematician, and computer scientist, and a leading figure in analytic philosophy in the second half of the 20th century. He contributed to the studies of philosophy of mind, philosophy of language, philosophy of mathematics, and philosophy of science. Outside philosophy, Putnam contributed to mathematics and computer science. Together with Martin Davis he developed the Davis–Putnam algorithm for the Boolean satisfiability problem, and he helped demonstrate the unsolvability of Hilbert's tenth problem.

Twin Earth thought experiment: thought experiment proposed by Hilary Putnam

Twin Earth is a thought experiment proposed by philosopher Hilary Putnam in his papers "Meaning and Reference" (1973) and "The Meaning of 'Meaning'" (1975). It is meant to serve as an illustration of his argument for semantic externalism, or the view that the meanings of words are not purely psychological. The Twin Earth thought experiment was one of three examples that Putnam offered in support of semantic externalism, the other two being what he called the Aluminum-Molybdenum case and the Beech-Elm case. Since the publication of these cases, numerous variations on the thought experiment have been proposed by philosophers.

In the philosophy of mind, functionalism is the thesis that each and every mental state is constituted solely by its functional role, which means its causal relation to other mental states, sensory inputs, and behavioral outputs. Functionalism developed largely as an alternative to the identity theory of mind and behaviorism.

Artificial consciousness (AC), also known as machine consciousness (MC), synthetic consciousness or digital consciousness, is the consciousness hypothesized to be possible in artificial intelligence. It is also the corresponding field of study, which draws insights from philosophy of mind, philosophy of artificial intelligence, cognitive science and neuroscience. The same terminology can be used with the term "sentience" instead of "consciousness" when specifically designating phenomenal consciousness.

Eliminative materialism is a materialist position in the philosophy of mind. It is the idea that the majority of mental states in folk psychology do not exist. Some supporters of eliminativism argue that no coherent neural basis will be found for many everyday psychological concepts such as belief or desire, since they are poorly defined. The argument is that psychological concepts of behavior and experience should be judged by how well they reduce to the biological level. Other versions entail the nonexistence of conscious mental states such as pain and visual perceptions.

Hard problem of consciousness: philosophical concept, first stated by David Chalmers in 1995

In philosophy of mind, the hard problem of consciousness is to explain why and how humans and other organisms have qualia, phenomenal consciousness, or subjective experiences. It is contrasted with the "easy problems" of explaining why and how physical systems give a (healthy) human being the ability to discriminate, to integrate information, and to perform behavioral functions such as watching, listening, speaking, and so forth. The easy problems are amenable to functional explanation: that is, explanations that are mechanistic or behavioral, as each physical system can be explained purely by reference to the "structure and dynamics" that underpin the phenomenon.

A causal theory of reference or historical chain theory of reference is a theory of how terms acquire specific referents. Such theories have been used to describe many referring terms, particularly logical terms, proper names, and natural kind terms. In the case of names, for example, a causal theory of reference typically involves the claims that a name's referent is fixed by an original act of naming, and that later uses of the name succeed in referring by being linked to that original act through a causal chain.

Evil demon: concept in Cartesian philosophy

The evil demon, also known as Deus deceptor, malicious demon, and evil genius, is an epistemological concept that features prominently in Cartesian philosophy. In the first of his 1641 Meditations on First Philosophy, Descartes imagines that a malevolent God or an evil demon, of "utmost power and cunning has employed all his energies in order to deceive me." This malevolent God or evil demon is imagined to present a complete illusion of an external world, so that Descartes can say, "I shall think that the sky, the air, the earth, colours, shapes, sounds and all external things are merely the delusions of dreams which he has devised to ensnare my judgement. I shall consider myself as not having hands or eyes, or flesh, or blood or senses, but as falsely believing that I have all these things."

A philosophical zombie is a being in a thought experiment in philosophy of mind that is physically identical to a normal person but does not have conscious experience.

The philosophy of artificial intelligence is a branch of the philosophy of mind and the philosophy of computer science that explores artificial intelligence and its implications for knowledge and understanding of intelligence, ethics, consciousness, epistemology, and free will. Furthermore, the technology is concerned with the creation of artificial animals or artificial people, so the discipline is of considerable interest to philosophers. These factors contributed to the emergence of the philosophy of artificial intelligence.

In the philosophy of language, semantic externalism is the view that the meaning of a term is determined, in whole or in part, by factors external to the speaker. According to an externalist position, one can claim without contradiction that two speakers could be in exactly the same brain state at the time of an utterance, and yet mean different things by that utterance -- that is, at the least, that their terms could pick out different referents.

China brain: philosophical experiment

In the philosophy of mind, the China brain thought experiment considers what would happen if each member of the Chinese nation were asked to simulate the action of one neuron in the brain, using telephones or walkie-talkies to simulate the axons and dendrites that connect neurons. Would this arrangement have a mind or consciousness in the same way that brains do?

In philosophy of mind, the computational theory of mind (CTM), also known as computationalism, is a family of views that hold that the human mind is an information processing system and that cognition and consciousness together are a form of computation. Warren McCulloch and Walter Pitts (1943) were the first to suggest that neural activity is computational. They argued that neural computations explain cognition. The theory was proposed in its modern form by Hilary Putnam in 1967, and developed by his PhD student, philosopher, and cognitive scientist Jerry Fodor in the 1960s, 1970s, and 1980s. It was vigorously disputed in analytic philosophy in the 1990s due to work by Putnam himself, John Searle, and others.

Philosophy of mind is a branch of philosophy that studies the ontology and nature of the mind and its relationship with the body. The mind–body problem is a paradigmatic issue in philosophy of mind, although a number of other issues are addressed, such as the hard problem of consciousness and the nature of particular mental states. Aspects of the mind that are studied include mental events, mental functions, mental properties, consciousness and its neural correlates, the ontology of the mind, the nature of cognition and of thought, and the relationship of the mind to the body.

A self-refuting idea or self-defeating idea is an idea or statement whose falsehood is a logical consequence of the act or situation of holding them to be true. Many ideas are called self-refuting by their detractors, and such accusations are therefore almost always controversial, with defenders stating that the idea is being misunderstood or that the argument is invalid. For these reasons, none of the ideas below are unambiguously or incontrovertibly self-refuting. These ideas are often used as axioms, which are definitions taken to be true, and cannot be used to test themselves, for doing so would lead to only two consequences: consistency or exception (self-contradiction).

The simulation hypothesis proposes that what humans experience as the world is actually a simulated reality, such as a computer simulation in which humans themselves are constructs. There has been much debate over this topic, ranging from philosophical discourse to practical applications in computing.

Qualia: instances of subjective experience

In philosophy of mind, qualia are defined as instances of subjective, conscious experience. The term qualia derives from the Latin neuter plural form (qualia) of the Latin adjective quālis meaning "of what sort" or "of what kind" in relation to a specific instance, such as "what it is like to taste a specific apple — this particular apple now".

Externalism is a group of positions in the philosophy of mind which argues that the conscious mind is not only the result of what is going on inside the nervous system, but also what occurs or exists outside the subject. It is contrasted with internalism which holds that the mind emerges from neural activity alone. Externalism is a belief that the mind is not just the brain or functions of the brain.

References

  1. Harman, Gilbert 1973: Thought, Princeton/NJ, p.5.
  2. Putnam, Hilary. "Brains in a Vat" (PDF). Archived from the original (PDF) on 6 October 2021. Retrieved 21 April 2015.
  3. Chalmers, David J. (1 September 2004), "The Matrix As Metaphysics", Philosophers Explore The Matrix, Oxford University Press, New York, NY, pp. 132–176, ISBN 978-0-19-518106-7, retrieved 20 November 2023
  4. Klein, Peter (2 June 2015). "Skepticism". Stanford Encyclopedia of Philosophy . Retrieved 7 January 2017.
  5. Bouwsma, O.K. (1949). "Descartes' Evil Genius" (PDF). The Philosophical Review. 58 (2): 149–151. doi:10.2307/2181388. JSTOR   2181388 via JSTOR.
  6. Cogburn, Jon; Silcox, Mark (2014). "Against Brain-in-a-Vatism: On the Value of Virtual Reality". Philosophy & Technology. 27 (4): 561–579. doi:10.1007/s13347-013-0137-4. ISSN   2210-5433. S2CID   143774123.
  7. Heylighen, Francis (2012). "A Brain in a Vat Cannot Break Out: Why the Singularity Must be Extended, Embedded, and Embodied". Journal of Consciousness Studies . 19 (1–2): 126–142.
  8. Thompson, Evan; Cosmelli, Diego (Spring 2011). "Brain in a Vat or Body in a World? Brainbound versus Enactive Views of Experience". Philosophical Topics . 39 (1): 163–180. doi:10.5840/philtopics201139119. S2CID   170332029.
  9. Kirk, Robert (1997). "Consciousness, Information and External Relations". Communication and Cognition. 30 (3–4).
  10. Kriegel, Uriah (2014). Current Controversies in Philosophy of Mind. Routledge. pp. 180–95.
  11. McGinn, Colin (1988). "Consciousness and Content". Proceedings of the British Academy. 76: 219–39.
  12. Rupert, Robert (2014). The Sufficiency of Objective Representation. Routledge. pp. 180–95.
  13. Shapiro, Lawrence (2014). When Is Cognition Embodied. Routledge. pp. 73–90.
  14. Chen, Jiaming; Lin, Zhang (2012). "On the Issues of Transcendental Argument". Frontiers of Philosophy in China. 7 (2): 255–269. ISSN 1673-3436. JSTOR 44259404.
  15. Wright, Crispin (1992). "On Putnam's Proof That We Are Not Brains-in-a-Vat". Proceedings of the Aristotelian Society. 92: 67–94. doi:10.1093/aristotelian/92.1.67. ISSN 0066-7374. JSTOR 4545146.
  16. Brueckner, Tony (2016), Goldberg, Sanford C (ed.), "Putnam on brains in a vat", The Brain in a Vat, Cambridge: Cambridge University Press, pp. 19–26, doi:10.1017/cbo9781107706965.002, ISBN   9781107706965 , retrieved 23 September 2021
  17. Putnam, Hilary (1981). Reason, Truth, and History. Cambridge: Cambridge University Press. pp. 14, 18–19. ISBN 978-0-52129776-9.
  18. Tymoczko, Thomas (1989). "In Defense of Putnam's Brains". Philosophical Studies. 57 (3): 294–295. doi:10.1007/BF00372698. JSTOR   4320079. S2CID   170928278 via JSTOR.
  19. Heil, John (2001). A Companion to Analytic Philosophy. Blackwell Publishers. pp. 404–412. ISBN   9780470998656.
  20. Pritchard, Duncan. "Putnam on Radical Skepticism: Wittgenstein, Cavell, and Occasion-Sensitive Semantics" (PDF). Engaging Putnam: 1–2.
  21. Brueckner, Anthony L. (1986). "Brains in a Vat". The Journal of Philosophy. 83 (3): 148–167. doi:10.2307/2026572. ISSN 0022-362X. JSTOR 2026572.
  22. Warfield, Ted A. (1995). "Knowing the World and Knowing Our Minds". Philosophy and Phenomenological Research. 55 (3): 525–545. doi:10.2307/2108437. ISSN 0031-8205. JSTOR 2108437.
  23. "The Colossus of New York (1958)". monsterhuntermoviereviews.com. MonsterHunter. 27 September 2013. Retrieved 11 March 2018. It turns out that Jeremy's brain was sitting in a glass case of water hooked up to an EEG machine which led me to believe that they must have had some kind of clearance sale on set leftovers from Donovan's Brain. (with photo).