Computational theory of mind

In philosophy of mind, the computational theory of mind (CTM), also known as computationalism, is a family of views that hold that the human mind is an information processing system and that cognition and consciousness together are a form of computation. Warren McCulloch and Walter Pitts (1943) were the first to suggest that neural activity is computational. They argued that neural computations explain cognition. [1] The theory was proposed in its modern form by Hilary Putnam in 1967, and developed by his PhD student, philosopher, and cognitive scientist Jerry Fodor in the 1960s, 1970s, and 1980s. [2] [3] It was vigorously disputed in analytic philosophy in the 1990s due to work by Putnam himself, John Searle, and others.

The computational theory of mind holds that the mind is a computational system that is realized (i.e. physically implemented) by neural activity in the brain. The theory can be elaborated in many ways and varies largely based on how the term computation is understood. Computation is commonly understood in terms of Turing machines which manipulate symbols according to a rule, in combination with the internal state of the machine. The critical aspect of such a computational model is that we can abstract away from particular physical details of the machine that is implementing the computation. [3] For example, the appropriate computation could be implemented either by silicon chips or biological neural networks, so long as there is a series of outputs based on manipulations of inputs and internal states, performed according to a rule. CTM therefore holds that the mind is not simply analogous to a computer program, but that it is literally a computational system. [3]

Computational theories of mind are often said to require mental representation because 'input' into a computation comes in the form of symbols or representations of other objects. A computer cannot compute an actual object but must interpret and represent the object in some form and then compute the representation. The computational theory of mind is related to the representational theory of mind in that they both require that mental states are representations. However, the representational theory of mind shifts the focus to the symbols being manipulated. This approach better accounts for systematicity and productivity. [3] In Fodor's original views, the computational theory of mind is also related to the language of thought. The language of thought theory allows the mind to process more complex representations with the help of semantics (see the section on the semantics of mental states, below).

Recent work has suggested that we make a distinction between the mind and cognition. Building from the tradition of McCulloch and Pitts, the computational theory of cognition (CTC) states that neural computations explain cognition. [1] The computational theory of mind asserts that not only cognition, but also phenomenal consciousness or qualia, is computational; that is to say, CTM entails CTC. CTC, by contrast, leaves open the possibility that some aspects of the mind are non-computational, with phenomenal consciousness perhaps fulfilling some other functional role. CTC therefore provides an important explanatory framework for understanding neural networks while avoiding counter-arguments that center on phenomenal consciousness.

"Computer metaphor"

Computational theory of mind is not the same as the computer metaphor, comparing the mind to a modern-day digital computer. [4] Computational theory just uses some of the same principles as those found in digital computing. [4] While the computer metaphor draws an analogy between the mind as software and the brain as hardware, CTM is the claim that the mind is a computational system. More specifically, it states that a computational simulation of a mind is sufficient for the actual presence of a mind, and that a mind truly can be simulated computationally.

'Computational system' is not meant to mean a modern-day electronic computer. Rather, a computational system is a symbol manipulator that follows step-by-step rules to transform inputs into outputs. Alan Turing formalized this kind of device with his concept of a Turing machine.
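To make the idea concrete, the kind of rule-following symbol manipulator Turing described can be sketched in a few lines of Python. This is only an illustrative toy (the rule table and function names are invented for this example, not drawn from the literature); the machine below appends a "1" to a unary numeral:

```python
# A minimal Turing machine: a symbol manipulator that repeatedly applies
# a fixed rule table to a tape, conditioned on its internal state.
def run_turing_machine(rules, tape, state="start", head=0, max_steps=1000):
    tape = dict(enumerate(tape))          # sparse tape: position -> symbol
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, "_")      # "_" is the blank symbol
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head += {"L": -1, "R": +1}[move]
    return "".join(tape[i] for i in sorted(tape)).strip("_")

# Rule table for a machine that appends one "1" to a unary numeral:
# (state, symbol read) -> (symbol to write, head move, next state)
rules = {
    ("start", "1"): ("1", "R", "start"),  # scan right over the 1s
    ("start", "_"): ("1", "R", "done"),   # write a 1 at the first blank
    ("done",  "_"): ("_", "R", "halt"),   # then halt
}

print(run_turing_machine(rules, "111"))   # prints "1111"
```

Nothing in the sketch depends on what physically realizes the tape or the rule table, which is exactly the abstraction from implementation detail that CTM exploits.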

Early proponents

One of the earliest proponents of the computational theory of mind was Thomas Hobbes who said, "by reasoning, I understand computation. And to compute is to collect the sum of many things added together at the same time, or to know the remainder when one thing has been taken from another. To reason, therefore, is the same as to add or to subtract." [5] Since Hobbes lived before the contemporary identification of computing with instantiating effective procedures, he cannot be interpreted as explicitly endorsing the computational theory of mind, in the contemporary sense.

Criticism

A range of arguments have been proposed against physicalist conceptions used in computational theories of mind.

An early, though indirect, criticism of the computational theory of mind comes from philosopher John Searle. In his thought experiment known as the Chinese room, Searle attempts to refute the claims that artificially intelligent agents can be said to have intentionality and understanding, and that these systems, because they can be said to be minds themselves, are sufficient for the study of the human mind. [6] Searle asks us to imagine a man in a room with no way of communicating with anyone or anything outside of the room, except for a piece of paper with symbols written on it that is passed under the door. With the paper, the man is to use a series of provided rule books to return paper containing different symbols. Unknown to the man in the room, these symbols are Chinese characters, and this process generates a conversation that a Chinese speaker outside of the room can actually understand. Searle contends that the man in the room does not understand the Chinese conversation. This is essentially what the computational theory of mind presents us with: a model in which the mind simply decodes symbols and outputs more symbols. Searle argues that this is not real understanding or intentionality. The argument was originally written as a repudiation of the idea that computers work like minds.

Searle has further raised questions about what exactly constitutes a computation:

the wall behind my back is right now implementing the WordStar program, because there is some pattern of molecule movements that is isomorphic with the formal structure of WordStar. But if the wall is implementing WordStar, if it is a big enough wall it is implementing any program, including any program implemented in the brain. [7]

Objections like Searle's might be called insufficiency objections. They claim that computational theories of mind fail because computation is insufficient to account for some capacity of the mind. Arguments from qualia, such as Frank Jackson's knowledge argument, can be understood as objections to computational theories of mind in this way—though they take aim at physicalist conceptions of the mind in general, and not computational theories specifically.

There are also objections which are directly tailored for computational theories of mind.

Putnam himself (see in particular Representation and Reality and the first part of Renewing Philosophy) became a prominent critic of computationalism for a variety of reasons, including ones related to Searle's Chinese room argument, questions about word-world reference relations, and thoughts about the mind-body problem. Regarding functionalism in particular, Putnam has claimed, along lines similar to but more general than Searle's arguments, that the question of whether the human mind can implement computational states is not relevant to the question of the nature of mind, because "every ordinary open system realizes every abstract finite automaton." [8] Computationalists have responded by aiming to develop criteria describing what exactly counts as an implementation. [9] [10] [11]

Roger Penrose has proposed that the human mind does not use a knowably sound algorithm to understand and discover mathematical truths. This would mean that a normal Turing-complete computer would be unable to ascertain certain mathematical truths that human minds can. [12]

Pancomputationalism

CTM raises a question that remains a subject of debate: what does it take for a physical system (such as a mind, or an artificial computer) to perform computations? A very straightforward account is based on a simple mapping between abstract mathematical computations and physical systems: a system performs computation C if and only if there is a mapping between a sequence of states individuated by C and a sequence of states individuated by a physical description of the system. [13] [8]

Putnam (1988) and Searle (1992) argue that this simple mapping account (SMA) trivializes the empirical import of computational descriptions. [8] [14] As Putnam put it, "everything is a Probabilistic Automaton under some Description". [15] On this account, even rocks, walls, and buckets of water, contrary to appearances, are computing systems. Gualtiero Piccinini identifies several different versions of pancomputationalism. [16]
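The trivialization worry can be made concrete with a small sketch (the state labels and function name below are hypothetical, invented purely for illustration). Under SMA, nothing prevents us from pairing any sequence of distinct physical states with the state sequence of any finite-state automaton we like:

```python
# Putnam-style triviality sketch: any physical system that passes through
# enough distinct states can be mapped onto any automaton's run.
def trivial_implementation(physical_states, automaton_run):
    assert len(physical_states) >= len(automaton_run)
    # Pair the i-th physical state with the i-th automaton state.
    return {p: a for p, a in zip(physical_states, automaton_run)}

# A "wall" passing through arbitrary microphysical states w0..w3 ...
wall = ["w0", "w1", "w2", "w3"]
# ... and a run of a toy automaton alternating between states A and B.
fsa_run = ["A", "B", "A", "B"]

mapping = trivial_implementation(wall, fsa_run)
print(mapping)  # {'w0': 'A', 'w1': 'B', 'w2': 'A', 'w3': 'B'}
```

Under this mapping the wall's state sequence satisfies SMA's condition for "implementing" the automaton, which is exactly why SMA is said to trivialize computational description: the mapping is doing all the work, and the physical system none.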

In response to the trivialization criticism, and to restrict SMA, philosophers of mind have offered different accounts of computational systems, typically including the causal, semantic, syntactic, and mechanistic accounts. [17] Whereas the semantic account requires that the states of a computing system be representations, the syntactic account imposes a syntactic restriction instead. [17] The mechanistic account was first introduced by Gualtiero Piccinini in 2007. [18]

See also

References

  1. Piccinini, Gualtiero & Bahar, Sonya (2012). "Neural Computation and the Computational Theory of Cognition". Cognitive Science. https://onlinelibrary.wiley.com/doi/epdf/10.1111/cogs.12012
  2. Putnam, Hilary (1961). "Brains and Behavior", originally read as part of the program of the American Association for the Advancement of Science, Section L (History and Philosophy of Science), December 27, 1961; reprinted in Block (1983), and also, along with other papers on the topic, in Putnam, Mathematics, Matter and Method (1979).
  3. Horst, Steven (2005). "The Computational Theory of Mind". The Stanford Encyclopedia of Philosophy.
  4. Pinker, Steven (2002). The Blank Slate. New York: Penguin.
  5. Hobbes, Thomas. De Corpore.
  6. Searle, J. R. (1980). "Minds, brains, and programs". The Behavioral and Brain Sciences. 3 (3): 417–457. doi:10.1017/S0140525X00005756.
  7. Searle, J. R. (1992). The Rediscovery of the Mind. Cambridge, Massachusetts: MIT Press.
  8. Putnam, H. (1988). Representation and Reality. Cambridge, Massachusetts: MIT Press. ISBN 978-0-262-66074-7.
  9. Chalmers, D. J. (1996). "Does a rock implement every finite-state automaton?". Synthese. 108 (3): 309–333. doi:10.1007/BF00413692.
  10. Edelman, Shimon (2008). "On the Nature of Minds, or: Truth and Consequences". Journal of Experimental and Theoretical AI. 20 (3): 181–196. doi:10.1080/09528130802319086.
  11. Blackmon, James (2012). "Searle's Wall". Erkenntnis. 78: 109–117. doi:10.1007/s10670-012-9405-4.
  12. Penrose, Roger (1994). "Mathematical Intelligence". In Jean Khalfa (ed.), What is Intelligence?, chapter 5, pp. 107–136. Cambridge: Cambridge University Press.
  13. Ullian, Joseph S. (1971). Review of Hilary Putnam, "Minds and machines", in Minds and Machines, ed. Alan Ross Anderson, Prentice-Hall, 1964, pp. 72–97. Journal of Symbolic Logic. 36 (1): 177. doi:10.2307/2271581.
  14. Smythies, J. R. (1993). Review of The Rediscovery of the Mind by J. R. Searle. Psychological Medicine. 23 (4): 1043–1046. doi:10.1017/s0033291700026507.
  15. "Art, Mind, and Religion". Philosophical Books. 8 (3): 32. October 1967. doi:10.1111/j.1468-0149.1967.tb02995.x.
  16. Piccinini, Gualtiero (2015). "The Mechanistic Account". In Physical Computation. Oxford University Press. pp. 118–151. doi:10.1093/acprof:oso/9780199658855.003.0008. ISBN 978-0-19-965885-5.
  17. Piccinini, Gualtiero (2017). "Computation in Physical Systems". In Zalta, Edward N. (ed.), The Stanford Encyclopedia of Philosophy (Summer 2017 ed.). Metaphysics Research Lab, Stanford University.
  18. Piccinini, Gualtiero (2007). "Computing Mechanisms". Philosophy of Science. 74 (4): 501–526. doi:10.1086/522851.

Further reading