Language of thought hypothesis

The language of thought hypothesis (LOTH),[1] sometimes known as thought ordered mental expression (TOME),[2] is a view in linguistics, philosophy of mind and cognitive science, forwarded by American philosopher Jerry Fodor. It describes the nature of thought as possessing "language-like" or compositional structure (sometimes known as mentalese). On this view, simple concepts combine in systematic ways (akin to the rules of grammar in language) to build thoughts. In its most basic form, the theory states that thought, like language, has syntax.





Using empirical data drawn from linguistics and cognitive science to describe mental representation from a philosophical vantage point, the hypothesis states that thinking takes place in a language of thought (LOT): cognition and cognitive processes are only 'remotely plausible' when expressed as a system of representations that is "tokened" by a linguistic or semantic structure and operated upon by means of a combinatorial syntax.[1] Linguistic tokens used in mental language describe elementary concepts, which are operated upon by logical rules establishing causal connections to allow for complex thought. Syntax as well as semantics has a causal effect on the properties of this system of mental representations.



These mental representations are not present in the brain in the same way as symbols are present on paper; rather, the LOT is supposed to exist at the cognitive level, the level of thoughts and concepts. The LOTH has wide-ranging significance for a number of domains in cognitive science. It relies on a version of functionalist materialism, which holds that mental representations are actualized and modified by the individual holding the propositional attitude, and it challenges eliminative materialism and connectionism. It implies a strongly rationalist model of cognition in which many of the fundamentals of cognition are innate.[3][4][5]





The hypothesis applies to thoughts that have propositional content, and is not meant to describe everything that goes on in the mind. It appeals to the representational theory of thought to explain what those tokens actually are and how they behave. There must be a mental representation that stands in some unique relationship with the subject of the representation and has specific content. Complex thoughts get their semantic content from the content of the basic thoughts and the relations that they hold to each other. Thoughts can only relate to each other in ways that do not violate the syntax of thought.

The thought "John is tall" is clearly composed of two sub-parts, the concept of John and the concept of tallness, combined in a manner that may be expressed in first-order predicate calculus as a predicate 'T' ("is tall") that holds of the entity 'j' (John). A fully articulated proposal for a LOT would have to take into account greater complexities, such as quantification and propositional attitudes (the various attitudes people can have towards statements; for example, I might believe, or see, or merely suspect that John is tall).
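The compositional picture above can be made concrete with a small sketch. This is not Fodor's own formalism, only an illustrative encoding: atomic concepts are symbols, a proposition applies a predicate to an entity (T(j)), and a propositional attitude is an operator taking a whole proposition as its argument.

```python
# A minimal sketch (illustrative, not Fodor's formalism) of tokening the
# thought "John is tall" as a structured representation, plus an attitude
# taken toward that proposition.

from dataclasses import dataclass

@dataclass(frozen=True)
class Entity:
    name: str          # e.g. 'j' for John

@dataclass(frozen=True)
class Predicate:
    name: str          # e.g. 'T' for "is tall"

@dataclass(frozen=True)
class Proposition:
    predicate: Predicate
    argument: Entity   # T(j): the predicate holds of the entity

    def __str__(self):
        return f"{self.predicate.name}({self.argument.name})"

@dataclass(frozen=True)
class Attitude:
    verb: str          # 'believes', 'sees', 'suspects', ...
    content: Proposition

    def __str__(self):
        return f"{self.verb}({self.content})"

john = Entity("j")
tall = Predicate("T")
thought = Proposition(tall, john)        # T(j): "John is tall"
belief = Attitude("believes", thought)   # an attitude toward that proposition

print(thought)   # T(j)
print(belief)    # believes(T(j))
```

Because the same `Proposition` can be embedded under different `Attitude` operators, believing, seeing, and merely suspecting that John is tall all share one inner sentence, which is exactly the structure the hypothesis posits.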



  1. There can be no higher cognitive processes without mental representation. The only plausible psychological models represent higher cognitive processes as representational and computational, and thought needs a representational system as an object upon which to compute. We must therefore attribute a representational system to organisms for cognition and thought to occur.
  2. There is a causal relationship between our intentions and our actions. Because mental states are structured in a way that causes our intentions to manifest themselves in what we do, there is a connection between how we view the world and ourselves and what we do.


The language of thought hypothesis has been both controversial and groundbreaking. Some philosophers reject the LOTH, arguing that our public language is our mental language: a person who speaks English thinks in English. But others contend that complex thought is present even in those who do not possess a public language (e.g. babies, aphasics, and even higher-order primates), and therefore some form of mentalese must be innate.[citation needed]


The notion that mental states are causally efficacious diverges from behaviorists like Gilbert Ryle, who held that there is no break between the cause of a mental state and the effect of behavior. Rather, Ryle proposed that people act in some way because they are in a disposition to act in that way. On the LOTH, by contrast, such causal mental states are representational. An objection to this point comes from John Searle in the form of biological naturalism, a non-representational theory of mind that accepts the causal efficacy of mental states. Searle divides intentional states into low-level brain activity and high-level mental activity: the lower-level, nonrepresentational neurophysiological processes have causal power in intention and behavior, rather than some higher-level mental representation.[citation needed]

Tim Crane, in his book The Mechanical Mind,[6] states that, while he agrees with Fodor, his reason is very different. A logical objection challenges LOTH's explanation of how sentences in natural languages get their meaning. On that view, "Snow is white" is true if and only if P is true in the LOT, where P means the same thing in the LOT as "Snow is white" means in the natural language. Any symbol manipulation needs some way of deriving what those symbols mean.[6] If the meaning of natural-language sentences is explained in terms of sentences in the LOT, then sentences in the LOT must get their meaning from somewhere else, and an infinite regress of sentences getting their meaning threatens. Sentences in natural languages get their meaning from their users (speakers, writers),[6] so sentences in mentalese must get their meaning from the way in which they are used by thinkers, and so on ad infinitum. This regress is often called the homunculus regress.[6]

Daniel Dennett accepts that homunculi may be explained by other homunculi and denies that this would yield an infinite regress: each explanatory homunculus is "stupider" or more basic than the homunculus it explains, and the regress bottoms out at a basic level so simple that it does not need interpretation.[6] John Searle points out that it still follows that the bottom-level homunculi are manipulating some sorts of symbols.

LOTH implies that the mind has some tacit knowledge of the logical rules of inference and the linguistic rules of syntax (sentence structure) and semantics (concept or word meaning).[6] If LOTH cannot show that the mind knows that it is following the particular set of rules in question, then the mind is not computational, because it is not governed by computational rules.[3][6] Critics also point out the apparent incompleteness of this set of rules in explaining behavior: many conscious beings behave in ways that are contrary to the rules of logic, and this irrational behavior is not accounted for by any rules, showing that there is at least some behavior that does not follow this set of rules.[6]

Another objection within the representational theory of mind has to do with the relationship between propositional attitudes and representation. Dennett points out that a chess program can have the attitude of "wanting to get its queen out early" without having a representation or rule that explicitly states this. A multiplication program on a computer computes in the computer language of 1s and 0s, yielding representations that do not correspond with any propositional attitude.[3]

Susan Schneider has recently developed a version of LOT that departs from Fodor's approach in numerous ways. In her book The Language of Thought: A New Philosophical Direction, Schneider argues that Fodor's pessimism about the success of cognitive science is misguided, and she outlines an approach that integrates LOT with neuroscience. She also stresses a version of LOT that is not wedded to the extreme view that all concepts are innate. She fashions a new theory of mental symbols, and a related two-tiered theory of concepts, in which a concept's nature is determined by its LOT symbol type and its meaning.[4]

Relation to connectionism

Connectionism is an approach to artificial intelligence that accepts much of the same theoretical framework as LOTH, namely that mental states are computational and causally efficacious, and very often that they are representational. However, connectionism stresses the possibility of thinking machines, most often realized as artificial neural networks: interconnected sets of nodes. On this approach, mental states can create memory by modifying the strength of the connections between nodes over time. The "units" of such a network can be interpreted as neurons or groups of neurons. A learning algorithm changes the connection weights over time, allowing the network to modify its connections. An activation is a numerical value representing some aspect of a unit at a given time, and activation spreads over time from an activated unit to all the other units connected to it.
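The mechanics described above can be sketched in a few lines. This is a minimal illustration, not a specific published model: the three-unit topology and the Hebbian-style weight update are assumptions chosen only to show activation spreading and connection-strength learning.

```python
# A minimal sketch of the connectionist picture: units hold numerical
# activations, weighted connections spread activation between units, and
# a learning rule modifies connection strengths over time.

import math

def sigmoid(x):
    """Squash a weighted sum into the range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

# weights[i][j] is the connection strength from unit i to unit j.
weights = [[0.0, 0.8, 0.2],
           [0.0, 0.0, 0.9],
           [0.0, 0.0, 0.0]]

def spread(activations, weights):
    """One step of activation spreading: each unit's new activation is a
    squashed weighted sum of the activations feeding into it."""
    n = len(activations)
    return [sigmoid(sum(activations[i] * weights[i][j] for i in range(n)))
            for j in range(n)]

def hebbian_update(activations, weights, rate=0.1):
    """Hebbian-style learning: strengthen existing connections between
    units that are active at the same time."""
    n = len(activations)
    for i in range(n):
        for j in range(n):
            if weights[i][j] > 0:
                weights[i][j] += rate * activations[i] * activations[j]

acts = [1.0, 0.0, 0.0]         # activate unit 0
acts = spread(acts, weights)   # activation flows along the connections
hebbian_update(acts, weights)  # co-active units get stronger connections
```

The point of contrast with LOTH is that "memory" here lives in the changed weight values, not in any stored symbol structure.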

Since connectionist models can change over time, supporters of connectionism claim that connectionism can solve the problems that LOTH poses for classical AI. Those problems show that machines with a LOT syntactical framework are very often much better than human minds at solving problems and storing data, yet much worse at things the human mind is quite adept at, such as recognizing facial expressions and objects in photographs and understanding nuanced gestures.[6] Fodor defends LOTH by arguing that a connectionist model is just some realization or implementation of the classical computational theory of mind and therefore necessarily employs a symbol-manipulating LOT.

Fodor and Zenon Pylyshyn use the notion of cognitive architecture in their defense. Cognitive architecture is the set of basic functions of an organism with representational input and output. They argue that it is a law of nature that cognitive capacities are productive, systematic and inferentially coherent: beings that can understand one sentence of a certain structure have the ability to produce and understand other sentences of that structure.[7] A cognitive model must have a cognitive architecture that explains these laws and properties in some way compatible with the scientific method. Fodor and Pylyshyn say that cognitive architecture can only explain the property of systematicity by appealing to a system of representations, and that connectionism either employs a cognitive architecture of representations or it does not. If it does, then connectionism uses LOT; if it does not, then it is empirically false.[3]
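The systematicity claim can be illustrated with a toy combinatorial syntax. The two-name, one-verb lexicon is an assumption made purely for illustration: because sentences are generated from reusable constituents by a single rule, the capacity to token "John loves Mary" automatically brings the capacity to token "Mary loves John".

```python
# A minimal sketch of systematicity: a combinatorial syntax that admits
# one subject-verb-object sentence necessarily admits its structural
# permutations, because sentences are built from reusable constituents.

from itertools import product

names = {"John", "Mary"}   # toy lexicon (illustrative assumption)
verbs = {"loves"}

def sentences(names, verbs):
    """All subject-verb-object sentences the combinatorial rule admits."""
    return {f"{s} {v} {o}" for s, v, o in product(names, verbs, names)}

generated = sentences(names, verbs)

# If "John loves Mary" is generated, "Mary loves John" comes for free:
# both are instances of the same rule applied to the same constituents.
assert "John loves Mary" in generated
assert "Mary loves John" in generated
```

A model that stored each sentence as an unstructured whole would have no comparable guarantee, which is the gap Fodor and Pylyshyn press against non-representational architectures.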

Connectionists have responded to Fodor and Pylyshyn by denying that connectionism uses LOT, by denying that cognition is essentially a function that uses representational input and output, or by denying that systematicity is a law of nature that rests on representation.[citation needed]

Empirical testing

Since its formulation, the LOTH has been subjected to empirical testing; not all experiments have confirmed the hypothesis.



  1. "Stanford Encyclopedia of Philosophy".
  2. Tillas, A. (2015). "Language as grist to the mill of cognition". Cognitive Processing. 16 (3): 219–243. doi:10.1007/s10339-015-0656-2. PMID 25976728.
  3. Aydede, Murat (2004). "The Language of Thought Hypothesis".
  4. Schneider, Susan (2011). The Language of Thought: A New Philosophical Direction. Cambridge, MA: MIT Press.
  5. Fodor, Jerry A. (1975). The Language of Thought. Harvard University Press. ISBN 9780674510302.
  6. Crane, Tim (2005). The Mechanical Mind: A Philosophical Introduction to Minds, Machines and Mental Representation (2nd ed.). London: Routledge. ISBN 978-0-415-29031-9.
  7. Garson, James (2010). "Connectionism".
  8. Shepard, Roger N.; Metzler, Jacqueline (1971). "Mental Rotation of Three-Dimensional Objects". Science. 171 (3972): 701–703. doi:10.1126/science.171.3972.701. PMID 5540314.
  9. Coppola, M.; Brentari, D. (2014). "From iconic handshapes to grammatical contrasts: longitudinal evidence from a child homesigner". Frontiers in Psychology. 5: 830.
  10. Downey, G. (2010). "Life without language". Retrieved December 19, 2015.
  11. Bloom, P.; Keil, F. (2001). "Thinking Through Language". Retrieved December 19, 2015.
  12. Maurits, Luke (2011). Representation, Information Theory and Basic Word Order. University of Adelaide. Accessed 2018-08-14.