Symbol grounding problem

The symbol grounding problem is a concept in the fields of artificial intelligence, cognitive science, philosophy of mind, and semantics. It addresses the challenge of connecting symbols, such as words or abstract representations, to the real-world objects or concepts they refer to. In essence, it is about how symbols acquire meaning in a way that is tied to the physical world. It is concerned with how it is that words (symbols in general) get their meanings, [1] and hence is closely related to the problem of what meaning itself really is. The problem of meaning is in turn related to the problem of how it is that mental states are meaningful, and hence to the problem of consciousness: what is the connection between certain physical systems and the contents of subjective experiences.

Definitions

The symbol grounding problem

In his 1990 paper, Stevan Harnad implicitly gives several definitions of the symbol grounding problem: [2]

  1. The symbol grounding problem is the problem of how to make the "...semantic interpretation of a formal symbol system..." "... intrinsic to the system, rather than just parasitic on the meanings in our heads..." "...in anything but other meaningless symbols..."
  2. The symbol grounding problem is the problem of how "...the meanings of the meaningless symbol tokens, manipulated solely on the basis of their (arbitrary) shapes..." can be grounded "...in anything but other meaningless symbols."
  3. "...the symbol grounding problem is referred to as the problem of intrinsic meaning (or 'intentionality') in Searle's (1980) celebrated 'Chinese Room Argument'"
  4. The symbol grounding problem is the problem of how you can "...ever get off the symbol/symbol merry-go-round..."

To answer the question of whether groundedness is a necessary condition for meaning, a formulation of the symbol grounding problem is required: it is the problem of how to make the "...semantic interpretation of a formal symbol system..." "...intrinsic to the system, rather than just parasitic on the meanings in our heads..." "...in anything but other meaningless symbols". [2]

Symbol system

In the same 1990 paper, Harnad defines a "symbol system" relative to the symbol grounding problem. As defined by Harnad, a "symbol system" is "...a set of arbitrary 'physical tokens' scratches on paper, holes on a tape, events in a digital computer, etc. that are ... manipulated on the basis of 'explicit rules' that are ... likewise physical tokens and strings of tokens." [2]
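
To make the definition concrete, here is a minimal sketch (an illustration only, not code from Harnad's paper; the tokens and rewrite rules are invented) of a symbol system in this sense: arbitrary tokens manipulated by explicit rules that consult only the tokens' shapes, never anything they might mean.

```python
# A minimal sketch of a "symbol system" in Harnad's sense: arbitrary tokens
# rewritten by explicit, shape-based rules. The rules match tokens purely by
# their shapes; no rule ever consults a meaning.

# Hypothetical rewrite rules: each left-hand shape maps to a right-hand shape.
RULES = {
    ("A", "B"): ("B", "A"),   # an A before a B is swapped
    ("B", "B"): ("C",),       # a double B contracts to a C
}

def rewrite_once(tokens):
    """Apply the first rule whose left-hand side matches, by shape alone."""
    for i in range(len(tokens)):
        for lhs, rhs in RULES.items():
            if tuple(tokens[i:i + len(lhs)]) == lhs:
                return tokens[:i] + list(rhs) + tokens[i + len(lhs):]
    return tokens  # no rule applies: the token string is in normal form

state = list("ABB")
for _ in range(5):
    state = rewrite_once(state)
print(state)  # ['C', 'A'] -- syntactically lawful, semantically empty
```

The manipulations are perfectly well defined, yet nothing in the program connects "A", "B", or "C" to anything outside the program; any interpretation is supplied by us.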

Formality of symbols

Harnad holds that the symbol grounding problem is exemplified in John R. Searle's Chinese room argument. [3] The sense of "formal" as applied to the symbols of a formal symbol system can accordingly be taken from Searle's 1980 article "Minds, brains, and programs", in which the argument is presented:

[...] all that 'formal' means here is that I can identify the symbols entirely by their shapes. [4]

Background

Referents

A referent is the thing that a word or phrase refers to as distinguished from the word's meaning. [5] This is most clearly illustrated using the proper names of concrete individuals, but it is also true of names of kinds of things and of abstract properties: (1) "Tony Blair", (2) "the prime minister of the UK during the year 2004", and (3) "Cherie Blair's husband" all have the same referent, but not the same meaning.
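
The distinction can be illustrated with a small sketch (the toy "world" and the three lookup procedures are invented for illustration): three expressions with different meanings can nevertheless pick out one and the same referent.

```python
# A toy "world" (an assumption for illustration): one individual, described
# by a name, a 2004 role, and a spouse.
WORLD = {"tony_blair": {"role_2004": "prime_minister_uk",
                        "spouse": "cherie_blair"}}

def by_name():
    """Referent of the proper name 'Tony Blair'."""
    return "tony_blair"

def by_role():
    """Referent of 'the prime minister of the UK during the year 2004'."""
    return next(k for k, v in WORLD.items()
                if v["role_2004"] == "prime_minister_uk")

def by_spouse():
    """Referent of 'Cherie Blair's husband'."""
    return next(k for k, v in WORLD.items()
                if v["spouse"] == "cherie_blair")

# Three different procedures (meanings), one referent:
print(by_name() == by_role() == by_spouse())  # True
```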

Referential process

In the 19th century, philosopher Charles Sanders Peirce suggested what some[who?] think is a similar model: according to his triadic sign model, meaning requires (1) an interpreter, (2) a sign or representamen, (3) an object, and is (4) the virtual product of an endless regress and progress called semiosis. [6] Some[who?] have interpreted Peirce as addressing the problem of grounding, feelings, and intentionality for the understanding of semiotic processes. [7] In recent years, Peirce's theory of signs has been rediscovered by an increasing number of artificial intelligence researchers in the context of the symbol grounding problem. [8]

Grounding process

There would be no connection at all between written symbols and any intended referents if there were no minds mediating those intentions, via their own internal means of picking out those intended referents. So the meaning of a word on a page is "ungrounded." Nor would looking it up in a dictionary help: if one tried to look up the meaning of a word one did not understand in a dictionary of a language one did not already understand, one would just cycle endlessly from one meaningless definition to another. One's search for meaning would be ungrounded. In contrast, the meanings of the words in one's head (those words one does understand) are "grounded".[citation needed] That mental grounding of the meanings of words mediates between the words on any external page one reads (and understands) and the external objects to which those words refer. [9] [10]
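
The dictionary regress can be made vivid with a toy sketch (the three-word "dictionary" is invented): every lookup yields only more ungrounded symbols, so the search never terminates in anything outside the symbol system.

```python
# A made-up dictionary in which every word is defined only by other words.
DICTIONARY = {
    "gavagai": ["florp", "zib"],
    "florp":   ["zib", "gavagai"],
    "zib":     ["gavagai", "florp"],
}

def lookup_chain(word, steps=6):
    """Follow definitions word by word; no step ever leaves the symbol system."""
    chain = [word]
    for _ in range(steps):
        word = DICTIONARY[word][0]  # look up the first word of the definition
        chain.append(word)
    return chain

print(lookup_chain("gavagai"))
# ['gavagai', 'florp', 'zib', 'gavagai', 'florp', 'zib', 'gavagai']
```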

Requirements for symbol grounding

Another symbol system is natural language. [11] On paper or in a computer, language, too, is just a formal symbol system, manipulable by rules based on the arbitrary shapes of words. But in the brain, meaningless strings of squiggles become meaningful thoughts. Harnad has suggested two properties that might be required to make this difference:[citation needed]

  1. Capacity to pick out referents
  2. Consciousness

Capacity to pick out referents

One property that static paper, and usually even a dynamic computer, lacks but the brain possesses is the capacity to pick out the referents of its symbols. This is what was discussed earlier, and it is what the hitherto undefined term "grounding" refers to. A symbol system alone, whether static or dynamic, cannot have this capacity (any more than a book can), because picking out referents is not just a computational (implementation-independent) property; it is a dynamical (implementation-dependent) property.

To be grounded, the symbol system would have to be augmented with nonsymbolic, sensorimotor capacities: the capacity to interact autonomously with the world of objects, events, actions, properties and states that its symbols are systematically interpretable (by us) as referring to. It would have to be able to pick out the referents of its symbols, and its sensorimotor interactions with the world would have to fit coherently with the symbols' interpretations.

The symbols, in other words, need to be connected directly to (i.e., grounded in) their referents; the connection must not depend only on the connections made by the brains of external interpreters like us. The symbol system alone, without this capacity for direct grounding, is not a viable candidate for being whatever it is that is really going on in our brains when we think meaningful thoughts. [12]

Meaning as the ability to recognize instances (of objects) or perform actions is specifically treated in the paradigm called "Procedural Semantics", described in a number of papers including "Procedural Semantics" by Philip N. Johnson-Laird [13] and expanded by William A. Woods in "Meaning and Links". [14] A brief summary in Woods' paper reads: "The idea of procedural semantics is that the semantics of natural language sentences can be characterized in a formalism whose meanings are defined by abstract procedures that a computer (or a person) can either execute or reason about. In this theory the meaning of a noun is a procedure for recognizing or generating instances, the meaning of a proposition is a procedure for determining if it is true or false, and the meaning of an action is the ability to do the action or to tell if it has been done."
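
A brief sketch may help fix the idea (the tiny "world", predicates, and procedures below are illustrative assumptions, not code from Johnson-Laird or Woods): each kind of meaning is rendered as a procedure that can be executed.

```python
# A toy world state for the procedures to consult and change.
world = {"cup": {"holds_liquid": True, "on_table": False}}

# Meaning of a noun: a procedure for recognizing instances.
def is_cup(thing):
    return thing.get("holds_liquid", False)

# Meaning of a proposition: a procedure for deciding whether it is true.
def cup_is_on_table():
    return is_cup(world["cup"]) and world["cup"]["on_table"]

# Meaning of an action: the ability to do it and to tell whether it was done.
def put_cup_on_table():
    world["cup"]["on_table"] = True

print(cup_is_on_table())  # False: the proposition is not yet true
put_cup_on_table()        # perform the action
print(cup_is_on_table())  # True: executing the action made it true
```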

Consciousness

The necessity of groundedness, in other words, takes us from the level of the pen-pal Turing test, which is purely symbolic (computational), to the robotic Turing test, which is hybrid symbolic/sensorimotor. [15] [16] Meaning is grounded in the robotic capacity to detect, categorize, identify, and act upon the things that words and sentences refer to (see entries for Affordance and for Categorical perception). On the other hand, if the symbols (words and sentences) refer to the very bits of '0' and '1', directly connected to their electronic implementations, which a (any?) computer system can readily manipulate (thus detect, categorize, identify and act upon), then even non-robotic computer systems could be said to be "sensorimotor" and hence able to "ground" symbols in this narrow domain.

To categorize is to do the right thing with the right kind of thing. The categorizer must be able to detect the sensorimotor features of the members of the category that reliably distinguish them from the nonmembers. These feature detectors must either be inborn or learned. The learning can be based on trial-and-error induction, guided by feedback from the consequences of correct and incorrect categorization; or, in our own linguistic species, the learning can also be based on verbal descriptions or definitions. The description or definition of a new category, however, can only convey the category and ground its name if the words in the definition are themselves already grounded category names. [17] According to Harnad, grounding must ultimately be sensorimotor, to avoid infinite regress. [18]
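
Both routes to grounding a category name can be sketched in a few lines (a simple perceptron stands in here for whatever learned feature detector the brain actually uses; the features, samples, and category names are invented for illustration).

```python
# Trial-and-error route: learn a "striped" detector from feedback on labeled
# sensorimotor samples. Each sample is (feature_vector, is_striped).
samples = [((1.0, 0.1), True), ((0.9, 0.3), True),
           ((0.1, 0.8), False), ((0.2, 0.9), False)]

w, bias = [0.0, 0.0], 0.0
for _ in range(20):                          # repeated trials
    for x, label in samples:
        guess = (w[0]*x[0] + w[1]*x[1] + bias) > 0
        if guess != label:                   # feedback from miscategorization
            err = 1 if label else -1
            w = [w[0] + 0.1*err*x[0], w[1] + 0.1*err*x[1]]
            bias += 0.1*err

def striped(x):
    """Learned sensorimotor feature detector."""
    return (w[0]*x[0] + w[1]*x[1] + bias) > 0

def horse(x):
    """Assume 'horse' was grounded earlier in the same way."""
    return x[1] > 0.5

# Verbal route: "zebra" is grounded by a definition ("striped horse") whose
# words are themselves already grounded detectors.
def zebra(x):
    return horse(x) and striped(x)

print(striped((0.95, 0.2)), zebra((0.8, 0.7)))  # True True
```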

Harnad thus points at consciousness as a second property. The problem of discovering the causal mechanism for successfully picking out the referent of a category name can in principle be solved by cognitive science. But the problem of explaining how consciousness could play an "independent" role in doing so is probably insoluble, except on pain of telekinetic dualism. Perhaps symbol grounding (i.e., robotic TT capacity) is enough to ensure that conscious meaning is present, but then again, perhaps not. In either case, there is no way we can hope to be any the wiser—and that is Turing's methodological point. [19] [20]

See also

Algorithm characterizations
Artificial consciousness
Categorical perception
Chinese room
Cognitive science
Computational theory of mind
Computing Machinery and Intelligence
Concept
Consciousness
Denotation
Developmental robotics
Embodied cognition
Functionalism (philosophy of mind)
Meaning (philosophy)
Philosophy of artificial intelligence
Philosophy of language
Physical symbol system
Proper name
Referent
Stevan Harnad

References

  1. Vogt, Paul. "Language evolution and robotics: issues on symbol grounding and language acquisition." Artificial cognition systems. IGI Global, 2007. 176–209.
  2. Harnad 1990.
  3. Harnad 2001a.
  4. Searle 1980.
  5. Frege 1952.
  6. Peirce, Charles S. The philosophy of Peirce: selected writings. New York: AMS Press, 1978.
  7. Short, T. L. "Semeiosis and Intentionality". Transactions of the Charles S. Peirce Society 17(3) (Summer 1981): 197–223.
  8. Steiner, Pierre. "C.S. Peirce and Artificial Intelligence: Historical Heritage and (New) Theoretical Stakes". SAPERE – Special Issue on Philosophy and Theory of AI 5 (2013): 265–276.
  9. This is the causal, contextual theory of reference that Ogden & Richards presented in The Meaning of Meaning (1923).
  10. Cf. semantic externalism as claimed in "The Meaning of 'Meaning'" in Mind, Language and Reality (1975) by Putnam, who argues: "Meanings just ain't in the head." He and Dummett have since come to favor anti-realism, drawing on intuitionism, psychologism, constructivism and contextualism.
  11. Fodor 1975.
  12. Cangelosi & Harnad 2001.
  13. Johnson-Laird, Philip N. "Procedural Semantics". Cognition 5 (1977): 189. See http://www.nyu.edu/gsas/dept/philo/courses/mindsandmachines/Papers/procedural.pdf
  14. Woods, William A. "Meaning and Links". AI Magazine 28(4) (2007). See http://www.aaai.org/ojs/index.php/aimagazine/article/view/2069/2056
  15. Harnad 2000.
  16. Harnad 2007.
  17. Blondin-Massé 2008.
  18. Harnad 2005.
  19. Harnad 2001b.
  20. Harnad 2003.

Works cited
