Physical symbol system

A physical symbol system (also called a formal system) takes physical patterns (symbols) and combines them into structures (expressions), which it manipulates (using processes) to produce new expressions.
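
As a concrete sketch of this definition, consider the following minimal Python model (illustrative only; the names and functions are hypothetical, not part of any standard formulation): symbols are atomic tokens, expressions are structures built from them, and a process maps existing expressions to new ones.

    # A minimal model of a physical symbol system (illustrative sketch).
    # Symbols are atomic tokens; expressions are nested tuples of symbols;
    # a "process" produces new expressions from existing ones.

    def designate(expr):
        """Render an expression as a readable string."""
        if isinstance(expr, tuple):
            return "(" + " ".join(designate(e) for e in expr) + ")"
        return expr  # an atomic symbol, e.g. "dog"

    def conjoin(a, b):
        """A simple process: combine two expressions into a new expression."""
        return ("and", a, b)

    e1 = ("has", "dog", "tail")            # expression composed of symbols
    e2 = conjoin(e1, ("barks", "dog"))     # process yields a new expression
    print(designate(e2))                   # (and (has dog tail) (barks dog))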

The physical symbol system hypothesis (PSSH) is a position in the philosophy of artificial intelligence formulated by Allen Newell and Herbert A. Simon. They wrote:

"A physical symbol system has the necessary and sufficient means for general intelligent action." [1]

This claim implies both that human thinking is a kind of symbol manipulation (because a symbol system is necessary for intelligence) and that machines can be intelligent (because a symbol system is sufficient for intelligence). [2]

The idea has philosophical roots in Hobbes (who claimed reasoning was "nothing more than reckoning"), Leibniz (who attempted to create a logical calculus of all human ideas), Hume (who thought perception could be reduced to "atomic impressions") and even Kant (who analyzed all experience as controlled by formal rules). [3] The latest version is called the computational theory of mind, associated with philosophers Hilary Putnam and Jerry Fodor. [4]

Examples

Examples of physical symbol systems include:

  • Formal logic: the symbols are words like "and", "or", "not", "for all x"; the expressions are statements in formal logic that can be true or false; the processes are the rules of logical deduction.
  • Algebra: the symbols are "+", "×", "x", "y", "1", "2", "3", etc.; the expressions are equations; the processes are the rules of algebra that allow an expression to be manipulated without changing its truth.
  • Chess: the symbols are the pieces; the processes are the legal chess moves; the expressions are the positions of all the pieces on the board.
  • A computer running a program: the symbols and expressions are data structures; the process is the program that changes them.

The physical symbol system hypothesis claims that both of these are also examples of physical symbol systems:

  • Intelligent human thought: the symbols are encoded in our brains; the expressions are thoughts; the processes are the mental operations of thinking.
  • A running artificial intelligence program: the symbols and expressions are data structures; the processes are programs that manipulate them.

Evidence for the hypothesis

Two lines of evidence suggested to Allen Newell and Herbert A. Simon that "symbol manipulation" was the essence of both human and machine intelligence: psychological experiments on human beings and the development of artificial intelligence programs.

Psychological experiments and computer models

Newell and Simon carried out psychological experiments showing that, for difficult problems in logic, planning or any kind of "puzzle solving", people proceeded carefully step by step: they considered several different possible ways forward, selected the most promising one, and backed up when a possibility hit a dead end. Each possible solution was visualized with symbols, such as words, numbers or diagrams. This was "symbol manipulation": the people were iteratively exploring a formal system, looking for a matching pattern that solved the puzzle. [5] [6] [7] Newell and Simon were able to simulate the step-by-step problem-solving skills of people with computer programs; they created programs that used the same algorithms as people and were able to solve the same problems.
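
The behavior described above corresponds to a heuristic backtracking search. The sketch below (Python; the toy puzzle, scoring function and all names are hypothetical, not Newell and Simon's actual programs) tries the most promising move first and backs up when a branch dead-ends:

    def solve(state, is_goal, moves, score, visited=None):
        """Heuristic backtracking search: consider possible ways forward,
        try the most promising first, and back up at dead ends."""
        if visited is None:
            visited = set()
        if is_goal(state):
            return [state]
        visited.add(state)
        # Consider several possible ways forward, best-scoring first.
        for nxt in sorted(moves(state), key=score, reverse=True):
            if nxt in visited:
                continue
            path = solve(nxt, is_goal, moves, score, visited)
            if path is not None:          # promising branch succeeded
                return [state] + path
        return None                       # dead end: back up

    # Toy puzzle: reach 20 from 1, where each step doubles or adds 3.
    path = solve(1, lambda s: s == 20,
                 lambda s: [s * 2, s + 3] if s < 20 else [],
                 score=lambda s: -abs(20 - s))
    print(path)                           # [1, 4, 8, 11, 14, 17, 20]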

This type of research, using both experimental psychology and computer models, was called "cognitive simulation" by Hubert Dreyfus. [8] Their work was profoundly influential: it contributed to the cognitive revolution of the 1960s, the founding of the field of cognitive science, and cognitivism in psychology.

This line of research suggested that human problem solving consisted primarily of the manipulation of high-level symbols.

Artificial intelligence programs in the 1950s and 60s

In the early decades of AI research there were many successful programs that used high-level symbol processing. These programs demonstrated skills that many people at the time had assumed were impossible for machines, such as solving algebra word problems (STUDENT), proving theorems in logic (Logic Theorist), learning to play competitive checkers (Arthur Samuel's checkers), and communicating in natural language (ELIZA, SHRDLU). [9] [10] [11]

The success of these programs suggested that symbol processing systems could simulate any intelligent action.

Clarifications

The physical symbol system hypothesis becomes trivial, incoherent or irrelevant unless we recognize the distinctions between "digitized signals" and "symbols", between "narrow" AI and general intelligence, and between consciousness and intelligent behavior.

Semantic symbols vs. dynamic signals

The physical symbol system hypothesis is only interesting if we restrict the "symbols" to things that have a recognizable meaning or denotation and that can be composed with other symbols to create more complex symbols, like <dog> and <tail>. It doesn't apply to the simple abstract 0s and 1s in the memory of a digital computer or the stream of 0s and 1s passing through the perceptual apparatus of a robot. It also doesn't apply to matrices of unidentified numbers, such as those used in neural networks or support vector machines. These may technically be symbols, but it is not always possible to determine exactly what each symbol stands for. This is not what Newell and Simon had in mind, and the argument becomes trivial if we include such cases.
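
The contrast can be made concrete with a hypothetical Python sketch (the values are invented for illustration): the first representation is composed of symbols with recognizable denotations, while the second is a list of numbers none of which stands for anything identifiable.

    # Semantically interpretable: each token denotes something recognizable,
    # and tokens compose into a more complex symbol.
    dog_with_tail = ("has", "dog", "tail")

    # Not semantically interpretable: a learned embedding for "dog".
    # No individual number denotes anything we can identify.
    dog_embedding = [0.31, -1.42, 0.07, 2.15]   # hypothetical values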

David Touretzky and Dean Pomerleau consider what would follow if we interpret the "symbols" in the PSSH to be binary digits of digital hardware. In this version of the hypothesis, no distinction is being made between "symbols" and "signals". Here the physical symbol system hypothesis asserts merely that intelligence can be digitized. This is a weaker claim. Indeed, Touretzky and Pomerleau write that if symbols and signals are the same thing, then "[s]ufficiency is a given, unless one is a dualist or some other sort of mystic, because physical symbol systems are Turing-universal." [12] The widely accepted Church–Turing thesis holds that any Turing-universal system can simulate any conceivable process that can be digitized, given enough time and memory. Since any digital computer is Turing-universal, any digital computer can, in theory, simulate anything that can be digitized to a sufficient level of precision, including the behavior of intelligent organisms. The necessary condition of the physical symbol system hypothesis can likewise be finessed, since we are willing to accept almost any signal as a form of "symbol", and all intelligent biological systems have signal pathways. [12]

The same issue applies to the unidentified numbers that appear in the matrices of a neural network or a support vector machine. These programs use the same mathematics as a digital simulation of a dynamical system, and are better understood as "dynamic systems" than as "physical symbol systems". Nils Nilsson wrote: "any physical process can be simulated to any desired degree of accuracy on a symbol-manipulating computer, but an account of such a simulation in terms of symbols, instead of signals, can be unmanageably cumbersome." [13]
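
A single neural-network layer illustrates the point (a minimal Python sketch with invented weights): the computation transforms numeric signals, and an account of it in terms of symbols, one per multiplication, would indeed be unmanageably cumbersome.

    import math

    # One layer of a neural network: y = tanh(W x). The numbers flowing
    # through are signals in a dynamical system, not denoting symbols.
    W = [[0.5, -1.2, 0.3],
         [1.1,  0.4, -0.7]]     # hypothetical learned weights
    x = [0.9, -0.2, 0.6]        # input signal

    y = [math.tanh(sum(w * v for w, v in zip(row, x))) for row in W]
    print(y)                    # two numbers with no symbolic reading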

General intelligence vs. "narrow" intelligence

The PSSH refers to "general intelligent action" -- that is, to every activity that we would consider "intelligent". Thus it is the claim that artificial general intelligence can be achieved using only symbolic methods. It does not refer to "narrow" applications.

Artificial intelligence research has succeeded in developing many programs that are capable of intelligently solving particular problems. However, AI research has so far not been able to produce a system with artificial general intelligence, the ability to solve a variety of novel problems as humans do. Thus, this criticism of the PSSH concerns the limits of AI in the future and does not apply to any current research or programs.

Consciousness vs. intelligent action

The PSSH refers to "intelligent action" -- that is, the behavior of the machine -- it does not refer to the "mental states", "mind", "consciousness", or the "experiences" of the machine. "Consciousness", as far as neurology can determine, is not something that can deduced from the behavior of an agent: it is always possible that the machine is simulating the experience of consciousness, without actually experiencing it, similar to the way a perfectly written fictional character might simulate a person with consciousness.

Thus, the PSSH is not relevant to positions which refer to "mind" or "consciousness", such as John Searle's Strong AI hypothesis:

The appropriately programmed computer with the right inputs and outputs would thereby have a mind in exactly the same sense human beings have minds. [14] [15]

Evidence against the hypothesis

Nils Nilsson has identified four main "themes", or grounds on which the physical symbol system hypothesis has been attacked. [16]

  1. The "erroneous claim that the [physical symbol system hypothesis] lacks symbol grounding" which is presumed to be a requirement for general intelligent action.
  2. The common belief that AI requires non-symbolic processing (that which can be supplied by a connectionist architecture for instance).
  3. The common statement that the brain is simply not a computer and that "computation as it is currently understood, does not provide an appropriate model for intelligence".
  4. And last of all that it is also believed in by some that the brain is essentially mindless, most of what takes place are chemical reactions and that human intelligent behaviour is analogous to the intelligent behaviour displayed for example by ant colonies.

Evidence the brain does not always use symbols

If the human brain does not use symbolic reasoning to create intelligent behavior, then the necessary side of the hypothesis is false, and human intelligence is the counter-example.

Dreyfus

Hubert Dreyfus attacked the necessary condition of the physical symbol system hypothesis, calling it "the psychological assumption" and defining it thus:

  • The mind can be viewed as a device operating on bits of information according to formal rules. [17]

Dreyfus argued against this by showing that human intelligence and expertise depend primarily on unconscious instincts rather than conscious symbolic manipulation. Experts solve problems quickly by using their intuitions, rather than by step-by-step trial-and-error search. Dreyfus argued that these unconscious skills could never be captured in formal rules. [18]

Tversky and Kahneman

Amos Tversky and Daniel Kahneman showed experimentally that human judgment under uncertainty typically relies on fast heuristics and exhibits systematic biases, rather than proceeding by the step-by-step manipulation of formal rules.

Embodied cognition

George Lakoff, Mark Turner and others have argued that our abstract skills in areas such as mathematics, ethics and philosophy depend on unconscious skills that derive from the body, and that conscious symbol manipulation is only a small part of our intelligence. [citation needed]

Evidence that symbolic AI can't efficiently generate intelligence for all problems

It is impossible to prove that symbolic AI will never produce general intelligence, but if we cannot find an efficient way to solve particular problems with symbolic AI, this is evidence that the sufficient side of the PSSH is unlikely to be true.

Intractability
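
Symbolic search is subject to combinatorial explosion: the number of states that must be examined grows exponentially with the depth of the search, so exhaustive symbol manipulation quickly becomes infeasible for realistic problems. A back-of-the-envelope calculation (Python; the branching factors are illustrative) makes the point:

    # Leaf nodes in a search tree with branching factor b and depth d.
    for b, d in [(10, 5), (10, 10), (35, 10)]:   # 35 ~ chess midgame
        print(f"b={b}, d={d}: {b**d:,} states")
    # b=10, d=5:  100,000 states
    # b=10, d=10: 10,000,000,000 states
    # b=35, d=10: 2,758,547,353,515,625 states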

Common sense knowledge, frame, qualification and ramification problems

Symbolic programs proved very difficult to supply with the enormous breadth of common sense knowledge that everyday reasoning requires, and formal logic struggled to represent everything that stays the same after an action (the frame problem), all the preconditions an action requires (the qualification problem), and all of its indirect side effects (the ramification problem).

Moravec's paradox

Moravec's paradox is the observation that the high-level reasoning targeted by symbolic AI requires relatively little computation, whereas the low-level sensorimotor skills that humans find effortless require enormous computational resources.

Evidence that sub-symbolic or neurosymbolic AI programs can generate intelligence

If sub-symbolic AI programs, such as deep learning, can intelligently solve problems, then this is evidence that the necessary side of the PSSH is false.

If hybrid approaches that combine symbolic AI with other approaches can efficiently solve a wider range of problems than either technique alone, this is evidence that the necessary side is true and the sufficient side is false.

Brooks

Rodney Brooks of MIT was able to build robots with superior ability to move and survive without using symbolic reasoning at all. Brooks (and others, such as Hans Moravec) discovered that our most basic skills of motion, survival, perception, balance and so on did not seem to require high-level symbols at all; in fact, using high-level symbols was more complicated and less successful.

In his 1990 paper "Elephants Don't Play Chess", robotics researcher Rodney Brooks took direct aim at the physical symbol system hypothesis, arguing that symbols are not always necessary since "the world is its own best model. It is always exactly up to date. It always has every detail there is to be known. The trick is to sense it appropriately and often enough." [19]
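
Brooks's alternative can be sketched as a purely reactive control loop (hypothetical Python, not Brooks's actual subsumption-architecture code): the agent maintains no internal symbolic model and instead re-senses the world on every cycle.

    def control_step(sensors):
        """Reactive rule: sense the world directly and act; keep no
        internal symbolic model of the environment."""
        if sensors["obstacle_ahead"]:
            return "turn"             # avoid the obstacle
        return "move_forward"

    # Each cycle reads fresh sensor values: the world is its own model.
    print(control_step({"obstacle_ahead": False}))   # move_forward
    print(control_step({"obstacle_ahead": True}))    # turn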

Connectionism and deep learning

In 2012 AlexNet, a deep learning network, outperformed all other programs in classifying images on ImageNet by a substantial margin. In the years since, deep learning has proved much more successful than symbolic AI in many domains. [citation needed]

Hybrid AI

Hybrid (neuro-symbolic) systems combine neural networks with symbolic reasoning, applying each technique to the kinds of problems it handles best.

Symbol grounding

The symbol grounding problem, posed by Stevan Harnad, asks how the symbols of a symbol system come to be connected to the things in the world that they denote, rather than being defined only in terms of other symbols.

Notes

  1. Newell & Simon 1976, p. 116 and Russell & Norvig 2003, p. 18
  2. Nilsson 2007, p. 1.
  3. Dreyfus 1979, p. 156; Haugeland, pp. 15–44.
  4. Horst 2005
  5. Newell, Shaw & Simon 1958.
  6. McCorduck 2004, pp. 450–451.
  7. Crevier 1993, pp. 258–263.
  8. Dreyfus 1979, pp. 130–148.
  9. McCorduck 2004, pp. 243–252.
  10. Crevier 1993, pp. 52–107.
  11. Russell & Norvig 2021, pp. 19–21.
  12. Touretzky, David S. and Pomerleau, Dean A. (1994). "Reconstructing Physical Symbol Systems". Cognitive Science 18(2): 345–353. https://www.cs.cmu.edu/~dst/pubs/simon-reply-www.ps.gz
  13. Nilsson 2007, p. 10.
  14. Searle 1999, p. [page needed].
  15. Dennett 1991, p. 435.
  16. Nilsson 2007, p. 1.
  17. Dreyfus 1979, p. 156.
  18. Dreyfus 1972; Dreyfus 1979; Dreyfus & Dreyfus 1986. See also Crevier 1993, pp. 120–132 and Fearn 2007, pp. 50–51.
  19. Brooks 1990, p. 3.
