Movements in cognitive science are considered to be post-cognitivist if they are opposed to or move beyond the cognitivist theories posited by Noam Chomsky, Jerry Fodor, David Marr, and others.
Postcognitivists challenge tenets of cognitivism, including ontological dualism, representational realism, the view that cognition is independent of processes outside the mind and nervous system, the view that the electronic computer is an appropriate analogy for the mind, and the view that cognition occurs only within individuals.[1]
Researchers who have followed post-cognitive directions include James J. Gibson, Hubert Dreyfus, Gregory Bateson, Michael Turvey, Bradd Shore, Jerome Bruner, Vittorio Guidano, Humberto Maturana and Francisco Varela.[2]
Using the principles of Martin Heidegger's philosophy, Dreyfus was critical of cognitivism from the beginning. Despite continued resistance from old-school philosophers of cognition, he felt vindicated by the growth of new approaches. When Dreyfus' ideas were first introduced in the mid-1960s, they were met with ridicule and outright hostility.[3][4] By the 1980s, however, many of his perspectives were rediscovered by researchers working in robotics and the new field of connectionism—approaches now called "sub-symbolic" because they eschew early artificial intelligence (AI) research's emphasis on high-level symbols. Historian and AI researcher Daniel Crevier writes: "time has proven the accuracy and perceptiveness of some of Dreyfus's comments."[5] Dreyfus said in 2007, "I figure I won and it's over—they've given up."[6]
In Mind Over Machine (1986), written during the heyday of expert systems, Dreyfus analyzed the difference between human expertise and the programs that claimed to capture it. This expanded on ideas from What Computers Can't Do, where he had made a similar argument criticizing the "cognitive simulation" school of AI research practiced by Allen Newell and Herbert A. Simon in the 1960s.[citation needed]
Dreyfus argued that human problem solving and expertise depend on our background sense of the context, of what is important and interesting given the situation, rather than on the process of searching through combinations of possibilities to find what we need. In 1986, Dreyfus described this as the difference between "knowing-that" and "knowing-how", based on Heidegger's distinction between the present-at-hand and the ready-to-hand.[7]
Knowing-that is our conscious, step-by-step problem-solving abilities. We use these skills when we encounter a difficult problem that requires us to stop, step back and search through ideas one at a time. At moments like this, the ideas become very precise and simple: they become context-free symbols, which we manipulate using logic and language. These are the skills that Newell and Simon had demonstrated with both psychological experiments and computer programs. Dreyfus agreed that their programs adequately imitated the skills he calls "knowing-that".[citation needed]
Knowing-how, on the other hand, is the way we deal with things normally. We take actions without using conscious symbolic reasoning at all, as when we recognize a face, drive ourselves to work, or find the right thing to say. We seem to simply jump to the appropriate response, without considering any alternatives. This is the essence of expertise, Dreyfus argued: when our intuitions have been trained to the point that we forget the rules and simply "size up the situation" and react.[citation needed]
The human sense of the situation, according to Dreyfus, is based on our goals, our bodies, and our culture—all of our unconscious intuitions, attitudes, and knowledge about the world. This "context" or "background" (related to Heidegger's Dasein) is a form of knowledge that is not stored in our brains symbolically, but intuitively in some way. It affects what we notice and what we do not, what we expect, and what possibilities we do not consider: we discriminate between what is essential and inessential. The things that are inessential are relegated to our "fringe consciousness" (borrowing a phrase from William James): the millions of things we are aware of but are not really thinking about right now.[citation needed]
Dreyfus did not believe that AI programs, as they were implemented in the 1970s and 1980s, could capture this "background" or do the kind of fast problem solving that it allows. He argued that our unconscious knowledge could never be captured symbolically. If AI could not find a way to address these issues, then it was doomed to failure, an exercise in "tree climbing with one's eyes on the moon."[8]
Cognitive science is the interdisciplinary, scientific study of the mind and its processes. It examines the nature, the tasks, and the functions of cognition. Mental faculties of concern to cognitive scientists include language, perception, memory, attention, reasoning, and emotion; to understand these faculties, cognitive scientists borrow from fields such as linguistics, psychology, artificial intelligence, philosophy, neuroscience, and anthropology. The typical analysis of cognitive science spans many levels of organization, from learning and decision to logic and planning; from neural circuitry to modular brain organization. One of the fundamental concepts of cognitive science is that "thinking can best be understood in terms of representational structures in the mind and computational procedures that operate on those structures."
Non-cognitivism is the meta-ethical view that ethical sentences do not express propositions and thus cannot be true or false. A noncognitivist denies the cognitivist claim that "moral judgments are capable of being objectively true, because they describe some feature of the world". If moral statements cannot be true, and if one cannot know something that is not true, noncognitivism implies that moral knowledge is impossible.
Cognition is the "mental action or process of acquiring knowledge and understanding through thought, experience, and the senses". It encompasses all aspects of intellectual functions and processes such as: perception, attention, thought, imagination, intelligence, the formation of knowledge, memory and working memory, judgment and evaluation, reasoning and computation, problem-solving and decision-making, comprehension and production of language. Cognitive processes use existing knowledge to discover new knowledge.
Cognitivism is the meta-ethical view that ethical sentences express propositions and can therefore be true or false, which noncognitivists deny. Cognitivism is so broad a thesis that it encompasses moral realism, ethical subjectivism, and error theory.
In psychology, cognitivism is a theoretical framework for understanding the mind that gained credence in the 1950s. The movement was a response to behaviorism, which cognitivists said neglected to explain cognition. Cognitive psychology derived its name from the Latin cognoscere, referring to knowing and information; thus cognitive psychology is an information-processing psychology derived in part from earlier traditions of the investigation of thought and problem solving.
In cognitive psychology, information processing is an approach to the goal of understanding human thinking that treats cognition as essentially computational in nature, with the mind being the software and the brain being the hardware. It arose in the 1940s and 1950s, after World War II. The information processing approach in psychology is closely allied to the computational theory of mind in philosophy; it is also related to cognitivism in psychology and functionalism in philosophy.
In the history of artificial intelligence, neat and scruffy are two contrasting approaches to artificial intelligence (AI) research. The distinction was made in the 1970s and was a subject of discussion until the mid-1980s.
A superintelligence is a hypothetical agent that possesses intelligence surpassing that of the brightest and most gifted human minds, ranging from marginally smarter than the upper limits of human-level intelligence to vastly exceeding human cognitive capabilities. "Superintelligence" may also refer to a property of problem-solving systems whether or not these high-level intellectual competencies are embodied in agents that act in the world. A superintelligence may or may not be created by an intelligence explosion and associated with a technological singularity.
Hubert Lederer Dreyfus was an American philosopher and a professor of philosophy at the University of California, Berkeley. His main interests included phenomenology, existentialism and the philosophy of both psychology and literature, as well as the philosophical implications of artificial intelligence. He was widely known for his exegesis of Martin Heidegger, which critics labeled "Dreydegger".
Situated cognition is a theory that posits that knowing is inseparable from doing by arguing that all knowledge is situated in activity bound to social, cultural and physical contexts.
Computational cognition is the study of the computational basis of learning and inference by mathematical modeling, computer simulation, and behavioral experiments. In psychology, it is an approach which develops computational models based on experimental results. It seeks to understand the basis of the human method of processing information. Early on, computational cognitive scientists sought to bring back and create a scientific form of Brentano's psychology.
A physical symbol system takes physical patterns (symbols), combines them into structures (expressions), and manipulates them to produce new expressions.
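The definition above can be illustrated with a minimal sketch (not drawn from any particular system described here): symbols are atomic tokens, expressions are structured combinations of symbols, and a process rewrites known expressions into new ones. The rule name and data layout are illustrative assumptions.

```python
# Minimal sketch of a physical symbol system: atomic symbols (strings),
# structured expressions (tuples of symbols), and a process that manipulates
# expressions to produce new ones. Here the process is a modus ponens rule,
# chosen only as an illustrative example.

def modus_ponens(expressions):
    """From ('implies', p, q) together with p, derive the new expression q."""
    known = set(expressions)
    derived = set()
    for expr in expressions:
        # Only structured expressions of the form ('implies', p, q) fire the rule.
        if isinstance(expr, tuple) and len(expr) == 3 and expr[0] == 'implies':
            _, p, q = expr
            if p in known:
                derived.add(q)
    return derived

facts = {'rain', ('implies', 'rain', 'wet')}
print(modus_ponens(facts))  # derives {'wet'}
```

The point of the sketch is only that the manipulation is purely formal: the system matches and rewrites patterns without any appeal to what the symbols mean.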
The history of artificial intelligence (AI) began in antiquity, with myths, stories and rumors of artificial beings endowed with intelligence or consciousness by master craftsmen. Modern AI concepts were later developed by philosophers who attempted to describe human thought as a mechanical manipulation of symbols. This philosophical work culminated in the invention of the programmable digital computer in the 1940s, a machine based on the abstract essence of mathematical reasoning. This device and the ideas behind it inspired a handful of scientists to begin seriously discussing the possibility of building an electronic brain.
The philosophy of artificial intelligence is a branch of the philosophy of mind and the philosophy of computer science that explores artificial intelligence and its implications for knowledge and understanding of intelligence, ethics, consciousness, epistemology, and free will. Furthermore, the technology is concerned with the creation of artificial animals or artificial people, so the discipline is of considerable interest to philosophers. These factors contributed to the emergence of the philosophy of artificial intelligence.
Enactivism is a position in cognitive science that argues that cognition arises through a dynamic interaction between an acting organism and its environment. It claims that the environment of an organism is brought about, or enacted, by the active exercise of that organism's sensorimotor processes. "The key point, then, is that the species brings forth and specifies its own domain of problems ... this domain does not exist 'out there' in an environment that acts as a landing pad for organisms that somehow drop or parachute into the world. Instead, living beings and their environments stand in relation to each other through mutual specification or codetermination" (p. 198). "Organisms do not passively receive information from their environments, which they then translate into internal representations. Natural cognitive systems ... participate in the generation of meaning ... engaging in transformational and not merely informational interactions: they enact a world." These authors suggest that the increasing emphasis upon enactive terminology presages a new era in thinking about cognitive science. How the actions involved in enactivism relate to age-old questions about free will remains a topic of active debate.
Embodied embedded cognition (EEC) is a philosophical theoretical position in cognitive science, closely related to situated cognition, embodied cognition, embodied cognitive science and dynamical systems theory. The theory states that intelligent behaviour emerges from the interplay between brain, body and world. The world is not just the 'play-ground' on which the brain is acting. Rather, brain, body and world are equally important factors in the explanation of how particular intelligent behaviours come about in practice.
Hubert Dreyfus was a critic of artificial intelligence research. In a series of papers and books, including Alchemy and AI (1965), What Computers Can't Do and Mind over Machine (1986), he presented a pessimistic assessment of AI's progress and a critique of the philosophical foundations of the field. Dreyfus' objections are discussed in most introductions to the philosophy of artificial intelligence, including Russell & Norvig (2021), a standard AI textbook, and in Fearn (2007), a survey of contemporary philosophy.
Logic Theorist is a computer program written in 1956 by Allen Newell, Herbert A. Simon, and Cliff Shaw. It was the first program deliberately engineered to perform automated reasoning, and has been described as "the first artificial intelligence program". Logic Theorist proved 38 of the first 52 theorems in chapter two of Alfred North Whitehead and Bertrand Russell's Principia Mathematica, and found new and shorter proofs for some of them.
Embodied cognition is the concept suggesting that many features of cognition are shaped by the state and capacities of the organism. The cognitive features include a wide spectrum of cognitive functions, such as perception biases, memory recall, comprehension and high-level mental constructs and performance on various cognitive tasks. The bodily aspects involve the motor system, the perceptual system, the bodily interactions with the environment (situatedness), and the assumptions about the world built into the functional structure of the organism's brain and body.
In the philosophy of artificial intelligence, GOFAI is classical symbolic AI, as opposed to other approaches, such as neural networks, situated robotics, narrow symbolic AI or neuro-symbolic AI. The term was coined by philosopher John Haugeland in his 1985 book Artificial Intelligence: The Very Idea.