| Author | Marvin Minsky |
| --- | --- |
| Publisher | Simon & Schuster |
| Publication date | 1986 |
| ISBN | 0-671-60740-5 |
The Society of Mind is both the title of a 1986 book and the name of a theory of natural intelligence written and developed by Marvin Minsky.[1]
In his book of the same name, Minsky constructs a model of human intelligence step by step, built up from the interactions of simple parts called agents, which are themselves mindless. He describes the postulated interactions as constituting a "society of mind", hence the title.[2]
The work, which first appeared in 1986, was the first comprehensive description of Minsky's "society of mind" theory, which he began developing in the early 1970s. It is composed of 270 self-contained essays which are divided into 30 general chapters. The book was also made into a CD-ROM version.
In the process of explaining the society of mind, Minsky introduces a wide range of ideas and concepts. He develops theories about how processes such as language, memory, and learning work, and also covers concepts such as consciousness, the sense of self, and free will; because of this, many view The Society of Mind as a work of philosophy.
The book was not written to prove anything specific about AI or cognitive science, and does not reference physical brain structures. Instead, it is a collection of ideas about how the mind and thinking work on the conceptual level.
Minsky first started developing the theory with Seymour Papert in the early 1970s. Minsky said that the biggest source of ideas about the theory came from his work in trying to create a machine that uses a robotic arm, a video camera, and a computer to build with children's blocks.[3]
A core tenet of Minsky's philosophy is that "minds are what brains do". The society of mind theory views the human mind and any other naturally evolved cognitive systems as a vast society of individually simple processes known as agents. These processes are the fundamental thinking entities from which minds are built, and together produce the many abilities we attribute to minds. The great power in viewing a mind as a society of agents, as opposed to the consequence of some basic principle or some simple formal system, is that different agents can be based on different types of processes with different purposes, ways of representing knowledge, and methods for producing results.
This idea is perhaps best summarized by the following quote:
What magical trick makes us intelligent? The trick is that there is no trick. The power of intelligence stems from our vast diversity, not from any single, perfect principle. —Marvin Minsky, The Society of Mind, p. 308
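The agent-based picture can be made concrete with a small illustration. The following Python sketch is not taken from the book; it is a hypothetical toy, loosely modeled on Minsky's children's-blocks example, in which individually mindless agents (the names GRASP, RELEASE, ADD, and BUILDER are illustrative) combine to produce an ability that none of them has on its own.

```python
class Agent:
    """One simple process in the society: it does one small thing,
    possibly by activating other agents; no agent is intelligent on its own."""

    def __init__(self, name, action=None, subagents=()):
        self.name = name
        self.action = action                # optional primitive behaviour
        self.subagents = list(subagents)    # agents this one delegates to

    def activate(self, world):
        if self.action:
            self.action(world)
        for sub in self.subagents:
            sub.activate(world)


# Primitive, "mindless" agents that only move blocks.
def grasp(world):
    """Pick up the top block from the pile."""
    world["held"] = world["pile"].pop()

def release(world):
    """Put the held block on top of the tower."""
    world["tower"].append(world.pop("held"))

GRASP = Agent("GRASP", action=grasp)
RELEASE = Agent("RELEASE", action=release)

# ADD knows nothing about towers; it only sequences GRASP and RELEASE.
ADD = Agent("ADD", subagents=[GRASP, RELEASE])

# BUILDER knows nothing about blocks; it only calls ADD repeatedly.
BUILDER = Agent("BUILDER", subagents=[ADD, ADD, ADD])

world = {"pile": ["a", "b", "c"], "tower": []}
BUILDER.activate(world)
print(world["tower"])   # ['c', 'b', 'a'] -- a tower, though no single agent "understands" towers
```

In this toy, BUILDER knows nothing about grasping and GRASP knows nothing about towers; the tower-building ability belongs only to the society as a whole, which is the sense in which different agents with different purposes and ways of representing knowledge can jointly produce what we call a mind.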
Cognitive science is the interdisciplinary, scientific study of the mind and its processes. It examines the nature, the tasks, and the functions of cognition. Mental faculties of concern to cognitive scientists include perception, memory, attention, reasoning, language, and emotion; to understand these faculties, cognitive scientists borrow from fields such as psychology, artificial intelligence, philosophy, neuroscience, linguistics and anthropology. The typical analysis of cognitive science spans many levels of organization, from learning and decision-making to logic and planning; from neural circuitry to modular brain organization. One of the fundamental concepts of cognitive science is that "thinking can best be understood in terms of representational structures in the mind and computational procedures that operate on those structures."
Marvin Lee Minsky was an American cognitive and computer scientist concerned largely with research of artificial intelligence (AI). He co-founded the Massachusetts Institute of Technology's AI laboratory and wrote several texts concerning AI and philosophy.
Artificial consciousness, also known as machine consciousness, synthetic consciousness, or digital consciousness, is the consciousness hypothesized to be possible in artificial intelligence. It is also the corresponding field of study, which draws insights from philosophy of mind, philosophy of artificial intelligence, cognitive science and neuroscience.
Connectionism is an approach to the study of human mental processes and cognition that utilizes mathematical models known as connectionist networks or artificial neural networks.
In the history of artificial intelligence (AI), neat and scruffy are two contrasting approaches to AI research. The distinction was made in the 1970s, and was a subject of discussion until the mid-1980s.
Awareness, in philosophy and psychology, is a perception or knowledge of something. The concept is often synonymous with consciousness. However, one can be aware of something without being explicitly conscious of it, such as in the case of blindsight.
Emergentism is the belief in emergence, particularly as it involves consciousness and the philosophy of mind. A property of a system is said to be emergent if it is a new outcome of some other properties of the system and their interaction, while it is itself different from them. Within the philosophy of science, emergentism is analyzed both as it contrasts with and parallels reductionism. This philosophical theory suggests that higher-level properties and phenomena arise from the interactions and organization of lower-level entities yet are not reducible to these simpler components. It emphasizes the idea that the whole is more than the sum of its parts.
A K-line, or Knowledge-line, is a mental agent which represents an association of a group of other mental agents found active when a subject solves a certain problem or formulates a new idea. These were first described in Marvin Minsky's essay K-lines: A Theory of Memory, published in 1980 in the journal Cognitive Science:
When you "get an idea," or "solve a problem" ... you create what we shall call a K-line. ... When that K-line is later "activated", it reactivates ... mental agencies, creating a partial mental state "resembling the original."
"Whenever you 'get a good idea', solve a problem, or have a memorable experience, you activate a K-line to 'represent' it. A K-line is a wirelike structure that attaches itself to whichever mental agents are active when you solve a problem or have a good idea.
When you activate that K-line later, the agents attached to it are aroused, putting you into a 'mental state' much like the one you were in when you solved that problem or got that idea. This should make it relatively easy for you to solve new, similar problems!"
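The mechanism lends itself to a small illustration. The following Python sketch is hypothetical (it is not code from Minsky or from the essay): a K-line records which agents are active at a memorable moment, and activating it later re-arouses those same agents, approximating the original partial mental state. The agent names used here are made up for the example.

```python
class KLine:
    """Attaches to whichever agents are active at the moment it is created."""

    def __init__(self, label, active_agents):
        self.label = label
        self.attached = frozenset(active_agents)   # snapshot of the partial mental state

    def arouse(self, mind):
        # Re-activate the attached agents, putting the mind into a state
        # much like the one in which the idea originally occurred.
        mind.active_agents |= self.attached


class Mind:
    def __init__(self):
        self.active_agents = set()   # names of currently active agents
        self.k_lines = {}

    def remember(self, label):
        """Called when a problem is solved: create a K-line for this moment."""
        self.k_lines[label] = KLine(label, self.active_agents)

    def recall(self, label):
        """Activate a stored K-line, re-arousing the agents attached to it."""
        self.k_lines[label].arouse(self)


mind = Mind()
mind.active_agents = {"SEE-SHAPE", "COMPARE-SIZES", "STACK"}   # agents in use while solving
mind.remember("build-an-arch")

mind.active_agents = set()          # later: a blank mental state
mind.recall("build-an-arch")
print(sorted(mind.active_agents))   # the same agents are aroused again
```

Because the K-line stores only which agents were active, re-activating it recreates a state that resembles, rather than reproduces, the original one, mirroring the "partial mental state" described above.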
A cognitive architecture refers both to a theory about the structure of the human mind and to a computational instantiation of such a theory used in the fields of artificial intelligence (AI) and computational cognitive science. These formalized models can be used to further refine comprehensive theories of cognition and serve as the frameworks for useful artificial intelligence programs. Successful cognitive architectures include ACT-R and SOAR. Research on cognitive architectures as software instantiations of cognitive theories was initiated by Allen Newell in 1990.
Bicameral mentality is a hypothesis introduced by Julian Jaynes, who argued that human ancestors as late as the ancient Greeks did not regard emotions and desires as stemming from their own minds but as the consequences of the actions of gods external to themselves. The theory posits that the human mind once operated in a state in which cognitive functions were divided between one part of the brain that appears to be "speaking" and a second part that listens and obeys (a bicameral mind), and that the breakdown of this division gave rise to consciousness in humans. Jaynes presented the idea in his 1976 book The Origin of Consciousness in the Breakdown of the Bicameral Mind, in which he makes the case that a bicameral mentality was the normal and ubiquitous state of the human mind as recently as 3,000 years ago, near the end of the Mediterranean Bronze Age.
The cognitive revolution was an intellectual movement that began in the 1950s as an interdisciplinary study of the mind and its processes, from which emerged a new field known as cognitive science. The preexisting relevant fields were psychology, linguistics, computer science, anthropology, neuroscience, and philosophy. The approaches used were developed within the then-nascent fields of artificial intelligence, computer science, and neuroscience. In the 1960s, the Harvard Center for Cognitive Studies and the Center for Human Information Processing at the University of California, San Diego were influential in developing the academic study of cognitive science. By the early 1970s, the cognitive movement had surpassed behaviorism as a psychological paradigm. Furthermore, by the early 1980s the cognitive approach had become the dominant line of research inquiry across most branches in the field of psychology.
In philosophy of mind, the computational theory of mind (CTM), also known as computationalism, is a family of views holding that the human mind is an information-processing system and that cognition and consciousness together are a form of computation. It is closely related to functionalism, a broader theory that defines mental states by what they do rather than by what they are made of.
Embodied cognitive science is an interdisciplinary field of research, the aim of which is to explain the mechanisms underlying intelligent behavior. It comprises three main methodologies: the modeling of psychological and biological systems in a holistic manner that considers the mind and body as a single entity; the formation of a common set of general principles of intelligent behavior; and the experimental use of robotic agents in controlled environments.
The philosophy of mind is a branch of philosophy that deals with the nature of the mind and its relation to the body and the external world.
Enactivism is a position in cognitive science that argues that cognition arises through a dynamic interaction between an acting organism and its environment. It claims that the environment of an organism is brought about, or enacted, by the active exercise of that organism's sensorimotor processes. "The key point, then, is that the species brings forth and specifies its own domain of problems ... this domain does not exist 'out there' in an environment that acts as a landing pad for organisms that somehow drop or parachute into the world. Instead, living beings and their environments stand in relation to each other through mutual specification or codetermination" (p. 198). "Organisms do not passively receive information from their environments, which they then translate into internal representations. Natural cognitive systems ... participate in the generation of meaning ... engaging in transformational and not merely informational interactions: they enact a world." Proponents of enactivism suggest that the increasing emphasis upon enactive terminology presages a new era in thinking about cognitive science. How the actions involved in enactivism relate to age-old questions about free will remains a topic of active debate.
An outline of thought is an overview of and topical guide to thought (thinking).
The Emotion Machine: Commonsense Thinking, Artificial Intelligence, and the Future of the Human Mind is a 2006 book by cognitive scientist Marvin Minsky that elaborates and expands on Minsky's ideas as presented in his earlier book Society of Mind (1986).
Externalism is a group of positions in the philosophy of mind which argue that the conscious mind is not only the result of what is going on inside the nervous system but also of what occurs or exists outside the subject. It is contrasted with internalism, which holds that the mind emerges from neural activity alone. On the externalist view, the mind is not simply the brain or the functions of the brain.
The left-brain interpreter is a neuropsychological concept developed by the psychologist Michael S. Gazzaniga and the neuroscientist Joseph E. LeDoux. It refers to the construction of explanations by the left brain hemisphere in order to make sense of the world by reconciling new information with what was known before. The left-brain interpreter attempts to rationalize, reason and generalize new information it receives in order to relate the past to the present.
The Penrose–Lucas argument is a logical argument based in part on a theorem proved by the mathematician and logician Kurt Gödel. In 1931, Gödel proved that every effectively generated theory capable of proving basic arithmetic either fails to be consistent or fails to be complete. Because humans are able to see the truth of a formal system's Gödel sentence, it is argued that the human mind cannot be computed by a Turing machine working on Peano arithmetic, since such a machine cannot determine the truth value of its own Gödel sentence while human minds can. The mathematician Roger Penrose modified the argument in his first book on consciousness, The Emperor's New Mind (1989), where he used it to provide the basis of his theory of consciousness: orchestrated objective reduction.
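The skeleton of the argument can be laid out in a few steps. The following LaTeX sketch is an illustrative reconstruction rather than a formalization taken from Penrose or Lucas, and the second premise in particular is widely contested.

```latex
% Schematic reconstruction of the Penrose-Lucas argument (illustrative only).
\begin{enumerate}
  \item For any consistent, effectively generated formal system $F$ that
        includes Peano arithmetic, there is a G\"odel sentence $G_F$ such that
        $F \nvdash G_F$, even though $G_F$ is true if $F$ is consistent.
  \item (Contested premise) A human mathematician can recognize that $G_F$ is true.
  \item If human mathematical reasoning were exactly the system $F$, then
        recognizing the truth of $G_F$ would amount to $F \vdash G_F$,
        contradicting (1).
  \item Therefore human mathematical reasoning is not captured by any such $F$,
        and hence not by a Turing machine enumerating the theorems of $F$.
\end{enumerate}
```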