The philosophy of artificial intelligence is a branch of the philosophy of mind and the philosophy of computer science [1] that explores artificial intelligence and its implications for knowledge and understanding of intelligence, ethics, consciousness, epistemology, and free will. [2] [3] Furthermore, the field is concerned with the creation of artificial animals or artificial people (or, at least, artificial creatures; see artificial life), so the discipline is of considerable interest to philosophers. [4] These factors contributed to the emergence of the philosophy of artificial intelligence.
The philosophy of artificial intelligence attempts to answer questions such as the following: [5] Can a machine act intelligently, solving any problem that a person would solve by thinking? Are human intelligence and machine intelligence the same, and is the human brain essentially a computer? Can a machine have a mind, mental states, and consciousness in the same sense that a human being can?
Questions like these reflect the divergent interests of AI researchers, cognitive scientists and philosophers respectively. The scientific answers to these questions depend on the definition of "intelligence" and "consciousness" and exactly which "machines" are under discussion.
Important propositions in the philosophy of AI include Turing's "polite convention", the Dartmouth proposal, Newell and Simon's physical symbol system hypothesis, Searle's strong AI hypothesis, and Hobbes' claim that reasoning is "nothing more than reckoning"; each is discussed below.
Is it possible to create a machine that can solve all the problems humans solve using their intelligence? This question defines the scope of what machines could do in the future and guides the direction of AI research. It concerns only the behavior of machines and ignores the issues of interest to psychologists, cognitive scientists and philosophers, raising a further question: does it matter whether a machine is really thinking, as a person thinks, rather than just producing outcomes that appear to result from thinking? [11]
The basic position of most AI researchers is summed up in this statement, which appeared in the proposal for the Dartmouth workshop of 1956: "every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it."
Arguments against the basic premise must show that building a working AI system is impossible because there is some practical limit to the abilities of computers or that there is some special quality of the human mind that is necessary for intelligent behavior and yet cannot be duplicated by a machine (or by the methods of current AI research). Arguments in favor of the basic premise must show that such a system is possible.
It is also possible to sidestep the connection between the two parts of the above proposal. For instance, machine learning, beginning with Turing's famous child machine proposal, [12] essentially achieves the desired feature of intelligence without a precise design-time description of exactly how it would work. Accounts of robot tacit knowledge [13] eliminate the need for a precise description altogether.
The first step to answering the question is to clearly define "intelligence".
Alan Turing [15] reduced the problem of defining intelligence to a simple question about conversation. He suggests that if a machine can answer any question posed to it, using the same words that an ordinary person would, then we may call that machine intelligent. A modern version of his experimental design would use an online chat room, where one of the participants is a real person and one of the participants is a computer program. The program passes the test if no one can tell which of the two participants is human. [6] Turing notes that no one (except philosophers) ever asks the question "can people think?" He writes "instead of arguing continually over this point, it is usual to have a polite convention that everyone thinks". [16] Turing's test extends this polite convention to machines: if a machine behaves as intelligently as a human being, then it is as intelligent as a human being.
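For readers who think in code, here is a minimal sketch of the imitation-game protocol described above, under stated assumptions: the Responder interface, the run_imitation_game function and the ask_judge callback are illustrative inventions, not part of Turing's paper or of any standard library.

```python
import random

class Responder:
    """Illustrative interface: anything that can answer a text message."""
    def reply(self, message: str) -> str:
        raise NotImplementedError

def run_imitation_game(questions, human: Responder, machine: Responder, ask_judge):
    """Return True if the judge fails to identify which responder is the machine."""
    contestants = {"A": human, "B": machine}
    if random.random() < 0.5:                      # hide which label is which
        contestants = {"A": machine, "B": human}
    transcript = []
    for question in questions:
        for label, responder in contestants.items():
            transcript.append((label, question, responder.reply(question)))
    guess = ask_judge(transcript)                  # judge returns "A" or "B" as their pick for the machine
    machine_label = next(l for l, r in contestants.items() if r is machine)
    return guess != machine_label                  # the program "passes" if the judge guessed wrong
```

The design choice that matters is the last line: the test scores only indistinguishability from the human participant, not the correctness of the answers.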
One criticism of the Turing test is that it only measures the "humanness" of the machine's behavior, rather than the "intelligence" of the behavior. Since human behavior and intelligent behavior are not exactly the same thing, the test fails to measure intelligence. Stuart J. Russell and Peter Norvig write that "aeronautical engineering texts do not define the goal of their field as 'making machines that fly so exactly like pigeons that they can fool other pigeons'". [17]
Twenty-first century AI research defines intelligence in terms of goal-directed behavior. It views intelligence as a set of problems that the machine is expected to solve – the more problems it can solve, and the better its solutions are, the more intelligent the program is. AI founder John McCarthy defined intelligence as "the computational part of the ability to achieve goals in the world." [18]
Stuart Russell and Peter Norvig formalized this definition using abstract intelligent agents. An "agent" is something which perceives and acts in an environment. A "performance measure" defines what counts as success for the agent. [19] If an agent acts so as to maximize the expected value of a performance measure based on past experience and knowledge, then it is intelligent.
Definitions like this one try to capture the essence of intelligence. They have the advantage that, unlike the Turing test, they do not also test for unintelligent human traits such as making typing mistakes. [21] They have the disadvantage that they can fail to differentiate between "things that think" and "things that do not". By this definition, even a thermostat has a rudimentary intelligence. [22]
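As a rough illustration of the agent-and-performance-measure formalization, and of why a thermostat counts as rudimentarily intelligent under it, the sketch below assumes a hypothetical ThermostatAgent class and a crude environment model; it is not Russell and Norvig's code.

```python
# Illustrative sketch of the "intelligent agent" abstraction described above:
# an agent maps percepts to actions, and a performance measure scores the outcome.
# The class name, setpoint and environment model are assumptions for this example.

class ThermostatAgent:
    def __init__(self, setpoint: float = 20.0):
        self.setpoint = setpoint

    def act(self, percept_temperature: float) -> str:
        # The entire "intelligence" of this agent is one comparison.
        return "heat_on" if percept_temperature < self.setpoint else "heat_off"

def performance_measure(temperature_log, setpoint: float = 20.0) -> float:
    """Higher is better: negative mean absolute deviation from the setpoint."""
    return -sum(abs(t - setpoint) for t in temperature_log) / len(temperature_log)

# The agent acts in a toy environment for a few time steps.
temperature, log = 15.0, []
agent = ThermostatAgent()
for _ in range(10):
    action = agent.act(temperature)
    temperature += 1.0 if action == "heat_on" else -0.5   # crude environment model
    log.append(temperature)
print(performance_measure(log))
```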
Hubert Dreyfus describes this argument as claiming that "if the nervous system obeys the laws of physics and chemistry, which we have every reason to suppose it does, then ... we ... ought to be able to reproduce the behavior of the nervous system with some physical device". [23] This argument, first introduced as early as 1943 [24] and vividly described by Hans Moravec in 1988, [25] is now associated with futurist Ray Kurzweil, who estimates that computer power will be sufficient for a complete brain simulation by the year 2029. [26] A non-real-time simulation of a thalamocortical model that has the size of the human brain (10¹¹ neurons) was performed in 2005, [27] and it took 50 days to simulate 1 second of brain dynamics on a cluster of 27 processors.
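A back-of-the-envelope calculation from the figures quoted above gives the slowdown factor of that 2005 simulation; the snippet below is plain arithmetic on those numbers and nothing more.

```python
# Slowdown factor of the 2005 thalamocortical simulation quoted above:
# 50 days of wall-clock time were needed per 1 second of simulated brain dynamics.
wall_clock_seconds = 50 * 24 * 60 * 60   # 50 days expressed in seconds
simulated_seconds = 1
slowdown = wall_clock_seconds / simulated_seconds
print(f"{slowdown:,.0f}x slower than real time")   # ~4,320,000x
```

In other words, the simulation ran roughly 4.3 million times slower than real time.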
Even AI's harshest critics (such as Hubert Dreyfus and John Searle) agree that a brain simulation is possible in theory. [lower-alpha 1] However, Searle points out that, in principle, anything can be simulated by a computer; thus, bringing the definition to its breaking point leads to the conclusion that any process at all can technically be considered "computation". "What we wanted to know is what distinguishes the mind from thermostats and livers," he writes. [30] Thus, merely simulating the functioning of a living brain would in itself be an admission of ignorance regarding intelligence and the nature of the mind, like trying to build a jet airliner by copying a living bird precisely, feather by feather, with no theoretical understanding of aeronautical engineering. [31]
In 1963, Allen Newell and Herbert A. Simon proposed that "symbol manipulation" was the essence of both human and machine intelligence. They wrote: "A physical symbol system has the necessary and sufficient means for general intelligent action."
This claim is very strong: it implies both that human thinking is a kind of symbol manipulation (because a symbol system is necessary for intelligence) and that machines can be intelligent (because a symbol system is sufficient for intelligence). [32] Another version of this position was described by philosopher Hubert Dreyfus, who called it "the psychological assumption": the mind can be viewed as a device operating on bits of information according to formal rules.
The "symbols" that Newell, Simon and Dreyfus discussed were word-like and high level—symbols that directly correspond with objects in the world, such as <dog> and <tail>. Most AI programs written between 1956 and 1990 used this kind of symbol. Modern AI, based on statistics and mathematical optimization, does not use the high-level "symbol processing" that Newell and Simon discussed.
These arguments show that human thinking does not consist (solely) of high level symbol manipulation. They do not show that artificial intelligence is impossible, only that more than symbol processing is required.
In 1931, Kurt Gödel proved with an incompleteness theorem that it is always possible to construct a "Gödel statement" that a given consistent formal system of logic (such as a high-level symbol manipulation program) could not prove. Despite being a true statement, the constructed Gödel statement is unprovable in the given system. (The truth of the constructed Gödel statement is contingent on the consistency of the given system; applying the same process to a subtly inconsistent system will appear to succeed, but will actually yield a false "Gödel statement" instead.) [citation needed] More speculatively, Gödel conjectured that the human mind can eventually correctly determine the truth or falsity of any well-grounded mathematical statement (including any possible Gödel statement), and that therefore the human mind's power is not reducible to a mechanism. [34] Philosopher John Lucas (since 1961) and Roger Penrose (since 1989) have championed this philosophical anti-mechanist argument. [35]
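Stated schematically in modern notation (a conventional summary, not a quotation of Gödel), the result is the following, where F is a consistent, effectively axiomatized system strong enough for arithmetic and Prov_F is its provability predicate:

```latex
\[
  F \nvdash G_F
  \quad\text{and}\quad
  F \nvdash \lnot G_F ,
  \qquad\text{where}\qquad
  G_F \;\leftrightarrow\; \lnot\,\mathrm{Prov}_F\!\left(\ulcorner G_F \urcorner\right).
\]
```

If F is consistent, G_F is true but unprovable in F; this is the "Gödel statement" referred to above. If F is subtly inconsistent, the construction still goes through, but the resulting statement is false, as noted in the parenthetical remark above.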
Gödelian anti-mechanist arguments tend to rely on the innocuous-seeming claim that a system of human mathematicians (or some idealization of human mathematicians) is both consistent (completely free of error) and believes fully in its own consistency (and can make all logical inferences that follow from its own consistency, including belief in its Gödel statement). [citation needed] This is probably impossible for a Turing machine to do (see Halting problem); therefore, the Gödelian concludes that human reasoning is too powerful to be captured by a Turing machine, and by extension, any digital mechanical device.
However, the modern consensus in the scientific and mathematical community is that actual human reasoning is inconsistent; that any consistent "idealized version" H of human reasoning would logically be forced to adopt a healthy but counter-intuitive open-minded skepticism about the consistency of H (otherwise H is provably inconsistent); and that Gödel's theorems do not lead to any valid argument that humans have mathematical reasoning capabilities beyond what a machine could ever duplicate. [36] [37] [38] This consensus that Gödelian anti-mechanist arguments are doomed to failure is laid out strongly in Artificial Intelligence: "any attempt to utilize (Gödel's incompleteness results) to attack the computationalist thesis is bound to be illegitimate, since these results are quite consistent with the computationalist thesis." [39]
Stuart Russell and Peter Norvig agree that Gödel's argument does not consider the nature of real-world human reasoning. It applies to what can theoretically be proved, given an infinite amount of memory and time. In practice, real machines (including humans) have finite resources and will have difficulty proving many theorems. It is not necessary to be able to prove everything in order to be an intelligent person. [40]
Less formally, Douglas Hofstadter, in his Pulitzer Prize-winning book Gödel, Escher, Bach: An Eternal Golden Braid, states that these "Gödel-statements" always refer to the system itself, drawing an analogy to the way the Epimenides paradox uses statements that refer to themselves, such as "this statement is false" or "I am lying". [41] But, of course, the Epimenides paradox applies to anything that makes statements, whether it is a machine or a human, even Lucas himself. Consider: Lucas cannot assert the truth of this statement.
This statement is true but cannot be asserted by Lucas. This shows that Lucas himself is subject to the same limits that he describes for machines, as are all people, and so Lucas's argument is pointless. [43]
After concluding that human reasoning is non-computable, Penrose went on to controversially speculate that some kind of hypothetical non-computable processes involving the collapse of quantum mechanical states give humans a special advantage over existing computers. Existing quantum computers are only capable of reducing the complexity of Turing-computable tasks and are still restricted to tasks within the scope of Turing machines. [citation needed][clarification needed] By Penrose and Lucas's arguments, the fact that quantum computers are only able to complete Turing-computable tasks implies that they cannot be sufficient for emulating the human mind. [citation needed] Penrose therefore seeks some other process involving new physics, for instance quantum gravity, which might manifest new physics at the scale of the Planck mass via spontaneous quantum collapse of the wave function. These states, he suggested, occur both within neurons and also spanning more than one neuron. [44] However, other scientists point out that there is no plausible organic mechanism in the brain for harnessing any sort of quantum computation, and furthermore that the timescale of quantum decoherence seems too fast to influence neuron firing. [45]
Hubert Dreyfus argued that human intelligence and expertise depend primarily on fast intuitive judgements rather than step-by-step symbolic manipulation, and that these skills would never be captured in formal rules. [46]
Dreyfus's argument had been anticipated by Turing in his 1950 paper "Computing Machinery and Intelligence", where he had classified this as the "argument from the informality of behavior." [47] Turing argued in response that, just because we do not know the rules that govern a complex behavior, this does not mean that no such rules exist. He wrote: "we cannot so easily convince ourselves of the absence of complete laws of behaviour ... The only way we know of for finding such laws is scientific observation, and we certainly know of no circumstances under which we could say, 'We have searched enough. There are no such laws.'" [48]
Russell and Norvig point out that, in the years since Dreyfus published his critique, progress has been made towards discovering the "rules" that govern unconscious reasoning. [49] The situated movement in robotics research attempts to capture our unconscious skills at perception and attention. [50] Computational intelligence paradigms, such as neural nets and evolutionary algorithms, are mostly directed at simulating unconscious reasoning and learning. Statistical approaches to AI can make predictions which approach the accuracy of human intuitive guesses. Research into commonsense knowledge has focused on reproducing the "background" or context of knowledge. In fact, AI research in general has moved away from high-level symbol manipulation towards new models that are intended to capture more of our intuitive reasoning. [49]
Cognitive science and psychology eventually came to agree with Dreyfus' description of human expertise. Daniel Kahneman and others developed a similar theory identifying two "systems" that humans use to solve problems, which Kahneman called "System 1" (fast intuitive judgements) and "System 2" (slow deliberate step-by-step thinking). [51]
Although Dreyfus' views have been vindicated in many ways, the work in cognitive science and in AI was in response to specific problems in those fields and was not directly influenced by Dreyfus. Historian and AI researcher Daniel Crevier wrote that "time has proven the accuracy and perceptiveness of some of Dreyfus's comments. Had he formulated them less aggressively, constructive actions they suggested might have been taken much earlier." [52]
This is a philosophical question, related to the problem of other minds and the hard problem of consciousness. The question revolves around a position defined by John Searle as "strong AI": a physical symbol system can have a mind and mental states.
Searle distinguished this position from what he called "weak AI": a physical symbol system can (merely) act intelligently.
Searle introduced the terms to isolate strong AI from weak AI so he could focus on what he thought was the more interesting and debatable issue. He argued that even if we assume that we had a computer program that acted exactly like a human mind, there would still be a difficult philosophical question that needed to be answered. [9]
Neither of Searle's two positions are of great concern to AI research, since they do not directly answer the question "can a machine display general intelligence?" (unless it can also be shown that consciousness is necessary for intelligence). Turing wrote "I do not wish to give the impression that I think there is no mystery about consciousness… [b]ut I do not think these mysteries necessarily need to be solved before we can answer the question [of whether machines can think]." [53] Russell and Norvig agree: "Most AI researchers take the weak AI hypothesis for granted, and don't care about the strong AI hypothesis." [54]
There are a few researchers who believe that consciousness is an essential element in intelligence, such as Igor Aleksander, Stan Franklin, Ron Sun, and Pentti Haikonen, although their definition of "consciousness" strays very close to "intelligence". (See artificial consciousness.)
Before we can answer this question, we must be clear what we mean by "minds", "mental states" and "consciousness".
The words "mind" and "consciousness" are used by different communities in different ways. Some new age thinkers, for example, use the word "consciousness" to describe something similar to Bergson's "élan vital": an invisible, energetic fluid that permeates life and especially the mind. Science fiction writers use the word to describe some essential property that makes us human: a machine or alien that is "conscious" will be presented as a fully human character, with intelligence, desires, will, insight, pride and so on. (Science fiction writers also use the words "sentience", "sapience", "self-awareness" or "ghost"—as in the Ghost in the Shell manga and anime series—to describe this essential human property). For others [ who? ], the words "mind" or "consciousness" are used as a kind of secular synonym for the soul.
For philosophers, neuroscientists and cognitive scientists, the words are used in a way that is both more precise and more mundane: they refer to the familiar, everyday experience of having a "thought in your head", like a perception, a dream, an intention or a plan, and to the way we see something, know something, mean something or understand something. [55] "It's not hard to give a commonsense definition of consciousness," observes philosopher John Searle. [56] What is mysterious and fascinating is not so much what it is but how it is: how does a lump of fatty tissue and electricity give rise to this (familiar) experience of perceiving, meaning or thinking?
Philosophers call this the hard problem of consciousness. It is the latest version of a classic problem in the philosophy of mind called the "mind-body problem". [57] A related problem is the problem of meaning or understanding (which philosophers call "intentionality"): what is the connection between our thoughts and what we are thinking about (i.e. objects and situations out in the world)? A third issue is the problem of experience (or "phenomenology"): If two people see the same thing, do they have the same experience? Or are there things "inside their head" (called "qualia") that can be different from person to person? [58]
Neurobiologists believe all these problems will be solved as we begin to identify the neural correlates of consciousness: the actual relationship between the machinery in our heads and its collective properties, such as the mind, experience and understanding. Some of the harshest critics of artificial intelligence agree that the brain is just a machine, and that consciousness and intelligence are the result of physical processes in the brain. [59] The difficult philosophical question is this: can a computer program, running on a digital machine that shuffles the binary digits of zero and one, duplicate the ability of the neurons to create minds, with mental states (like understanding or perceiving), and ultimately, the experience of consciousness?
John Searle asks us to consider a thought experiment: suppose we have written a computer program that passes the Turing test and demonstrates general intelligent action. Suppose, specifically, that the program can converse in fluent Chinese. Write the program on 3x5 cards and give them to an ordinary person who does not speak Chinese. Lock the person into a room and have him follow the instructions on the cards. He will copy out Chinese characters and pass them in and out of the room through a slot. From the outside, it will appear that the Chinese room contains a fully intelligent person who speaks Chinese. The question is this: is there anyone (or anything) in the room that understands Chinese? That is, is there anything that has the mental state of understanding, or which has conscious awareness of what is being discussed in Chinese? The man is clearly not aware. The room cannot be aware. The cards certainly are not aware. Searle concludes that the Chinese room, or any other physical symbol system, cannot have a mind. [60]
Searle goes on to argue that actual mental states and consciousness require (yet to be described) "actual physical-chemical properties of actual human brains." [61] He argues there are special "causal properties" of brains and neurons that give rise to minds: in his words, "brains cause minds." [62]
Gottfried Leibniz made essentially the same argument as Searle in 1714, using the thought experiment of expanding the brain until it was the size of a mill. [63] In 1974, Lawrence Davis imagined duplicating the brain using telephone lines and offices staffed by people, and in 1978 Ned Block envisioned the entire population of China involved in such a brain simulation. This thought experiment is called "the Chinese Nation" or "the Chinese Gym". [64] Ned Block also proposed his Blockhead argument, which is a version of the Chinese room in which the program has been re-factored into a simple set of rules of the form "see this, do that", removing all mystery from the program.
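A deliberately trivial sketch of a "see this, do that" responder in the spirit of Block's argument may make the point vivid; the two phrasebook entries below are illustrative stand-ins, not the astronomically large table the thought experiment imagines.

```python
# A deliberately trivial "see this, do that" responder in the spirit of Block's
# Blockhead: behavior comes entirely from a lookup table, with nothing that
# could plausibly be called understanding. The entries are illustrative only.
RULES = {
    "你好": "你好！很高兴认识你。",
    "你会说中文吗": "会，我说得很流利。",
}

def respond(chinese_input: str) -> str:
    # "See this, do that": match the input, emit the stored output.
    return RULES.get(chinese_input, "对不起，我不明白。")

print(respond("你好"))
```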
Responses to the Chinese room emphasize several different points.
The computational theory of mind or "computationalism" claims that the relationship between mind and brain is similar (if not identical) to the relationship between a running program (software) and a computer (hardware). The idea has philosophical roots in Hobbes (who claimed reasoning was "nothing more than reckoning"), Leibniz (who attempted to create a logical calculus of all human ideas), Hume (who thought perception could be reduced to "atomic impressions") and even Kant (who analyzed all experience as controlled by formal rules). [71] The latest version is associated with philosophers Hilary Putnam and Jerry Fodor. [72]
This question bears on our earlier questions: if the human brain is a kind of computer then computers can be both intelligent and conscious, answering both the practical and philosophical questions of AI. In terms of the practical question of AI ("Can a machine display general intelligence?"), some versions of computationalism make the claim (as Hobbes wrote) that reasoning is nothing but reckoning.
In other words, our intelligence derives from a form of calculation, similar to arithmetic. This is the physical symbol system hypothesis discussed above, and it implies that artificial intelligence is possible. In terms of the philosophical question of AI ("Can a machine have mind, mental states and consciousness?"), most versions of computationalism claim that (as Stevan Harnad characterizes it) mental states are simply implementations of the right computer programs.
This is John Searle's "strong AI" discussed above, and it is the real target of the Chinese room argument (according to Harnad). [73]
If "emotions" are defined only in terms of their effect on behavior or on how they function inside an organism, then emotions can be viewed as a mechanism that an intelligent agent uses to maximize the utility of its actions. Given this definition of emotion, Hans Moravec believes that "robots in general will be quite emotional about being nice people". [74] Fear is a source of urgency. Empathy is a necessary component of good human computer interaction. He says robots "will try to please you in an apparently selfless manner because it will get a thrill out of this positive reinforcement. You can interpret this as a kind of love." [74] Daniel Crevier writes "Moravec's point is that emotions are just devices for channeling behavior in a direction beneficial to the survival of one's species." [75]
"Self-awareness", as noted above, is sometimes used by science fiction writers as a name for the essential human property that makes a character fully human. Turing strips away all other properties of human beings and reduces the question to "can a machine be the subject of its own thought?" Can it think about itself? Viewed in this way, a program can be written that can report on its own internal states, such as a debugger. [76]
Turing reduces this to the question of whether a machine can "take us by surprise" and argues that this is obviously true, as any programmer can attest. [77] He notes that, with enough storage capacity, a computer can behave in an astronomical number of different ways. [78] It must be possible, even trivial, for a computer that can represent ideas to combine them in new ways. (Douglas Lenat's Automated Mathematician, as one example, combined ideas to discover new mathematical truths.) Kaplan and Haenlein suggest that machines can display scientific creativity, while it seems likely that humans will have the upper hand where artistic creativity is concerned. [79]
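Turing's storage-capacity point can be made with simple arithmetic: a machine with n bits of state can be in 2^n distinct configurations. The snippet below is not from Turing's paper; the kilobyte figure is chosen arbitrarily for illustration.

```python
# A machine with n bits of storage has 2**n possible internal states.
# Even one kilobyte of state yields a number with roughly 2,467 decimal digits,
# vastly exceeding, for example, the ~10**80 atoms in the observable universe.
bits = 1024 * 8                # one kilobyte of storage
num_states = 2 ** bits
print(len(str(num_states)))    # ~2467 digits
```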
In 2009, scientists at Aberystwyth University in Wales and the University of Cambridge in the U.K. designed a robot called Adam that they believe to be the first machine to independently come up with new scientific findings. [80] Also in 2009, researchers at Cornell developed Eureqa, a computer program that extrapolates formulas to fit input data, such as finding the laws of motion from a pendulum's motion.
This question (like many others in the philosophy of artificial intelligence) can be presented in two forms. "Hostility" can be defined in terms of function or behavior, in which case "hostile" becomes synonymous with "dangerous". Or it can be defined in terms of intent: can a machine "deliberately" set out to do harm? The latter is the question "can a machine have conscious states?" (such as intentions) in another form. [53]
The question of whether highly intelligent and completely autonomous machines would be dangerous has been examined in detail by futurists (such as the Machine Intelligence Research Institute). The obvious element of drama has also made the subject popular in science fiction, which has considered many different possible scenarios where intelligent machines pose a threat to mankind; see Artificial intelligence in fiction.
One issue is that machines may acquire the autonomy and intelligence required to be dangerous very quickly. Vernor Vinge has suggested that over just a few years, computers will suddenly become thousands or millions of times more intelligent than humans. He calls this "the Singularity". [81] He suggests that it may be somewhat, or possibly very, dangerous for humans. [82] This is discussed by a philosophy called Singularitarianism.
In 2009, academics and technical experts attended a conference to discuss the potential impact of robots and computers, and of the hypothetical possibility that they could become self-sufficient and able to make their own decisions. They discussed the possibility and the extent to which computers and robots might be able to acquire any level of autonomy, and to what degree they could use such abilities to pose a threat or hazard. They noted that some machines have acquired various forms of semi-autonomy, including being able to find power sources on their own and being able to independently choose targets to attack with weapons. They also noted that some computer viruses can evade elimination and have achieved "cockroach intelligence". They noted that self-awareness as depicted in science fiction is probably unlikely, but that there were other potential hazards and pitfalls. [81]
Some experts and academics have questioned the use of robots for military combat, especially when such robots are given some degree of autonomous functions. [83] The US Navy has funded a report which indicates that as military robots become more complex, there should be greater attention to implications of their ability to make autonomous decisions. [84] [85]
The president of the Association for the Advancement of Artificial Intelligence commissioned a study to look at this issue. [86] It points to programs like the Language Acquisition Device, which can emulate human interaction.
Some have suggested a need to build "Friendly AI", a term coined by Eliezer Yudkowsky, meaning that the advances which are already occurring with AI should also include an effort to make AI intrinsically friendly and humane. [87]
Turing said "It is customary ... to offer a grain of comfort, in the form of a statement that some peculiarly human characteristic could never be imitated by a machine. ... I cannot offer any such comfort, for I believe that no such bounds can be set." [88]
Turing noted that there are many arguments of the form "a machine will never do X", where X can be many things, such as:
Be kind, resourceful, beautiful, friendly, have initiative, have a sense of humor, tell right from wrong, make mistakes, fall in love, enjoy strawberries and cream, make someone fall in love with it, learn from experience, use words properly, be the subject of its own thought, have as much diversity of behaviour as a man, do something really new. [76]
Turing argues that these objections are often based on naive assumptions about the versatility of machines or are "disguised forms of the argument from consciousness". Writing a program that exhibits one of these behaviors "will not make much of an impression." [76] All of these arguments are tangential to the basic premise of AI, unless it can be shown that one of these traits is essential for general intelligence.
Finally, those who believe in the existence of a soul may argue that "Thinking is a function of man's immortal soul." Alan Turing called this "the theological objection". He writes:
In attempting to construct such machines we should not be irreverently usurping His power of creating souls, any more than we are in the procreation of children: rather we are, in either case, instruments of His will providing mansions for the souls that He creates. [89]
The discussion on the topic has been reignited as a result of recent claims made by Google's LaMDA artificial intelligence system that it is sentient and has a "soul". [90]
LaMDA (Language Model for Dialogue Applications) is an artificial intelligence system that creates chatbots—AI robots designed to communicate with humans—by gathering vast amounts of text from the internet and using algorithms to respond to queries in the most fluid and natural way possible.
The transcripts of conversations between scientists and LaMDA reveal that the AI system excels at this, providing answers to challenging topics about the nature of emotions, generating Aesop-style fables on the spot, and even describing its alleged fears. [91] Nearly all philosophers, however, doubt LaMDA's sentience. [92]
Some scholars argue that the AI community's dismissal of philosophy is detrimental. In the Stanford Encyclopedia of Philosophy, some philosophers argue that the role of philosophy in AI is underappreciated. [4] Physicist David Deutsch argues that without an understanding of philosophy or its concepts, AI development would suffer from a lack of progress. [93]
The main conference series on the issue is "Philosophy and Theory of AI" (PT-AI), run by Vincent C. Müller.
The main bibliography on the subject, with several sub-sections, is on PhilPapers.
A recent survey of the philosophy of AI is Müller (2023). [3]
Artificial intelligence (AI), in its broadest sense, is intelligence exhibited by machines, particularly computer systems. It is a field of research in computer science that develops and studies methods and software that enable machines to perceive their environment and use learning and intelligence to take actions that maximize their chances of achieving defined goals. Such machines may be called AIs.
The Chinese room argument holds that a computer executing a program cannot have a mind, understanding, or consciousness, regardless of how intelligently or human-like the program may make the computer behave. The argument was presented in a 1980 paper by the philosopher John Searle entitled "Minds, Brains, and Programs" and published in the journal Behavioral and Brain Sciences. Before Searle, similar arguments had been presented by figures including Gottfried Wilhelm Leibniz (1714), Anatoly Dneprov (1961), Lawrence Davis (1974) and Ned Block (1978). Searle's version has been widely discussed in the years since. The centerpiece of Searle's argument is a thought experiment known as the Chinese room.
The Age of Spiritual Machines: When Computers Exceed Human Intelligence is a non-fiction book by inventor and futurist Ray Kurzweil about artificial intelligence and the future course of humanity. First published in hardcover on January 1, 1999, by Viking, it has received attention from The New York Times, The New York Review of Books and The Atlantic. In the book Kurzweil outlines his vision for how technology will progress during the 21st century.
In the history of artificial intelligence, neat and scruffy are two contrasting approaches to artificial intelligence (AI) research. The distinction was made in the 1970s and was a subject of discussion until the mid-1980s.
"Computing Machinery and Intelligence" is a seminal paper written by Alan Turing on the topic of artificial intelligence. The paper, published in 1950 in Mind, was the first to introduce his concept of what is now known as the Turing test to the general public.
Artificial general intelligence (AGI) is a type of artificial intelligence (AI) that matches or surpasses human cognitive capabilities across a wide range of cognitive tasks. This contrasts with narrow AI, which is limited to specific tasks. AGI is considered one of the definitions of strong AI.
An artificial brain is software and hardware with cognitive abilities similar to those of the animal or human brain.
Synthetic intelligence (SI) is an alternative/opposite term for artificial intelligence emphasizing that the intelligence of machines need not be an imitation or in any way artificial; it can be a genuine form of intelligence. John Haugeland proposes an analogy with simulated diamonds and synthetic diamonds—only the synthetic diamond is truly a diamond. Synthetic means that which is produced by synthesis, combining parts to form a whole; colloquially, a human-made version of that which has arisen naturally. A "synthetic intelligence" would therefore be or appear human-made, but not a simulation.
A physical symbol system takes physical patterns (symbols), combining them into structures (expressions) and manipulating them to produce new expressions.
The history of artificial intelligence (AI) began in antiquity, with myths, stories and rumors of artificial beings endowed with intelligence or consciousness by master craftsmen. The study of logic and formal reasoning from antiquity to the present led directly to the invention of the programmable digital computer in the 1940s, a machine based on the abstract essence of mathematical reasoning. This device and the ideas behind it inspired a handful of scientists to begin seriously discussing the possibility of building an electronic brain.
The symbol grounding problem is a concept in the fields of artificial intelligence, cognitive science, philosophy of mind, and semantics. It addresses the challenge of connecting symbols, such as words or abstract representations, to the real-world objects or concepts they refer to. In essence, it is about how symbols acquire meaning in a way that is tied to the physical world. It is concerned with how it is that words get their meanings, and hence is closely related to the problem of what meaning itself really is. The problem of meaning is in turn related to the problem of how it is that mental states are meaningful, and hence to the problem of consciousness: what is the connection between certain physical systems and the contents of subjective experiences.
In philosophy of mind, the computational theory of mind (CTM), also known as computationalism, is a family of views that hold that the human mind is an information processing system and that cognition and consciousness together are a form of computation. It is closely related to functionalism, a broader theory that defines mental states by what they do rather than what they're made of.
The following outline is provided as an overview of and topical guide to artificial intelligence:
Hubert Dreyfus was a critic of artificial intelligence research. In a series of papers and books, including Alchemy and AI (1965), What Computers Can't Do and Mind over Machine (1986), he presented a pessimistic assessment of AI's progress and a critique of the philosophical foundations of the field. Dreyfus' objections are discussed in most introductions to the philosophy of artificial intelligence, including Russell & Norvig (2021), a standard AI textbook, and in Fearn (2007), a survey of contemporary philosophy.
This is a timeline of artificial intelligence, sometimes alternatively called synthetic intelligence.
Logic Theorist is a computer program written in 1956 by Allen Newell, Herbert A. Simon, and Cliff Shaw. It was the first program deliberately engineered to perform automated reasoning, and has been described as "the first artificial intelligence program". Logic Theorist proved 38 of the first 52 theorems in chapter two of Whitehead and Bertrand Russell's Principia Mathematica, and found new and shorter proofs for some of them.
The Turing test, originally called the imitation game by Alan Turing in 1950, is a test of a machine's ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human. Turing proposed that a human evaluator would judge natural language conversations between a human and a machine designed to generate human-like responses. The evaluator would be aware that one of the two partners in conversation was a machine, and all participants would be separated from one another. The conversation would be limited to a text-only channel, such as a computer keyboard and screen, so the result would not depend on the machine's ability to render words as speech. If the evaluator could not reliably tell the machine from the human, the machine would be said to have passed the test. The test results would not depend on the machine's ability to give correct answers to questions, only on how closely its answers resembled those a human would give. Since the Turing test is a test of indistinguishability in performance capacity, the verbal version generalizes naturally to all of human performance capacity, verbal as well as nonverbal (robotic).
AI@50, formally known as the "Dartmouth Artificial Intelligence Conference: The Next Fifty Years", was a conference organized by James Moor, commemorating the 50th anniversary of the Dartmouth workshop which effectively inaugurated the history of artificial intelligence. Five of the original ten attendees were present: Marvin Minsky, Ray Solomonoff, Oliver Selfridge, Trenchard More, and John McCarthy.
In the philosophy of artificial intelligence, GOFAI is classical symbolic AI, as opposed to other approaches, such as neural networks, situated robotics, narrow symbolic AI or neuro-symbolic AI. The term was coined by philosopher John Haugeland in his 1985 book Artificial Intelligence: The Very Idea.
The Penrose–Lucas argument is a logical argument partially based on a theory developed by mathematician and logician Kurt Gödel. In 1931, he proved that every effectively generated theory capable of proving basic arithmetic either fails to be consistent or fails to be complete. Because humans are said to be able to see the truth of a formal system's Gödel sentences, it is argued that the human mind cannot be computed by a Turing machine that works on Peano arithmetic, since such a machine cannot establish the truth value of its own Gödel sentence while human minds can. Mathematician Roger Penrose modified the argument in his first book on consciousness, The Emperor's New Mind (1989), where he used it to provide the basis of his theory of consciousness: orchestrated objective reduction.
These Gödelian anti-mechanist arguments are, however, problematic, and there is wide consensus that they fail.
...even if we grant that computers have limitations on what they can prove, there is no evidence that humans are immune from those limitations.