The United States government's Strategic Computing Initiative funded research into advanced computer hardware and artificial intelligence from 1983 to 1993. The initiative was designed to support projects required to develop machine intelligence within a prescribed ten-year time frame, spanning chip design and manufacture, computer architecture, and artificial intelligence software. The Department of Defense spent a total of $1 billion on the project. [1]
The inspiration for the program was Japan's fifth generation computer project, an enormous initiative that set aside billions for research into computing and artificial intelligence. As with Sputnik in 1957, the American government saw the Japanese project as a challenge to its technological dominance. [2] The British government funded a program of its own around the same time, known as Alvey, and a consortium of U.S. companies funded another similar project, the Microelectronics and Computer Technology Corporation. [3] [4]
The goal of SCI, and of other contemporary projects, was nothing less than full machine intelligence. "The machine envisioned by SC", according to Alex Roland and Philip Shiman, "would run ten billion instructions per second to see, hear, speak, and think like a human. The degree of integration required would rival that achieved by the human brain, the most complex instrument known to man." [5]
The initiative was conceived as an integrated program, similar to the Apollo moon program, [5] where different subsystems would be created by various companies and academic projects and eventually brought together into a single integrated system. Roland and Shiman wrote that "While most research programs entail tactics or strategy, SC boasted grand strategy, a master plan for an entire campaign." [1]
The project was funded by the Defense Advanced Research Projects Agency and directed by the Information Processing Techniques Office (IPTO). By 1985 it had spent $100 million, and 92 projects were underway at 60 institutions: half in industry, half in universities and government labs. [2] Robert Kahn, who directed IPTO in those years, provided the project with its early leadership and inspiration. [6] Clint Kelly managed the SC Initiative for three years and developed many of the specific application programs for DARPA, such as the Autonomous Land Vehicle. [7]
By the late 1980s, it was clear that the project would fall short of realizing the hoped-for levels of machine intelligence. Program insiders pointed to issues with integration, organization, and communication. [8] When Jack Schwartz ascended to the leadership of IPTO in 1987, he cut funding to artificial intelligence research (the software component) "deeply and brutally", "eviscerating" the program (wrote Pamela McCorduck). [8] Schwartz felt that DARPA should focus its funding only on those technologies which showed the most promise. In his words, DARPA should "surf", rather than "dog paddle", and he felt strongly that AI was not "the next wave". [8]
The project was superseded in the 1990s by the Accelerated Strategic Computing Initiative and then by the Advanced Simulation and Computing Program. These later programs did not include artificial general intelligence as a goal, focusing instead on supercomputing for large-scale simulation, such as atomic bomb simulations. The Strategic Computing Initiative of the 1980s is unrelated to the National Strategic Computing Initiative established in 2015.
Although the program failed to meet its goal of high-level machine intelligence, [1] it did meet some of its specific technical objectives, such as autonomous land navigation. [9] The Autonomous Land Vehicle program and its sister Navlab project at Carnegie Mellon University, in particular, laid the scientific and technical foundation for many of the driverless vehicle programs that followed, such as the Demo II and III programs (ALV being Demo I), PerceptOR, and the DARPA Grand Challenge. [10] The combination of video cameras, laser scanners, and inertial navigation units pioneered by the SCI ALV program forms the basis of almost all commercial driverless car development today. The initiative also advanced the state of the art in computer hardware to a considerable degree.
On the software side, the initiative funded development of the Dynamic Analysis and Replanning Tool (DART), a program that handled logistics using artificial intelligence techniques. It was a major success, saving the Department of Defense billions of dollars during Operation Desert Storm. [4] Introduced in 1991, DART had by 1995 offset the monetary equivalent of all funds DARPA had channeled into AI research for the previous 30 years combined. [11] [12]
Artificial intelligence (AI), in its broadest sense, is intelligence exhibited by machines, particularly computer systems. It is a field of research in computer science that develops and studies methods and software that enable machines to perceive their environment and use learning and intelligence to take actions that maximize their chances of achieving defined goals. Such machines may be called AIs.
The Chinese room argument holds that a computer executing a program cannot have a mind, understanding, or consciousness, regardless of how intelligently or human-like the program may make the computer behave. The argument was presented in a 1980 paper by the philosopher John Searle entitled "Minds, Brains, and Programs" and published in the journal Behavioral and Brain Sciences. Before Searle, similar arguments had been presented by figures including Gottfried Wilhelm Leibniz (1714), Anatoly Dneprov (1961), Lawrence Davis (1974) and Ned Block (1978). Searle's version has been widely discussed in the years since. The centerpiece of Searle's argument is a thought experiment known as the Chinese room.
The Defense Advanced Research Projects Agency (DARPA) is a research and development agency of the United States Department of Defense responsible for the development of emerging technologies for use by the military. Originally known as the Advanced Research Projects Agency (ARPA), the agency was created on February 7, 1958, by President Dwight D. Eisenhower in response to the Soviet launching of Sputnik 1 in 1957. By collaborating with academia, industry, and government partners, DARPA formulates and executes research and development projects to expand the frontiers of technology and science, often beyond immediate U.S. military requirements. The organization's name first changed from its founding name, ARPA, to DARPA in March 1972; it changed back to ARPA in February 1993, then reverted to DARPA in March 1996.
Joseph Carl Robnett Licklider, known simply as J. C. R. or "Lick", was an American psychologist and computer scientist who is considered to be among the most prominent figures in computer science development and general computing history.
Allen Newell was an American researcher in computer science and cognitive psychology at the RAND Corporation and at Carnegie Mellon University's School of Computer Science, Tepper School of Business, and Department of Psychology. He contributed to the Information Processing Language (1956) and two of the earliest AI programs, the Logic Theorist (1956) and the General Problem Solver (1957). He was awarded the ACM's A.M. Turing Award along with Herbert A. Simon in 1975 for their contributions to artificial intelligence and the psychology of human cognition.
In artificial intelligence, symbolic artificial intelligence is the term for the collection of all methods in artificial intelligence research that are based on high-level symbolic (human-readable) representations of problems, logic and search. Symbolic AI used tools such as logic programming, production rules, semantic nets and frames, and it developed applications such as knowledge-based systems, symbolic mathematics, automated theorem provers, ontologies, the semantic web, and automated planning and scheduling systems. The Symbolic AI paradigm led to seminal ideas in search, symbolic programming languages, agents, multi-agent systems, the semantic web, and the strengths and limitations of formal knowledge and reasoning systems.
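The production-rule style mentioned above can be illustrated with a minimal forward-chaining sketch in Python (a generic, hypothetical example, not drawn from any particular symbolic AI system): rules whose conditions are all present in a working memory of facts fire and add their conclusions, and the process repeats until nothing new can be derived.

```python
# Minimal forward-chaining production system (illustrative sketch only).
# Each rule is (conditions, conclusion): when every condition is already a
# known fact, the conclusion is added to working memory.

rules = [
    ({"has_wheels", "has_engine"}, "is_vehicle"),
    ({"is_vehicle", "drives_itself"}, "is_autonomous_vehicle"),
]

def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"has_wheels", "has_engine", "drives_itself"}, rules))
# Derives 'is_vehicle' and then 'is_autonomous_vehicle' from the given facts.
```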
In the history of artificial intelligence (AI), neat and scruffy are two contrasting approaches to AI research. The distinction was made in the 1970s, and was a subject of discussion until the mid-1980s.
Computer Science and Artificial Intelligence Laboratory (CSAIL) is a research institute at the Massachusetts Institute of Technology (MIT) formed by the 2003 merger of the Laboratory for Computer Science (LCS) and the Artificial Intelligence Laboratory. Housed within the Ray and Maria Stata Center, CSAIL is the largest on-campus laboratory as measured by research scope and membership. It is part of the Schwarzman College of Computing but is also overseen by the MIT Vice President of Research.
Artificial general intelligence (AGI) is a type of artificial intelligence (AI) that matches or surpasses human cognitive capabilities across a wide range of cognitive tasks. This contrasts with narrow AI, which is limited to specific tasks. Artificial superintelligence (ASI), on the other hand, refers to AGI that greatly exceeds human cognitive capabilities. AGI is considered one of the definitions of strong AI.
A physical symbol system takes physical patterns (symbols), combining them into structures (expressions) and manipulating them to produce new expressions.
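This definition can be made concrete with a small Python sketch (the names and data here are invented purely for illustration): symbols are atomic tokens, expressions are nested structures built from symbols, and manipulation is a process that rewrites one expression into another.

```python
# Minimal sketch of a physical symbol system (illustrative only).
# Symbols are plain strings; expressions are nested tuples of symbols.

Symbol = str

def substitute(expr, bindings):
    """Produce a new expression by replacing symbols according to `bindings`."""
    if isinstance(expr, Symbol):
        return bindings.get(expr, expr)
    return tuple(substitute(part, bindings) for part in expr)

# Combine symbols into a structure (an expression) ...
rule = ("implies", ("and", "P", "Q"), "P")
# ... then manipulate it to produce a new expression.
instance = substitute(rule, {"P": "it_rains", "Q": "it_is_cold"})
print(instance)  # ('implies', ('and', 'it_rains', 'it_is_cold'), 'it_rains')
```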
In intelligence and artificial intelligence, an intelligent agent (IA) is an agent that perceives its environment, takes actions autonomously in order to achieve goals, and may improve its performance with learning or acquiring knowledge.
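A minimal perceive-decide-act loop makes this definition concrete; the environment, reward signal, and class below are hypothetical and exist only for the sketch.

```python
import random

# Sketch of an intelligent agent: it perceives, acts toward a goal, and
# improves with experience by tracking the average reward of each action.

class SimpleLearningAgent:
    def __init__(self, actions):
        self.values = {a: 0.0 for a in actions}   # estimated worth of each action
        self.counts = {a: 0 for a in actions}

    def act(self, percept):
        # Explore occasionally; otherwise exploit the best-known action.
        if random.random() < 0.1:
            return random.choice(list(self.values))
        return max(self.values, key=self.values.get)

    def learn(self, action, reward):
        self.counts[action] += 1
        self.values[action] += (reward - self.values[action]) / self.counts[action]

agent = SimpleLearningAgent(actions=["left", "right"])
for step in range(200):
    percept = {"step": step}                        # what the agent observes
    action = agent.act(percept)
    reward = 1.0 if action == "right" else 0.0      # toy environment feedback
    agent.learn(action, reward)

print(agent.values)  # "right" ends up with the higher estimated value
```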
The history of artificial intelligence (AI) began in antiquity, with myths, stories, and rumors of artificial beings endowed with intelligence or consciousness by master craftsmen. The study of logic and formal reasoning from antiquity to the present led directly to the invention of the programmable digital computer in the 1940s, a machine based on abstract mathematical reasoning. This device and the ideas behind it inspired scientists to begin discussing the possibility of building an electronic brain.
The philosophy of artificial intelligence is a branch of the philosophy of mind and the philosophy of computer science that explores artificial intelligence and its implications for knowledge and understanding of intelligence, ethics, consciousness, epistemology, and free will. Furthermore, the technology is concerned with the creation of artificial animals or artificial people, so the discipline is of considerable interest to philosophers. These factors contributed to the emergence of the philosophy of artificial intelligence.
In the history of artificial intelligence, an AI winter is a period of reduced funding and interest in artificial intelligence research. The field has experienced several hype cycles, followed by disappointment and criticism, followed by funding cuts, followed by renewed interest years or even decades later.
The following outline is provided as an overview of and topical guide to artificial intelligence:
Hubert Dreyfus was a critic of artificial intelligence research. In a series of papers and books, including Alchemy and AI (1965), What Computers Can't Do and Mind over Machine (1986), he presented a pessimistic assessment of AI's progress and a critique of the philosophical foundations of the field. Dreyfus' objections are discussed in most introductions to the philosophy of artificial intelligence, including Russell & Norvig (2021), a standard AI textbook, and in Fearn (2007), a survey of contemporary philosophy.
This is a timeline of artificial intelligence, sometimes alternatively called synthetic intelligence.
Logic Theorist is a computer program written in 1956 by Allen Newell, Herbert A. Simon, and Cliff Shaw. It was the first program deliberately engineered to perform automated reasoning, and has been described as "the first artificial intelligence program". Logic Theorist proved 38 of the first 52 theorems in chapter two of Alfred North Whitehead and Bertrand Russell's Principia Mathematica, and found new and shorter proofs for some of them.
The history of natural language processing describes the advances of natural language processing. There is some overlap with the history of machine translation, the history of speech recognition, and the history of artificial intelligence.