Strategic Computing Initiative

The United States government's Strategic Computing Initiative funded research into advanced computer hardware and artificial intelligence from 1983 to 1993. The initiative was designed to support the range of projects required to develop machine intelligence within a prescribed ten-year time frame, from chip design and manufacture and computer architecture to artificial intelligence software. The Department of Defense spent a total of $1 billion on the project. [1]

The inspiration for the program was Japan's fifth generation computer project, an enormous initiative that set aside billions for research into computing and artificial intelligence. As with Sputnik in 1957, the American government saw the Japanese project as a challenge to its technological dominance. [2] The British government funded a similar program of its own around the same time, known as Alvey, and a consortium of U.S. companies funded another comparable effort, the Microelectronics and Computer Technology Corporation. [3] [4]

The goal of SCI, and other contemporary projects, was nothing less than full machine intelligence. "The machine envisioned by SC", according to Alex Roland and Philip Shiman, "would run ten billion instructions per second to see, hear, speak, and think like a human. The degree of integration required would rival that achieved by the human brain, the most complex instrument known to man." [5]

The initiative was conceived as an integrated program, similar to the Apollo moon program, [5] where different subsystems would be created by various companies and academic projects and eventually brought together into a single integrated system. Roland and Shiman wrote that "While most research programs entail tactics or strategy, SC boasted grand strategy, a master plan for an entire campaign." [1]

The project was funded by the Defense Advanced Research Projects Agency and directed by the Information Processing Technology Office (IPTO). By 1985 it had spent $100 million, and 92 projects were underway at 60 institutions: half in industry, half in universities and government labs. [2] Robert Kahn, who directed IPTO in those years, provided the project with its early leadership and inspiration. [6] Clint Kelly managed the SC Initiative for three years and developed many of the specific application programs for DARPA, such as the Autonomous Land Vehicle. [7]

By the late 1980s, it was clear that the project would fall short of realizing the hoped-for levels of machine intelligence. Program insiders pointed to issues with integration, organization, and communication. [8] When Jack Schwartz became director of IPTO in 1987, he cut funding to artificial intelligence research (the software component) "deeply and brutally", "eviscerating" the program (wrote Pamela McCorduck). [8] Schwartz felt that DARPA should focus its funding only on those technologies that showed the most promise. In his words, DARPA should "surf" rather than "dog paddle", and he felt strongly that AI was not "the next wave". [8]

Although the program failed to meet its goal of high-level machine intelligence, [1] it did meet some of its specific technical objectives, such as autonomous land navigation. [9] In particular, the Autonomous Land Vehicle (ALV) program and its sister Navlab project at Carnegie Mellon University laid the scientific and technical foundation for many of the driverless vehicle programs that came after it, such as the Demo II and III programs (ALV being Demo I), PerceptOR, and the DARPA Grand Challenge. [10] The combination of video cameras, laser scanners, and inertial navigation units pioneered by the ALV program forms the basis of almost all commercial driverless car development today. The initiative also advanced the state of the art in computer hardware considerably. On the software side, it funded development of the Dynamic Analysis and Replanning Tool (DART), a program that handled logistics using artificial intelligence techniques. DART was a major success, saving the Department of Defense billions of dollars during Desert Storm. [4]

The project was superseded in the 1990s by the Accelerated Strategic Computing Initiative and later by the Advanced Simulation and Computing Program. These later programs did not include artificial general intelligence as a goal, focusing instead on supercomputing for large-scale simulation, such as nuclear weapons simulations. The Strategic Computing Initiative of the 1980s is unrelated to the National Strategic Computing Initiative announced in 2015.

Notes

  1. Roland & Shiman 2002, p. 2.
  2. McCorduck 2004, pp. 426–429.
  3. Crevier 1993, p. 240.
  4. Russell & Norvig 2003, p. 25.
  5. Roland & Shiman 2002, p. 4.
  6. Roland & Shiman 2002, p. 7.
  7. Roland, Alex; Shiman, Philip (2002). Strategic Computing: DARPA and the Quest for Machine Intelligence, 1983–1993. Cambridge, Mass.: MIT Press. ISBN 0262182262. OCLC 48449800.
  8. McCorduck 2004, pp. 430–431.
  9. https://www.cs.ucsb.edu/~mturk/Papers/ALV.pdf
  10. Technology Development for Army Unmanned Ground Vehicles. National Research Council (U.S.). Washington, D.C.: National Academies Press. 2002. ISBN 0309503655. OCLC 56118249.
