In the history of artificial intelligence (AI), neat and scruffy are two contrasting approaches to AI research. The distinction was made in the 1970s and was a subject of discussion until the mid-1980s. [1] [2] [3]
"Neats" use algorithms based on a single formal paradigms, such as logic, mathematical optimization or neural networks. Neats verify their programs are correct with theorems and mathematical rigor. Neat researchers and analysts tend to express the hope that this single formal paradigm can be extended and improved to achieve general intelligence and superintelligence.
"Scruffies" use any number of different algorithms and methods to achieve intelligent behavior. Scruffies rely on incremental testing to verify their programs and scruffy programming requires large amounts of hand coding or knowledge engineering. Scruffies have argued that general intelligence can only be implemented by solving a large number of essentially unrelated problems, and that there is no silver bullet that will allow programs to develop general intelligence autonomously.
John Brockman compares the neat approach to physics, in that it uses simple mathematical models as its foundation. The scruffy approach is more like biology, where much of the work involves studying and categorizing diverse phenomena. [a]
Modern AI has elements of both the scruffy and neat approaches. In the 1990s, AI researchers applied mathematical rigor to their programs, as the neats did. [5] [6] They also express the hope that there is a single paradigm (a "master algorithm") that will cause general intelligence and superintelligence to emerge. [7] But modern AI also resembles the scruffies: [8] modern machine learning applications require a great deal of hand-tuning and incremental testing; while the general algorithm is mathematically rigorous, accomplishing the specific goals of a particular application is not. Also, in the early 2000s, the field of software development embraced extreme programming, which is a modern version of the scruffy methodology: try things and test them, without wasting time looking for more elegant or general solutions.
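As a deliberately simplified illustration of that hand-tuning loop, the following Python sketch (my own example, not taken from any of the works cited here; the function and parameter names are invented) retrains a model under a few different settings and keeps whichever configuration happens to behave best.

import random

def train_and_score(learning_rate, hidden_units):
    """Stand-in for training a model and measuring its validation accuracy."""
    rng = random.Random(hash((learning_rate, hidden_units)))
    return rng.uniform(0.5, 0.95)       # placeholder for a real evaluation run

best = None
for lr in (1e-1, 1e-2, 1e-3):           # settings chosen by trial and error
    for units in (32, 64, 128):
        score = train_and_score(lr, units)
        if best is None or score > best[0]:
            best = (score, lr, units)   # keep whatever configuration works best

print("chosen configuration:", best)

Nothing in this loop is derived from first principles; the chosen configuration is justified only by having been tested and found to work, which is the scruffy criterion.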
The distinction between neat and scruffy was made by Roger Schank in the mid-1970s. Schank used the terms to distinguish his work on natural language processing (which represented commonsense knowledge in the form of large amorphous semantic networks) from the work of John McCarthy, Allen Newell, Herbert A. Simon, Robert Kowalski and others whose work was based on logic and formal extensions of logic. [2] Schank described himself as an AI scruffy. He made the same distinction in linguistics, arguing strongly against Chomsky's view of language. [a]
The distinction was also partly geographical and cultural: "scruffy" attributes were exemplified by AI research at MIT under Marvin Minsky in the 1970s. The laboratory was famously "freewheeling" and researchers often developed AI programs by spending long hours fine-tuning programs until they showed the required behavior. Important and influential "scruffy" programs developed at MIT included Joseph Weizenbaum's ELIZA, which behaved as if it spoke English, without any formal knowledge at all, and Terry Winograd's [b] SHRDLU, which could successfully answer queries and carry out actions in a simplified world consisting of blocks and a robot arm. [10] [11] SHRDLU, while successful, could not be scaled up into a useful natural language processing system, because it lacked a structured design. Maintaining a larger version of the program proved to be impossible, i.e. it was too scruffy to be extended.
Other AI laboratories (of which the largest were Stanford, Carnegie Mellon University and the University of Edinburgh) focused on logic and formal problem solving as a basis for AI. These institutions supported the work of John McCarthy, Herbert Simon, Allen Newell, Donald Michie, Robert Kowalski, and other "neats".
The contrast between MIT's approach and that of other laboratories was also described as a "procedural/declarative distinction". Programs like SHRDLU were designed as agents that carried out actions; they executed "procedures". Other programs were designed as inference engines that manipulated formal statements (or "declarations") about the world and translated these manipulations into actions.
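To make the procedural/declarative contrast concrete, here is a small illustrative sketch in Python; it is not taken from SHRDLU or from any logic system cited above, and the function, relation and block names are invented for the example. The procedural version embeds its knowledge of stacking in code that acts; the declarative version stores statements about the world and leaves the reasoning to a generic forward-chaining loop.

# Procedural style (SHRDLU-like): the knowledge of how to stack is written
# out as steps in code that carries out the action.
def stack(block, target, on):
    """Move `block` onto `target`; anything resting on `block` goes to the table."""
    for other, support in list(on.items()):
        if support == block:            # clear the block before moving it
            on[other] = "table"
    on[block] = target
    return on

# Declarative style (logic-based): the world is a set of statements, and a
# generic inference routine derives their consequences.
facts = {("on", "a", "b"), ("on", "b", "table")}

def derive(fs):
    """One rule: 'above' holds wherever 'on' holds, and 'above' is transitive."""
    new = {("above", x, y) for (p, x, y) in fs if p == "on"}
    new |= {("above", x, y)
            for (p1, x, z) in fs if p1 == "above"
            for (p2, z2, y) in fs if p2 == "above" and z == z2}
    return new

def infer(fs):
    """Apply the rule until no new statements appear (tiny forward chaining)."""
    while True:
        fresh = derive(fs) - fs
        if not fresh:
            return fs
        fs = fs | fresh

print(infer(set(facts)))    # the derived facts include ("above", "a", "table")

The second version keeps the statements separate from the engine that reasons over them, which is the property the logic-oriented laboratories valued; the first simply gets the job done, which is the scruffy virtue.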
In his 1983 presidential address to the Association for the Advancement of Artificial Intelligence, Nils Nilsson discussed the issue, arguing that "the field needed both". He wrote: "much of the knowledge we want our programs to have can and should be represented declaratively in some kind of declarative, logic-like formalism. Ad hoc structures have their place, but most of these come from the domain itself." Alex P. Pentland and Martin Fischler of SRI International concurred about the anticipated role of deduction and logic-like formalisms in future AI research, but not to the extent that Nilsson described. [12]
The scruffy approach was applied to robotics by Rodney Brooks in the mid-1980s. He advocated building robots that were, as he put it, Fast, Cheap and Out of Control, the title of a 1989 paper co-authored with Anita Flynn. Unlike earlier robots such as Shakey or the Stanford cart, they did not build up representations of the world by analyzing visual information with algorithms drawn from mathematical machine learning techniques, and they did not plan their actions using formalizations based on logic, such as the 'Planner' language. They simply reacted to their sensors in a way that tended to help them survive and move. [13]
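A minimal sketch of the reactive idea follows, in Python; the sensor names, thresholds and motor values are invented for illustration and are not from Brooks's papers. Control is a direct mapping from the current sensor readings to motor commands, with no world model and no planner in between.

def reactive_step(sensors):
    """Map the current sensor readings directly to a motor command."""
    if sensors["bumper"]:                       # collision: back up while turning
        return {"left": -1.0, "right": -0.5}
    if sensors["range"] < 0.3:                  # obstacle close ahead: veer away
        return {"left": 1.0, "right": 0.2}
    return {"left": 1.0, "right": 1.0}          # otherwise keep moving forward

# One tick of the control loop: no map is built and no plan is searched;
# the robot simply responds to what it senses right now.
command = reactive_step({"bumper": False, "range": 0.2})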
Douglas Lenat's Cyc project, initiated in 1984 as one of the earliest and most ambitious projects to capture all of human knowledge in machine-readable form, is "a determinedly scruffy enterprise". [14] The Cyc database contains millions of facts about all the complexities of the world, each of which must be entered one at a time by knowledge engineers. Each of these entries is an ad hoc addition to the intelligence of the system. While there may be a "neat" solution to the problem of commonsense knowledge (such as machine learning algorithms with natural language processing that could study the text available over the internet), no such project has yet been successful.
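The toy sketch below (illustrative only; the predicate names are invented and this is not Cyc's actual notation) shows what entering facts one at a time amounts to: each assertion is a separate, hand-checked addition to the knowledge base.

knowledge_base = []

def assert_fact(predicate, *args):
    """A knowledge engineer adds one hand-checked statement at a time."""
    knowledge_base.append((predicate, *args))

assert_fact("is_a", "Rover", "Dog")
assert_fact("subclass_of", "Dog", "Mammal")
assert_fact("subclass_of", "Mammal", "Animal")
assert_fact("capable_of", "Dog", "Barking")
# ...and so on, for millions of entries, each an ad hoc addition to the system.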
In 1986 Marvin Minsky published The Society of Mind, which advocated a view of intelligence and the mind as an interacting community of modules or agents, each handling a different aspect of cognition: some modules are specialized for very specific tasks (e.g. edge detection in the visual cortex), while others manage communication and prioritization (e.g. planning and attention in the frontal lobes). Minsky presented this paradigm both as a model of biological human intelligence and as a blueprint for future work in AI.
This paradigm is explicitly "scruffy" in that it does not expect there to be a single algorithm that can be applied to all of the tasks involved in intelligent behavior. [15] Minsky wrote:
What magical trick makes us intelligent? The trick is that there is no trick. The power of intelligence stems from our vast diversity, not from any single, perfect principle. [16]
As of 1991, Minsky was still publishing papers evaluating the relative advantages of the neat versus scruffy approaches, e.g. “Logical Versus Analogical or Symbolic Versus Connectionist or Neat Versus Scruffy”. [17]
New statistical and mathematical approaches to AI were developed in the 1990s, using highly developed formalisms such as mathematical optimization and neural networks. Pamela McCorduck wrote that "As I write, AI enjoys a Neat hegemony, people who believe that machine intelligence, at least, is best expressed in logical, even mathematical terms." [6] This general trend towards more formal methods in AI was described as "the victory of the neats" by Peter Norvig and Stuart Russell in 2003. [18]
However, by 2021, Russell and Norvig had changed their minds. [19] Deep learning networks and machine learning in general require extensive fine tuning -- they must be iteratively tested until they begin to show the desired behavior. This is a scruffy methodology.