Artificial Intelligence: A General Survey, commonly known as the Lighthill report, is a scholarly article by James Lighthill, published in Artificial Intelligence: a paper symposium in 1973. [1] It was compiled by Lighthill for the British Science Research Council (SRC) as an evaluation of academic research in the field of artificial intelligence (AI). The report gave a very pessimistic prognosis for many core aspects of research in this field, stating that "In no part of the field have the discoveries made so far produced the major impact that was then promised". It "formed the basis for the decision by the British government to end support for AI research in most British universities", [2] contributing to an AI winter in Britain.
The SRC commissioned the report in 1972, asking Lighthill to "make a personal review of the subject [of AI]". Lighthill completed it in July 1972. The SRC discussed the report in September and decided to publish it, together with some alternative points of view by Stuart Sutherland, Roger Needham, Christopher Longuet-Higgins, and Donald Michie. [1]: preface The SRC's decision to invite the report was partly a reaction to high levels of discord within the University of Edinburgh's Department of Artificial Intelligence, one of the earliest and largest centres for AI research in the UK. [3]
On May 9, 1973, Lighthill debated the report with several leading AI researchers (Donald Michie, John McCarthy, and Richard Gregory) at the Royal Institution in London. [4]
While the report was supportive of research into the simulation of neurophysiological and psychological processes, it was "highly critical of basic research in foundational areas such as robotics and language processing". [1] The report stated that AI researchers had failed to address the issue of combinatorial explosion when solving problems within real-world domains: while AI techniques might work within small problem domains, they would not scale up to solve more realistic problems. The report reflects the pessimistic view of AI that set in after the early excitement in the field.
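The scale of the combinatorial explosion can be illustrated with a short calculation. The Python sketch below counts the nodes in a complete game tree as a function of branching factor and search depth; the branching factors are rough illustrative figures (not values taken from the report), but they show why an exhaustive search that is tractable in a toy domain becomes hopeless in a realistic one such as chess.

```python
# Illustration of the combinatorial explosion in exhaustive tree search.
# The branching factors below are rough, commonly quoted illustrative figures,
# not numbers taken from the Lighthill report.

def tree_size(branching_factor: int, depth: int) -> int:
    """Total number of nodes in a complete tree explored to the given depth."""
    return sum(branching_factor ** d for d in range(depth + 1))

# A toy domain with only a few legal moves per state stays manageable...
print(tree_size(branching_factor=4, depth=10))    # 1,398,101 nodes

# ...but chess, with roughly 35 legal moves per position, does not.
print(tree_size(branching_factor=35, depth=10))   # roughly 2.8e15 nodes
```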
The report divides AI research into three categories: category A, advanced automation; category B, building robots, intended as a "bridge" between the other two; and category C, computer-based research into the central nervous system.
Projects in category A had had some success, but only in narrow domains where a large quantity of detailed knowledge was built into the design of the program. This was disappointing to researchers who had hoped for generic methods. Because of the combinatorial explosion, the amount of detailed knowledge the program required quickly grew too large to be entered by hand, confining projects to these narrow domains.
Projects in category C had had some measure of success. Artificial neural networks were successfully used to model neurobiological data. SHRDLU demonstrated that human use of language, even in fine details, depends on semantics and knowledge rather than on syntax alone; this was influential in psycholinguistics. Extending SHRDLU to larger domains of discourse was nevertheless considered impractical, again because of the combinatorial explosion.
Projects in category B were held to be failures. One important project, that of "programming and building a robot that would mimic human ability in a combination of eye-hand co-ordination and common-sense problem solving", was considered entirely disappointing. Similarly, chess-playing programs performed no better than human amateurs. Because of the combinatorial explosion, the running time of general-purpose algorithms quickly grew impractical, requiring detailed problem-specific heuristics.
The report expected that, within the next 25 years, category A would simply become part of applied engineering, category C would be integrated with psychology and neurobiology, and category B would be abandoned.
Artificial intelligence (AI), in its broadest sense, is intelligence exhibited by machines, particularly computer systems. It is a field of research in computer science that develops and studies methods and software that enable machines to perceive their environment and use learning and intelligence to take actions that maximize their chances of achieving defined goals. Such machines may be called AIs.
Planner is a programming language designed by Carl Hewitt at MIT, and first published in 1969. First, subsets such as Micro-Planner and Pico-Planner were implemented, and then essentially the whole language was implemented as Popler by Julian Davies at the University of Edinburgh in the POP-2 programming language. Derivations such as QA4, Conniver, QLISP and Ether were important tools in artificial intelligence research in the 1970s, which influenced commercial developments such as Knowledge Engineering Environment (KEE) and Automated Reasoning Tool (ART).
Natural language understanding (NLU) or natural language interpretation (NLI) is a subset of natural language processing in artificial intelligence that deals with machine reading comprehension. NLU has been considered an AI-hard problem.
In artificial intelligence, symbolic artificial intelligence is the term for the collection of all methods in artificial intelligence research that are based on high-level symbolic (human-readable) representations of problems, logic and search. Symbolic AI used tools such as logic programming, production rules, semantic nets and frames, and it developed applications such as knowledge-based systems, symbolic mathematics, automated theorem provers, ontologies, the semantic web, and automated planning and scheduling systems. The Symbolic AI paradigm led to seminal ideas in search, symbolic programming languages, agents, multi-agent systems, the semantic web, and the strengths and limitations of formal knowledge and reasoning systems.
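As a rough illustration of the style of technique the paradigm relies on, the following Python sketch implements a minimal forward-chaining production-rule system; the facts and rules are invented for illustration and are not taken from any particular system named above.

```python
# Minimal forward-chaining production-rule sketch (illustrative only).
# Facts are strings; a rule fires when all of its conditions are known facts,
# adding its conclusion to the set of facts until nothing new can be derived.

rules = [
    ({"bird(tweety)"}, "can_fly(tweety)"),
    ({"can_fly(tweety)", "caged(tweety)"}, "frustrated(tweety)"),
]

facts = {"bird(tweety)", "caged(tweety)"}

changed = True
while changed:
    changed = False
    for conditions, conclusion in rules:
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(sorted(facts))
# ['bird(tweety)', 'caged(tweety)', 'can_fly(tweety)', 'frustrated(tweety)']
```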
In the history of artificial intelligence (AI), neat and scruffy are two contrasting approaches to AI research. The distinction was made in the 1970s, and was a subject of discussion until the mid-1980s.
Artificial general intelligence (AGI) is a type of artificial intelligence (AI) that matches or surpasses human cognitive capabilities across a wide range of cognitive tasks. This contrasts with narrow AI, which is limited to specific tasks. Artificial superintelligence (ASI), on the other hand, refers to AGI that greatly exceeds human cognitive capabilities. AGI is considered one of the definitions of strong AI.
The School of Informatics is an academic unit of the University of Edinburgh, in Scotland, responsible for research, teaching, outreach and commercialisation in informatics. It was created in 1998 from the former department of artificial intelligence, the Centre for Cognitive Science and the department of computer science, along with the Artificial Intelligence Applications Institute (AIAI) and the Human Communication Research Centre.
Sir Michael James Lighthill was a British applied mathematician, known for his pioneering work in the field of aeroacoustics and for writing the Lighthill report, which pessimistically stated that "In no part of the field have the discoveries made so far produced the major impact that was then promised", contributing to the gloomy climate of AI winter.
General Problem Solver (GPS) is a computer program created in 1957 by Herbert A. Simon, J. C. Shaw, and Allen Newell, intended to work as a universal problem solver machine. In contrast to the earlier Logic Theorist project, GPS works with means–ends analysis.
Automated planning and scheduling, sometimes referred to simply as AI planning, is a branch of artificial intelligence concerned with the realization of strategies or action sequences, typically for execution by intelligent agents, autonomous robots and unmanned vehicles. Unlike classical control and classification problems, the solutions are complex and must be discovered and optimized in a multidimensional space. Planning is also related to decision theory.
The history of artificial intelligence (AI) began in antiquity, with myths, stories, and rumors of artificial beings endowed with intelligence or consciousness by master craftsmen. The study of logic and formal reasoning from antiquity to the present led directly to the invention of the programmable digital computer in the 1940s, a machine based on abstract mathematical reasoning. This device and the ideas behind it inspired scientists to begin discussing the possibility of building an electronic brain.
The blocks world is a planning domain in artificial intelligence. The domain consists of a set of wooden blocks of various shapes and colors sitting on a table. The goal is to build one or more vertical stacks of blocks. Only one block may be moved at a time: it may either be placed on the table or placed atop another block. Because of this, any block that is, at a given time, under another block cannot be moved. Moreover, some kinds of blocks cannot have other blocks stacked on top of them.
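A minimal Python sketch of one common way to represent a blocks-world state and enumerate its legal moves is given below; the stack-of-lists representation and the example state are assumptions for illustration, and the simplification ignores blocks that cannot be stacked upon.

```python
# Blocks-world sketch: a state is a list of stacks, each stack a list of block
# names from bottom to top.  Only the top block of a stack is clear and may be
# moved, either onto the table (a new stack) or onto another clear block.
# (This simplified sketch does not model blocks that cannot be stacked upon.)

state = [["A", "B"], ["C"]]          # B sits on A; C stands alone on the table

def legal_moves(stacks):
    """Yield (block, destination) pairs; destination is a stack index or 'table'."""
    for i, src in enumerate(stacks):
        if not src:
            continue
        block = src[-1]                       # only a clear block may move
        yield block, "table"
        for j, dst in enumerate(stacks):
            if j != i and dst:
                yield block, j                # place it on top of stack j

def apply_move(stacks, block, dest):
    """Return a new state with the named clear block moved to dest."""
    new = [s[:] for s in stacks]
    src = next(s for s in new if s and s[-1] == block)
    src.pop()
    if dest == "table":
        new.append([block])
    else:
        new[dest].append(block)
    return [s for s in new if s]              # drop any emptied stack

print(list(legal_moves(state)))        # [('B', 'table'), ('B', 1), ('C', 'table'), ('C', 0)]
print(apply_move(state, "B", 1))       # [['A'], ['C', 'B']]
```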
In the history of artificial intelligence, an AI winter is a period of reduced funding and interest in artificial intelligence research. The field has experienced several hype cycles, followed by disappointment and criticism, followed by funding cuts, followed by renewed interest years or even decades later.
Donald Michie was a British researcher in artificial intelligence. During World War II, Michie worked for the Government Code and Cypher School at Bletchley Park, contributing to the effort to solve "Tunny", a German teleprinter cipher.
The following outline is provided as an overview of and topical guide to artificial intelligence:
Freddy (1969–1971) and Freddy II (1973–1976) were experimental robots built in the Department of Machine Intelligence and Perception at the University of Edinburgh.
Joseph Leslie Armstrong was a computer scientist working in the area of fault-tolerant distributed systems. He is best known as one of the co-designers of the Erlang programming language.
In artificial intelligence research, the situated approach builds agents that are designed to behave effectively in their environment. This requires designing AI "from the bottom up" by focussing on the basic perceptual and motor skills required to survive. The situated approach gives a much lower priority to abstract reasoning or problem-solving skills.
The Turing Institute was an artificial intelligence laboratory in Glasgow, Scotland, between 1983 and 1994. The company undertook basic and applied research, working directly with large companies across Europe, the United States and Japan developing software as well as providing training, consultancy and information services.
Thomas L. Dean is an American computer scientist known for his work in robot planning, probabilistic graphical models, and computational neuroscience. He was one of the first to introduce ideas from operations research and control theory to artificial intelligence. In particular, he introduced the idea of the anytime algorithm and was the first to apply the factored Markov decision process to robotics. He has authored several influential textbooks on artificial intelligence.