British Museum algorithm

The British Museum algorithm is a general approach to finding a solution by checking all possibilities one by one, beginning with the smallest. The term refers to a conceptual, not a practical, technique where the number of possibilities is enormous.
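As a rough sketch (not part of the source definition), the idea can be expressed as exhaustive generation of candidate solutions in order of increasing size, stopping at the first one that passes a goal test; the names below, such as is_solution, are illustrative assumptions.

```python
from itertools import count, product

def british_museum_search(alphabet, is_solution, max_length=None):
    """Enumerate every string over `alphabet`, shortest first, and
    return the first one accepted by the `is_solution` predicate."""
    lengths = count(0) if max_length is None else range(max_length + 1)
    for length in lengths:
        for letters in product(alphabet, repeat=length):
            candidate = "".join(letters)
            if is_solution(candidate):
                return candidate
    return None

# Example: the shortest string of a's and b's that contains "aba".
print(british_museum_search("ab", lambda s: "aba" in s))  # -> "aba"
```

Enumerating from the smallest candidate upward guarantees that the first hit is a smallest solution, which is exactly why the method is conceptually complete but practically hopeless when the number of possibilities is enormous.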

Newell, Shaw, and Simon [1] called this procedure the British Museum algorithm:

"... since it seemed to them as sensible as placing monkeys in front of typewriters in order to reproduce all the books in the British Museum."

Sources

This article incorporates public domain material from Paul E. Black. "British Museum technique". Dictionary of Algorithms and Data Structures. NIST.

Related Research Articles

Herbert A. Simon: American political scientist, economist, sociologist, and psychologist

Herbert Alexander Simon was an American political scientist, with a Ph.D. in political science, whose work also influenced the fields of computer science, economics, and cognitive psychology. His primary research interest was decision-making within organizations, and he is best known for the theories of "bounded rationality" and "satisficing". He received the Nobel Memorial Prize in Economic Sciences in 1978 and the Turing Award in computer science in 1975. His research was noted for its interdisciplinary nature and spanned the fields of cognitive science, computer science, public administration, management, and political science. He was at Carnegie Mellon University for most of his career, from 1949 to 2001, where he helped found the Carnegie Mellon School of Computer Science, one of the first such departments in the world.

Public-key cryptography: Cryptographic system with public and private keys

Public-key cryptography, or asymmetric cryptography, is the field of cryptographic systems that use pairs of related keys. Each key pair consists of a public key and a corresponding private key. Key pairs are generated with cryptographic algorithms based on mathematical problems termed one-way functions. Security of public-key cryptography depends on keeping the private key secret; the public key can be openly distributed without compromising security.
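As a toy illustration of the key-pair idea (textbook RSA with tiny primes, insecure and purely for exposition; the numbers are the classic worked example, not anything from the source):

```python
# Toy textbook RSA: the modular-exponentiation trapdoor behind one
# well-known public-key scheme. Never use tiny primes in practice.
p, q = 61, 53
n = p * q                      # public modulus
phi = (p - 1) * (q - 1)
e = 17                         # public exponent, coprime with phi
d = pow(e, -1, phi)            # private exponent (modular inverse, Python 3.8+)

message = 65
ciphertext = pow(message, e, n)    # anyone can encrypt with the public key (e, n)
recovered = pow(ciphertext, d, n)  # only the holder of d can decrypt
assert recovered == message
```

The asymmetry rests on a one-way function: multiplying p and q is easy, while recovering them (and hence d) from n alone is believed to be hard at realistic key sizes.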

Painter's algorithm: Algorithm for visible surface determination in 3D graphics

The painter’s algorithm is an algorithm for visible surface determination in 3D computer graphics that works on a polygon-by-polygon basis rather than the pixel-by-pixel, row-by-row, or area-by-area basis of other hidden-surface removal algorithms. The painter’s algorithm creates images by sorting the polygons within the image by their depth and placing each polygon in order from the farthest to the closest object.
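A minimal sketch of the back-to-front ordering (the `draw` callback and the (depth, polygon) representation are assumptions, not part of the source):

```python
def painters_algorithm(polygons, draw):
    """Render polygons back to front: sort by depth, farthest first,
    so nearer polygons are painted over farther ones.

    `polygons` is an iterable of (depth, polygon) pairs and `draw` is a
    caller-supplied rasterisation callback; both are illustrative."""
    for depth, polygon in sorted(polygons, key=lambda p: p[0], reverse=True):
        draw(polygon)
```

This simple form ignores the classic failure cases (cyclically overlapping or intersecting polygons), which full implementations handle by splitting polygons.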

A* is a graph traversal and path search algorithm, which is used in many fields of computer science due to its completeness, optimality, and optimal efficiency. One major practical drawback is its space complexity, as it stores all generated nodes in memory. Thus, in practical travel-routing systems, it is generally outperformed by algorithms that can pre-process the graph to attain better performance, as well as memory-bounded approaches; however, A* is still the best solution in many cases.
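A compact sketch of A* over an implicit graph (the `neighbors` and `heuristic` callbacks are assumptions for illustration):

```python
import heapq
from itertools import count

def a_star(start, goal, neighbors, heuristic):
    """A* search. `neighbors(n)` yields (successor, step_cost) pairs and
    `heuristic(n)` estimates, without overestimating, the cost to `goal`."""
    tie = count()                  # tie-breaker so nodes themselves are never compared
    open_heap = [(heuristic(start), next(tie), 0, start, [start])]
    best_g = {start: 0}
    while open_heap:
        _, _, g, node, path = heapq.heappop(open_heap)
        if node == goal:
            return path, g
        for nxt, cost in neighbors(node):
            new_g = g + cost
            if new_g < best_g.get(nxt, float("inf")):
                best_g[nxt] = new_g
                heapq.heappush(
                    open_heap,
                    (new_g + heuristic(nxt), next(tie), new_g, nxt, path + [nxt]),
                )
    return None, float("inf")
```

Keeping every generated node in `best_g` and on the heap is precisely the space cost mentioned above.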

Alpha–beta pruning is a search algorithm that seeks to decrease the number of nodes that are evaluated by the minimax algorithm in its search tree. It is an adversarial search algorithm used commonly for machine playing of two-player games. It stops evaluating a move when at least one possibility has been found that proves the move to be worse than a previously examined move. Such moves need not be evaluated further. When applied to a standard minimax tree, it returns the same move as minimax would, but prunes away branches that cannot possibly influence the final decision.
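A minimal sketch of the pruning rule (the `children` and `evaluate` callbacks are assumptions):

```python
def alphabeta(node, depth, alpha, beta, maximizing, children, evaluate):
    """Minimax with alpha-beta pruning. `children(node)` lists successor
    positions; `evaluate(node)` scores a leaf from the maximizer's view."""
    succ = children(node)
    if depth == 0 or not succ:
        return evaluate(node)
    if maximizing:
        value = float("-inf")
        for child in succ:
            value = max(value, alphabeta(child, depth - 1, alpha, beta, False,
                                         children, evaluate))
            alpha = max(alpha, value)
            if alpha >= beta:      # the minimizer will never allow this branch
                break
        return value
    value = float("inf")
    for child in succ:
        value = min(value, alphabeta(child, depth - 1, alpha, beta, True,
                                     children, evaluate))
        beta = min(beta, value)
        if alpha >= beta:          # the maximizer already has something better
            break
    return value
```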

Experimental mathematics is an approach to mathematics in which computation is used to investigate mathematical objects and identify properties and patterns. It has been defined as "that branch of mathematics that concerns itself ultimately with the codification and transmission of insights within the mathematical community through the use of experimental exploration of conjectures and more informal beliefs and a careful analysis of the data acquired in this pursuit."
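As a toy example of the experimental style (the particular identity is the well-known Basel problem, chosen here only for illustration):

```python
import math

# Numerically probe the identity sum(1/n^2) = pi^2 / 6.
partial = sum(1 / n**2 for n in range(1, 100_000))
print(partial, math.pi**2 / 6, abs(partial - math.pi**2 / 6))
```

The computation proves nothing on its own, but close agreement of this kind is the sort of evidence that guides conjecture and subsequent proof.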

Allen Newell: American cognitive scientist

Allen Newell was a researcher in computer science and cognitive psychology at the RAND Corporation and at Carnegie Mellon University’s School of Computer Science, Tepper School of Business, and Department of Psychology. He contributed to the Information Processing Language (1956) and two of the earliest AI programs, the Logic Theory Machine (1956) and the General Problem Solver (1957). He was awarded the ACM's A.M. Turing Award along with Herbert A. Simon in 1975 for their basic contributions to artificial intelligence and the psychology of human cognition.

Information Processing Language (IPL) is a programming language created by Allen Newell, Cliff Shaw, and Herbert A. Simon at RAND Corporation and the Carnegie Institute of Technology about 1956. Newell had the job of language specifier-application programmer, Shaw was the system programmer, and Simon had the job of application programmer-user.

John Clifford Shaw was a systems programmer at the RAND Corporation. He was a coauthor of the first artificial intelligence program, the Logic Theorist, and was one of the developers of the General Problem Solver and the Information Processing Language. He is considered the true "father" of the JOSS language. One of the most significant developments in early programming was the concept of list processing, worked out by Allen Newell, Herbert A. Simon, and Cliff Shaw during the development of the language IPL-V. He invented the linked list, which remains fundamental in many strands of modern computing technology.
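A minimal singly linked list in modern terms, to illustrate the structure credited above (the class and function names are illustrative only):

```python
class Node:
    """One cell of a singly linked list: a value plus a link to the next cell."""
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def from_iterable(values):
    """Build a linked list from an iterable and return its head node."""
    head = None
    for value in reversed(list(values)):
        head = Node(value, head)
    return head

def traverse(head):
    """Yield the values of a linked list in order."""
    while head is not None:
        yield head.value
        head = head.next

print(list(traverse(from_iterable([1, 2, 3]))))  # [1, 2, 3]
```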

Logic in computer science: Academic discipline

Logic in computer science covers the overlap between the field of logic and that of computer science. The topic can essentially be divided into three main areas.

General Problem Solver (GPS) is a computer program created in 1957 by Herbert A. Simon, J. C. Shaw, and Allen Newell, intended to work as a universal problem solver machine. In contrast to the earlier Logic Theorist project, GPS works with means–ends analysis.

Mathematical psychology: Mathematical modeling of psychological theories and phenomena

Mathematical psychology is an approach to psychological research that is based on mathematical modeling of perceptual, thought, cognitive and motor processes, and on the establishment of law-like rules that relate quantifiable stimulus characteristics with quantifiable behavior. The mathematical approach is used with the goal of deriving hypotheses that are more exact and thus yield stricter empirical validations. There are five major research areas in mathematical psychology: learning and memory, perception and psychophysics, choice and decision-making, language and thinking, and measurement and scaling.

A physical symbol system takes physical patterns (symbols), combines them into structures (expressions), and manipulates them to produce new expressions.

Means–ends analysis (MEA) is a problem-solving technique used commonly in artificial intelligence (AI) for limiting search in AI programs.
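A deliberately simplified, non-recursive sketch of the idea (not GPS itself; the set-of-facts state and the (name, preconditions, adds, deletes) operator format are assumptions):

```python
def means_ends_analysis(state, goal, operators, max_steps=100):
    """Greedy means-ends loop: while the goal has unmet conditions, apply an
    operator whose effects remove at least one of those differences.

    `state` and `goal` are sets of facts; each operator is a hypothetical
    (name, preconditions, adds, deletes) tuple of a name and three sets."""
    plan = []
    for _ in range(max_steps):
        differences = goal - state
        if not differences:
            return plan
        for name, preconditions, adds, deletes in operators:
            if preconditions <= state and adds & differences:
                state = (state - deletes) | adds
                plan.append(name)
                break
        else:
            return None        # no operator reduces any remaining difference
    return None
```

A full means-ends analyser also recurses on unmet preconditions, treating them as subgoals; this sketch only shows the difference-reduction loop that limits the search.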

Hubert Dreyfus's views on artificial intelligence: Overview of Hubert Dreyfus's views on artificial intelligence

Hubert Dreyfus was a critic of artificial intelligence research. In a series of papers and books, including Alchemy and AI (1965), What Computers Can't Do (1972), and Mind over Machine (1986), he presented a pessimistic assessment of AI's progress and a critique of the philosophical foundations of the field. Dreyfus' objections are discussed in most introductions to the philosophy of artificial intelligence, including Russell & Norvig (2003), the standard AI textbook, and in Fearn (2007), a survey of contemporary philosophy.

In molecular modelling, docking is a method which predicts the preferred orientation of one molecule to another when bound together in a stable complex. In the case of protein docking, the search space consists of all possible orientations of the protein with respect to the ligand. Flexible docking in addition considers all possible conformations of the protein paired with all possible conformations of the ligand.

Logic Theorist is a computer program written in 1956 by Allen Newell, Herbert A. Simon, and Cliff Shaw. It was the first program deliberately engineered to perform automated reasoning and has been called "the first artificial intelligence program". It would eventually prove 38 of the first 52 theorems in Whitehead and Russell's Principia Mathematica and find new and more elegant proofs for some.

In mathematical optimization and computer science, a heuristic is a technique designed to solve a problem more quickly when classic methods are too slow, or to find an approximate solution when classic methods fail to find any exact solution. This is achieved by trading optimality, completeness, accuracy, or precision for speed. In a way, it can be considered a shortcut.
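A small illustration of the trade-off, using the travelling salesman problem (the distance-table format and function names are assumptions):

```python
from itertools import permutations

def tour_length(order, dist):
    """Total length of a closed tour visiting the cities in `order`."""
    return sum(dist[a][b] for a, b in zip(order, order[1:] + order[:1]))

def exact_tsp(cities, dist):
    """Exact but exponential: examine every possible tour."""
    return min(permutations(cities), key=lambda t: tour_length(list(t), dist))

def nearest_neighbour_tsp(cities, dist):
    """Heuristic: always visit the closest unvisited city. Fast, not optimal."""
    order, remaining = [cities[0]], set(cities[1:])
    while remaining:
        nxt = min(remaining, key=lambda c: dist[order[-1]][c])
        order.append(nxt)
        remaining.remove(nxt)
    return order
```

The heuristic runs in polynomial time but may return a noticeably longer tour; the exact search guarantees optimality at exponential cost.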

Computational creativity: Multidisciplinary endeavour

Computational creativity is a multidisciplinary endeavour that is located at the intersection of the fields of artificial intelligence, cognitive psychology, philosophy, and the arts.

References