SSS* is a search algorithm, introduced by George Stockman in 1979, that conducts a state space search traversing a game tree in a best-first fashion similar to that of the A* search algorithm.
SSS* is based on the notion of solution trees. Informally, a solution tree can be formed from any arbitrary game tree by reducing the number of branches at each MAX node to one. Such a tree represents a complete strategy for MAX, since it specifies exactly one MAX action for every possible sequence of moves made by the opponent. Given a game tree, SSS* searches through the space of partial solution trees, gradually analyzing larger and larger subtrees, eventually producing a single solution tree with the same root and minimax value as the original game tree. SSS* never examines a node that alpha–beta pruning would prune, and may prune some branches that alpha–beta would not. Stockman speculated that SSS* may therefore be a better general algorithm than alpha–beta.

However, Igor Roizen and Judea Pearl have shown [1] that the savings in the number of positions that SSS* evaluates relative to alpha–beta are limited and generally not enough to compensate for the increase in other resources (e.g., the storing and sorting of a list of nodes made necessary by the best-first nature of the algorithm). Aske Plaat, Jonathan Schaeffer, Wim Pijls and Arie de Bruin have since shown that a sequence of null-window alpha–beta calls is equivalent to SSS* (i.e., it expands the same nodes in the same order) when alpha–beta is used with a transposition table, as is the case in all game-playing programs for chess, checkers, etc. With this reformulation the storing and sorting of an OPEN list are no longer necessary, which allowed the implementation of (an algorithm equivalent to) SSS* in tournament-quality game-playing programs. Experiments showed that it did indeed perform better than alpha–beta in practice, but that it did not beat NegaScout. [2]
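To make the solution-tree notion concrete, the following minimal Python sketch enumerates the solution trees of a tiny, invented game tree (the tree, the node encoding and all function names are illustrative, not part of SSS* itself). It checks the defining property: the best value guaranteed by any solution tree equals the minimax value of the game tree.

    from itertools import product

    # A tiny game tree, invented for illustration: internal nodes are
    # ('MAX', children) or ('MIN', children); leaves are integers.
    GAME = ('MAX', [
        ('MIN', [4, 8]),
        ('MIN', [6, 2]),
    ])

    def minimax(node):
        """Plain minimax value of a node."""
        if isinstance(node, int):
            return node
        kind, children = node
        values = [minimax(c) for c in children]
        return max(values) if kind == 'MAX' else min(values)

    def solution_trees(node):
        """Enumerate solution trees: keep exactly one branch at each MAX node
        and every branch at each MIN node, in the same nested representation."""
        if isinstance(node, int):
            return [node]
        kind, children = node
        if kind == 'MAX':
            return [('MAX', [t]) for c in children for t in solution_trees(c)]
        per_child = [solution_trees(c) for c in children]
        return [('MIN', list(combo)) for combo in product(*per_child)]

    def guaranteed_value(tree):
        """The value a solution tree guarantees MAX: the minimum over its
        leaves, since MIN is free to steer play toward any of them."""
        if isinstance(tree, int):
            return tree
        _, children = tree
        return min(guaranteed_value(c) for c in children)

    best = max(guaranteed_value(t) for t in solution_trees(GAME))
    assert best == minimax(GAME)   # both are 4 for the tree above
    print(best)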
The reformulation of a best-first algorithm as a sequence of depth-first calls prompted the formulation of a class of null-window alpha–beta algorithms, of which MTD(f) is the best known example.
There is a priority queue OPEN that stores states of nodes. Each state is a triple (J, s, h), where J is the node identifier (dot-decimal notation is used to identify nodes; e, the empty identifier, denotes the root), s is the state of the node (L means the node is live, i.e., not yet solved; S means the node is solved), and h is the value of the node (exact once the node is solved, an upper bound while it is live). Items in the OPEN queue are sorted descending by their h value. If more than one node has the same value of h, the node left-most in the tree is chosen.
    OPEN := { (e, L, inf) }
    while true do                  // repeat until stopped
        pop an element p = (J, s, h) from the head of the OPEN queue
        if J = e and s = S then
            STOP the algorithm and return h as the result
        else
            apply the Γ operator to p
The Γ operator for a state p = (J, s, h) is defined in the following way:
    if s = L then
        if J is a terminal node then
            (1.) add (J, S, min(h, value(J))) to OPEN
        else if J is a MIN node then
            (2.) add (J.1, L, h) to OPEN
        else                               // J is a MAX node
            (3.) for j = 1..number_of_children(J): add (J.j, L, h) to OPEN
    else                                   // s = S
        if J is a MIN node then
            (4.) add (parent(J), S, h) to OPEN
                 remove from OPEN all the states associated with descendants of parent(J)
        else if is_last_child(J) then      // J is the last child of parent(J)
            (5.) add (parent(J), S, h) to OPEN
        else                               // J = parent(J).k is not the last child
            (6.) add (parent(J).(k+1), L, h) to OPEN   // add the state associated with the next child of parent(J) to OPEN
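The Γ operator can be turned into a short program. Below is a minimal Python sketch that follows the six cases literally on a small invented tree; node identifiers are dot-decimal tuples, with the empty tuple standing for the root e. Two simplifications for illustration: OPEN is a plain list re-scanned each iteration rather than a real priority queue, and the example tree is chosen so that leaves appear only under MIN nodes, which keeps the "J is a MIN node" test equivalent to testing the parent's type.

    import math

    # Invented game tree in dot-decimal form: the root () is a MAX node,
    # its children are MIN nodes, and the leaves carry static values.
    TREE = {
        ():     ('MAX', [(1,), (2,), (3,)]),
        (1,):   ('MIN', [(1, 1), (1, 2)]),
        (2,):   ('MIN', [(2, 1), (2, 2)]),
        (3,):   ('MIN', [(3, 1), (3, 2)]),
        (1, 1): ('LEAF', 5), (1, 2): ('LEAF', 9),
        (2, 1): ('LEAF', 8), (2, 2): ('LEAF', 7),
        (3, 1): ('LEAF', 2), (3, 2): ('LEAF', 6),
    }

    def kind(j):          return TREE[j][0]
    def children(j):      return TREE[j][1]
    def value(j):         return TREE[j][1]          # only valid for leaves
    def parent(j):        return j[:-1]
    def is_last_child(j): return j == children(parent(j))[-1]
    def next_sibling(j):
        siblings = children(parent(j))
        return siblings[siblings.index(j) + 1]

    def sss_star():
        open_list = [((), 'L', math.inf)]            # OPEN starts with the live root
        while True:
            # best entry: largest h, ties broken by the leftmost node in the tree
            entry = min(open_list, key=lambda e: (-e[2], e[0]))
            open_list.remove(entry)
            J, s, h = entry
            if J == () and s == 'S':
                return h                             # root solved: h is the minimax value
            if s == 'L':
                if kind(J) == 'LEAF':
                    open_list.append((J, 'S', min(h, value(J))))        # case 1
                elif kind(J) == 'MIN':
                    open_list.append((children(J)[0], 'L', h))          # case 2
                else:                                                   # MAX node
                    open_list.extend((c, 'L', h) for c in children(J))  # case 3
            else:                                                       # s == 'S'
                if kind(J) == 'MIN':
                    p = parent(J)                                       # case 4
                    open_list = [e for e in open_list if e[0][:len(p)] != p]
                    open_list.append((p, 'S', h))
                elif is_last_child(J):
                    open_list.append((parent(J), 'S', h))               # case 5
                else:
                    open_list.append((next_sibling(J), 'L', h))         # case 6

    print(sss_star())   # 7, the minimax value: max(min(5,9), min(8,7), min(2,6))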
Minmax is a decision rule used in artificial intelligence, decision theory, game theory, statistics, and philosophy for minimizing the possible loss for a worst case scenario. When dealing with gains, it is referred to as "maximin" – to maximize the minimum gain. Originally formulated for several-player zero-sum game theory, covering both the cases where players take alternate moves and those where they make simultaneous moves, it has also been extended to more complex games and to general decision-making in the presence of uncertainty.
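As a small illustration of the maximin rule, the following Python lines pick the action whose worst-case payoff is best; the payoff matrix is invented for the example.

    # Payoffs for the maximizing (row) player in a two-player zero-sum game;
    # rows are the maximizer's actions, columns the opponent's replies.
    payoffs = [
        [3, -2, 4],
        [1,  0, 2],
        [-1, 5, 0],
    ]

    # Maximin: assume the opponent answers each of our actions with the reply
    # that hurts us most, then pick the action whose worst case is best.
    worst_case = [min(row) for row in payoffs]        # [-2, 0, -1]
    best_action = max(range(len(payoffs)), key=lambda i: worst_case[i])
    print(best_action, worst_case[best_action])       # 1 0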
Dijkstra's algorithm is an algorithm for finding the shortest paths between nodes in a weighted graph, which may represent, for example, road networks. It was conceived by computer scientist Edsger W. Dijkstra in 1956 and published three years later.
A binary heap is a heap data structure that takes the form of a binary tree. Binary heaps are a common way of implementing priority queues. The binary heap was introduced by J. W. J. Williams in 1964, as a data structure for heapsort.
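For a sense of how a binary heap serves as the priority queue an OPEN list needs, here is a brief sketch using Python's heapq module (a binary min-heap); since SSS* wants the largest h first, the sketch stores negated values, a common workaround rather than anything specific to SSS*.

    import heapq

    # heapq maintains a list as a binary min-heap; store -h to pop the largest h first.
    open_heap = []
    heapq.heappush(open_heap, (-float('inf'), (1,)))   # (priority key, node id)
    heapq.heappush(open_heap, (-7, (2, 2)))
    heapq.heappush(open_heap, (-5, (1, 1)))

    neg_h, node = heapq.heappop(open_heap)             # entry with the largest h
    print(node, -neg_h)                                # (1,) inf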
Breadth-first search (BFS) is an algorithm for searching a tree data structure for a node that satisfies a given property. It starts at the tree root and explores all nodes at the present depth prior to moving on to the nodes at the next depth level. Extra memory, usually a queue, is needed to keep track of the child nodes that were encountered but not yet explored.
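A minimal BFS sketch in Python; the tree dictionary, the goal test and the node names are invented for illustration.

    from collections import deque

    def bfs(tree, root, is_goal):
        """Return the first node satisfying is_goal in breadth-first order,
        or None if no such node exists. tree maps a node to its children."""
        queue = deque([root])
        while queue:
            node = queue.popleft()            # FIFO order gives level-by-level search
            if is_goal(node):
                return node
            queue.extend(tree.get(node, []))  # remember children for later expansion
        return None

    example = {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['F']}
    print(bfs(example, 'A', lambda n: n == 'E'))   # 'E', found at depth 2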
A* is a graph traversal and path search algorithm, which is used in many fields of computer science due to its completeness, optimality, and optimal efficiency. One major practical drawback is its space complexity, as it stores all generated nodes in memory. Thus, in practical travel-routing systems, it is generally outperformed by algorithms that can pre-process the graph to attain better performance, as well as memory-bounded approaches; however, A* is still the best solution in many cases.
Best-first search is a class of search algorithms, which explores a graph by expanding the most promising node chosen according to a specified rule.
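A generic best-first search can be sketched in a few lines; the parameters successors, score and is_goal are illustrative, and treating the lowest score as "most promising" is simply the convention chosen here.

    import heapq

    def best_first_search(start, successors, score, is_goal):
        """Always expand the frontier node with the best (lowest) score.
        successors(n) yields child states; score(n) is the ranking rule."""
        frontier = [(score(start), start)]
        seen = {start}
        while frontier:
            _, node = heapq.heappop(frontier)       # most promising node first
            if is_goal(node):
                return node
            for child in successors(node):
                if child not in seen:
                    seen.add(child)
                    heapq.heappush(frontier, (score(child), child))
        return None

    # Toy usage: reach a number of at least 20 from 1, preferring states close to 20.
    print(best_first_search(1, lambda n: [n + 3, n * 2], lambda n: abs(20 - n),
                            lambda n: n >= 20))     # 22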
Alpha–beta pruning is a search algorithm that seeks to decrease the number of nodes that are evaluated by the minimax algorithm in its search tree. It is an adversarial search algorithm used commonly for machine playing of two-player games. It stops evaluating a move when at least one possibility has been found that proves the move to be worse than a previously examined move. Such moves need not be evaluated further. When applied to a standard minimax tree, it returns the same move as minimax would, but prunes away branches that cannot possibly influence the final decision.
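A compact alpha-beta sketch in Python over a nested-list game tree (the tree and representation are invented for illustration); it returns the same value plain minimax would, usually after evaluating fewer leaves.

    def alphabeta(node, alpha, beta, maximizing):
        """Minimax with alpha-beta pruning: internal nodes are lists of
        children, leaves are numbers (scores from MAX's point of view)."""
        if isinstance(node, (int, float)):
            return node
        if maximizing:
            value = float('-inf')
            for child in node:
                value = max(value, alphabeta(child, alpha, beta, False))
                alpha = max(alpha, value)
                if alpha >= beta:       # remaining siblings cannot affect the result
                    break
            return value
        value = float('inf')
        for child in node:
            value = min(value, alphabeta(child, alpha, beta, True))
            beta = min(beta, value)
            if alpha >= beta:           # cutoff: MAX already has a better option
                break
        return value

    tree = [[3, 5], [6, 9], [1, 2]]     # invented example tree
    print(alphabeta(tree, float('-inf'), float('inf'), True))   # 6 = max(3, 6, 1)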
In the context of combinatorial game theory, which typically studies sequential games with perfect information, a game tree is a graph representing all possible game states within such a game. Such games include well-known ones such as chess, checkers, Go, and tic-tac-toe. This can be used to measure the complexity of a game, as it represents all the possible ways a game can pan out. Due to the large game trees of complex games such as chess, algorithms that are designed to play this class of games will use partial game trees, which makes computation feasible on modern computers. Various methods exist to solve game trees. If a complete game tree can be generated, a deterministic algorithm, such as backward induction or retrograde analysis can be used. Randomized algorithms and minimax algorithms such as MCTS can be used in cases where a complete game tree is not feasible.
Branch and bound is a method for solving optimization problems by breaking them down into smaller sub-problems and using a bounding function to eliminate sub-problems that cannot contain the optimal solution. It is an algorithm design paradigm for discrete and combinatorial optimization problems, as well as mathematical optimization. A branch-and-bound algorithm consists of a systematic enumeration of candidate solutions by means of state space search: the set of candidate solutions is thought of as forming a rooted tree with the full set at the root. The algorithm explores branches of this tree, which represent subsets of the solution set. Before enumerating the candidate solutions of a branch, the branch is checked against upper and lower estimated bounds on the optimal solution, and is discarded if it cannot produce a better solution than the best one found so far by the algorithm.
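A small branch-and-bound sketch for the 0/1 knapsack problem; the instance is invented, and the bounding function (current value plus the total value of all undecided items) is deliberately simple rather than the tightest bound one would use in practice.

    def knapsack_branch_and_bound(weights, values, capacity):
        """Each tree node fixes a prefix of include/exclude decisions; a branch
        is discarded when its optimistic bound cannot beat the incumbent."""
        n = len(weights)
        suffix_value = [0] * (n + 1)
        for i in range(n - 1, -1, -1):
            suffix_value[i] = suffix_value[i + 1] + values[i]

        best = 0
        def branch(i, weight, value):
            nonlocal best
            best = max(best, value)
            if i == n:
                return
            if value + suffix_value[i] <= best:   # bound: cannot beat the incumbent
                return
            if weight + weights[i] <= capacity:   # branch 1: take item i
                branch(i + 1, weight + weights[i], value + values[i])
            branch(i + 1, weight, value)          # branch 2: skip item i
        branch(0, 0, 0)
        return best

    # Invented instance: the optimum is 90 (items 0 and 2, weight 3 + 5 <= 8).
    print(knapsack_branch_and_bound([3, 4, 5], [40, 30, 50], 8))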
The expectiminimax algorithm is a variation of the minimax algorithm, for use in artificial intelligence systems that play two-player zero-sum games, such as backgammon, in which the outcome depends on a combination of the player's skill and chance elements such as dice rolls. In addition to "min" and "max" nodes of the traditional minimax tree, this variant has "chance" nodes, which take the expected value of a random event occurring. In game theory terms, an expectiminimax tree is the game tree of an extensive-form game of perfect, but incomplete information.
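A minimal expectiminimax sketch; the tree encoding with explicit probabilities at chance nodes is invented for the example.

    def expectiminimax(node):
        """Evaluate a tree whose internal nodes are ('max', children),
        ('min', children) or ('chance', [(probability, child), ...]);
        leaves are numbers. Chance nodes return the expected value."""
        if isinstance(node, (int, float)):
            return node
        kind, children = node
        if kind == 'max':
            return max(expectiminimax(c) for c in children)
        if kind == 'min':
            return min(expectiminimax(c) for c in children)
        return sum(p * expectiminimax(c) for p, c in children)   # chance node

    # MAX chooses between a sure 3 and a fair coin flip over 0 or 10.
    tree = ('max', [3, ('chance', [(0.5, 0), (0.5, 10)])])
    print(expectiminimax(tree))   # 5.0: the gamble's expected value beats the sure 3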
Negamax search is a variant form of minimax search that relies on the zero-sum property of a two-player game.
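In negamax form, one evaluation function suffices because a position's value for the player to move is the negation of its value for the opponent; a minimal sketch over the same invented tree as in the alpha-beta example above:

    def negamax(node, color):
        """Internal nodes are lists of children; leaves are scores from MAX's
        point of view. color is +1 when MAX is to move and -1 otherwise."""
        if isinstance(node, (int, float)):
            return color * node
        return max(-negamax(child, -color) for child in node)

    tree = [[3, 5], [6, 9], [1, 2]]
    print(negamax(tree, 1))          # 6, identical to the minimax value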
Principal variation search (sometimes equated with the practically identical NegaScout) is a negamax algorithm that can be faster than alpha–beta pruning. Like alpha–beta pruning, NegaScout is a directional search algorithm for computing the minimax value of a node in a tree. It dominates alpha–beta pruning in the sense that it will never examine a node that can be pruned by alpha–beta; however, it relies on accurate node ordering to capitalize on this advantage.
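A sketch of principal variation search in negamax form, following the widely published pseudocode: the first child is searched with the full window, the rest with a null window, and a re-search is done only when the null-window probe suggests a better move. The nested-list tree is invented and lists the best move first, which is the ordering the method benefits from.

    def pvs(node, alpha, beta, color):
        """Internal nodes are lists of children, ideally ordered best-first;
        leaves are scores from MAX's point of view."""
        if isinstance(node, (int, float)):
            return color * node
        first = True
        for child in node:
            if first:
                score = -pvs(child, -beta, -alpha, -color)
                first = False
            else:
                score = -pvs(child, -alpha - 1, -alpha, -color)   # null-window probe
                if alpha < score < beta:                          # probe failed high
                    score = -pvs(child, -beta, -alpha, -color)    # full re-search
            alpha = max(alpha, score)
            if alpha >= beta:
                break                                             # beta cutoff
        return alpha

    tree = [[6, 9], [3, 5], [1, 2]]      # best move ordered first
    print(pvs(tree, float('-inf'), float('inf'), 1))   # 6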
MTD(f) is an alpha-beta game tree search algorithm modified to use ‘zero-window’ initial search bounds, and memory (usually a transposition table) to reuse intermediate search results. MTD(f) is a shortened form of MTD(n,f) which stands for Memory-enhanced Test Driver with node ‘n’ and value ‘f’. The efficacy of this paradigm depends on a good initial guess, and the supposition that the final minimax value lies in a narrow window around the guess (which becomes an upper/lower bound for the search from root). The memory structure is used to save an initial guess determined elsewhere.
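As a concrete illustration of the zero-window idea, here is a minimal MTD(f)-style driver in Python. It reuses the fail-soft alphabeta() sketch shown after the alpha–beta pruning paragraph above, assumes integer-valued leaf evaluations, and omits the transposition table that makes the method efficient in practice; it is a sketch of the control flow only, not a tournament implementation.

    def mtdf(root, first_guess):
        """Converge on the minimax value with a sequence of null-window
        alpha-beta calls, narrowing an (lower, upper) bound interval."""
        g = first_guess
        upper, lower = float('inf'), float('-inf')
        while lower < upper:
            beta = max(g, lower + 1)                 # null window is (beta - 1, beta)
            g = alphabeta(root, beta - 1, beta, True)
            if g < beta:
                upper = g                            # failed low: g is an upper bound
            else:
                lower = g                            # failed high: g is a lower bound
        return g

    # With the invented tree used earlier and an initial guess of 0, this converges to 6.
    print(mtdf([[3, 5], [6, 9], [1, 2]], 0))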
Alexander Reinefeld is a German computer scientist and games researcher. He is the head of computer science at Zuse Institute Berlin. His contributions to the field include the NegaScout algorithm.
SMA* or Simplified Memory Bounded A* is a shortest path algorithm based on the A* algorithm. The main advantage of SMA* is that it uses a bounded memory, while the A* algorithm might need exponential memory. All other characteristics of SMA* are inherited from A*.
In computer science, a min-max heap is a complete binary tree data structure which combines the usefulness of both a min-heap and a max-heap, that is, it provides constant time retrieval and logarithmic time removal of both the minimum and maximum elements in it. This makes the min-max heap a very useful data structure to implement a double-ended priority queue. Like binary min-heaps and max-heaps, min-max heaps support logarithmic insertion and deletion and can be built in linear time. Min-max heaps are often represented implicitly in an array; hence the structure is referred to as an implicit data structure.
In computer science, Monte Carlo tree search (MCTS) is a heuristic search algorithm for some kinds of decision processes, most notably those employed in software that plays board games. In that context MCTS is used to solve the game tree.
Best node search (BNS), originally known as fuzzified game tree search, is a minimax search algorithm, developed in 2011. The idea is that the knowledge that one subtree is relatively better than some other(s) may be propagated sooner than the absolute value of minimax for that subtree. Then a repetitive search narrows until a particular node is shown to be relatively best.