Candidate move

In abstract strategy board games, candidate moves are moves which, upon initial observation of the position, seem to warrant further analysis.[1][2][3] Although in theory the idea of candidate moves can be applied to games such as checkers, go, and xiangqi, it is most often used in the context of chess.

History

The idea of candidate moves was first put forth by Grandmaster Alexander Kotov in his book Think Like a Grandmaster. In it, Kotov recommended looking for several moves that seemed feasible – the so-called candidate moves – and then analyzing those moves one at a time. Although this idea had been practiced by expert chess players for some time, it had never been explicitly articulated, and was relatively unknown to players at the amateur level.

The idea quickly caught on, and is now considered standard practice among chess players at all levels. Many beginning players are taught about candidate moves as soon as they learn to play the game, and there are numerous references to the idea in other chess books.

Finding candidate moves

Finding the correct candidate moves is often one of the most difficult aspects of becoming a better chess player. Kotov, as well as other teachers, recommends using a system of pattern recognition, looking at the elements of the current position to determine what might be a feasible move. For example, if a player notices that the opponent's king is on the g8 square and that his own knight is on f3, then a candidate move might be Ng5, a fairly common start to a sacrifice.
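
A minimal sketch of this kind of pattern lookup, in Python (the pattern table, the piece encoding, and the find_candidates helper are all invented for this illustration; real pattern recognition is far richer):

    # A toy pattern-recognition pass over a position: if a known motif is
    # present, propose the associated move as a candidate. Squares map to
    # piece codes ("N" = white knight, "k" = black king, "p" = black pawn).

    # Each pattern is a set of (square, piece) conditions plus the move it suggests.
    PATTERNS = [
        ({"f3": "N", "g8": "k"}, "Ng5"),    # knight hop toward the castled king
        ({"c4": "B", "f7": "p"}, "Bxf7+"),  # classic bishop-takes-f7 motif
    ]

    def find_candidates(position):
        """Return candidate moves whose pattern conditions all hold in the position."""
        candidates = []
        for conditions, move in PATTERNS:
            if all(position.get(square) == piece for square, piece in conditions.items()):
                candidates.append(move)
        return candidates

    # Fragment of a position: white knight on f3, black king castled on g8.
    position = {"f3": "N", "g1": "K", "g8": "k", "f7": "p"}
    print(find_candidates(position))        # ['Ng5']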

Once a player has found a good number of candidate moves (every position is different, although four to six moves is usually the norm), he or she may then begin to analyze these moves systematically. The idea behind candidate moves is to help structure one's analysis and prevent it from becoming jumbled; inexperienced players who do not carefully consider candidate moves will often find themselves jumping between lines of analysis haphazardly.

Computer chess

The ability of humans to find candidate moves remains one of the main differences between them and computers. Although early chess programmers made admirable efforts to make computers able to select candidate moves (see Type A versus Type B programs), such programs never played particularly well, and they were soon supplanted by computers using brute-force algorithms (Shenk, 2006). The addition of alpha–beta pruning made the latter approach even more feasible. Many acknowledged that computers were simply not capable of performing the complex pattern recognition required to find appropriate candidate moves, and that it was easier to have computers perform simple exhaustive searches.
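
The distinction can be illustrated with a small sketch (the game tree, its scores, and the candidate shortlists below are all invented; a real Type B program chooses its shortlist with far more sophisticated heuristics):

    # Toy contrast between a "Type A" full-width search and a "Type B" selective
    # search that only follows a shortlist of candidate moves.

    TREE = {                        # node -> list of (move, child) pairs
        "root": [("a", "A"), ("b", "B"), ("c", "C")],
        "A": [("a1", 3), ("a2", 5)],
        "B": [("b1", 6), ("b2", 2)],
        "C": [("c1", 4), ("c2", 1)],
    }

    def best_reply_value(node):
        """Leaves are numeric scores; internal nodes take the opponent's best (minimum) reply."""
        if isinstance(node, int):
            return node
        return min(best_reply_value(child) for _, child in TREE[node])

    def type_a(node):
        """Full-width: evaluate every legal move."""
        return max(TREE[node], key=lambda mc: best_reply_value(mc[1]))[0]

    def type_b(node, candidates):
        """Selective: evaluate only the shortlisted candidate moves."""
        shortlist = [mc for mc in TREE[node] if mc[0] in candidates]
        return max(shortlist, key=lambda mc: best_reply_value(mc[1]))[0]

    print(type_a("root"))                  # 'a' (best against best defence)
    print(type_b("root", {"a", "c"}))      # 'a' -- fine, the shortlist contains the best move
    print(type_b("root", {"b", "c"}))      # 'b' -- a poor shortlist silently misses 'a'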

Today, most chess programs still rely mainly on brute-force searches, but as search algorithms have improved, modern chess engines seem more and more to be using candidate moves in their analysis. Hydra and AlphaZero, for example, are widely considered to be "Type B" (candidate-move-finding) engines.

Related Research Articles

Minimax is a decision rule used in artificial intelligence, decision theory, game theory, statistics, and philosophy for minimizing the possible loss in a worst-case scenario. When dealing with gains, it is referred to as "maximin" – to maximize the minimum gain. Originally formulated for several-player zero-sum game theory, covering both the cases where players take alternate moves and those where they make simultaneous moves, it has also been extended to more complex games and to general decision-making in the presence of uncertainty.
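
As a rough illustration, a minimal minimax over a toy two-ply game tree might look like this in Python (the tree and its leaf values are invented; leaves are scored from the maximizing player's point of view):

    # Minimal minimax: leaves are static scores, internal nodes alternate
    # between the maximizing and the minimizing player.

    def minimax(node, maximizing):
        if isinstance(node, (int, float)):      # leaf: static evaluation
            return node
        values = (minimax(child, not maximizing) for child in node)
        return max(values) if maximizing else min(values)

    # Two plies: our move (max), then the opponent's reply (min).
    tree = [[3, 5], [6, 2], [4, 1]]
    print(minimax(tree, maximizing=True))       # 3: the maximin value of the position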

Search algorithm – any algorithm which solves the search problem

In computer science, a search algorithm is an algorithm designed to solve a search problem. Search algorithms work to retrieve information stored within a particular data structure, or calculated in the search space of a problem domain, with either discrete or continuous values.
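
For example, binary search is a classic search algorithm over a discrete, ordered search space (the data below are arbitrary):

    # Binary search halves the candidate range on every comparison and returns
    # the index of the target value, or None if it is absent.

    def binary_search(sorted_items, target):
        lo, hi = 0, len(sorted_items) - 1
        while lo <= hi:
            mid = (lo + hi) // 2
            if sorted_items[mid] == target:
                return mid                    # found: index of the target
            if sorted_items[mid] < target:
                lo = mid + 1                  # discard the lower half
            else:
                hi = mid - 1                  # discard the upper half
        return None                           # target is not present

    print(binary_search([2, 3, 5, 7, 11, 13, 17], 11))   # 4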

Computer chess – computer hardware and software capable of playing chess

Computer chess includes both hardware and software capable of playing chess. Computer chess provides opportunities for players to practice even in the absence of human opponents, and also provides opportunities for analysis, entertainment and training. Computer chess applications that play at the level of a chess master or higher are available on hardware from supercomputers to smart phones. Standalone chess-playing machines are also available. Stockfish, GNU Chess, Fruit, and other free open source applications are available for various platforms.

In computer science, brute-force search or exhaustive search, also known as generate and test, is a very general problem-solving technique and algorithmic paradigm that consists of systematically checking all possible candidates to see whether each one satisfies the problem's statement.
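
A small generate-and-test sketch in Python, here applied to an arbitrary subset-sum instance (the numbers are invented; the point is that every candidate is checked, with no pruning or heuristics):

    # Generate-and-test: enumerate every subset of a small set and keep those
    # whose sum hits the target.

    from itertools import combinations

    def subset_sum_brute_force(numbers, target):
        solutions = []
        for r in range(len(numbers) + 1):
            for subset in combinations(numbers, r):   # generate a candidate...
                if sum(subset) == target:             # ...and test it
                    solutions.append(subset)
        return solutions

    print(subset_sum_brute_force([3, 9, 8, 4, 5, 7], 15))
    # [(8, 7), (3, 8, 4), (3, 5, 7)] -- every satisfying subset, found exhaustively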

This glossary of chess explains commonly used terms in chess, in alphabetical order. Some of these terms have their own pages, like fork and pin. For a list of unorthodox chess pieces, see Fairy chess piece; for a list of terms specific to chess problems, see Glossary of chess problems; for a list of named opening lines, see List of chess openings; for a list of chess-related games, see List of chess variants; for a list of terms general to board games, see Glossary of board games.

Alpha–beta pruning is a search algorithm that seeks to decrease the number of nodes that are evaluated by the minimax algorithm in its search tree. It is an adversarial search algorithm used commonly for machine playing of two-player games. It stops evaluating a move when at least one possibility has been found that proves the move to be worse than a previously examined move. Such moves need not be evaluated further. When applied to a standard minimax tree, it returns the same move as minimax would, but prunes away branches that cannot possibly influence the final decision.
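
A minimal sketch of the idea on a toy tree (values invented), which returns the same result as plain minimax while skipping leaves that cannot affect the decision:

    # Alpha-beta pruning: identical result to minimax, but branches that cannot
    # influence the final choice are cut off.

    def alphabeta(node, alpha, beta, maximizing):
        if isinstance(node, (int, float)):          # leaf: static evaluation
            return node
        if maximizing:
            value = float("-inf")
            for child in node:
                value = max(value, alphabeta(child, alpha, beta, False))
                alpha = max(alpha, value)
                if alpha >= beta:                   # opponent already has a better option
                    break                           # prune the remaining children
            return value
        else:
            value = float("inf")
            for child in node:
                value = min(value, alphabeta(child, alpha, beta, True))
                beta = min(beta, value)
                if alpha >= beta:
                    break
            return value

    tree = [[3, 5], [2, 6], [1, 4]]
    print(alphabeta(tree, float("-inf"), float("inf"), True))
    # 3, the same as minimax; the leaves 6 and 4 are never evaluated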

In computer science and artificial intelligence, combinatorial search studies search algorithms for solving instances of problems that are believed to be hard in general, by efficiently exploring the usually large solution space of these instances. Combinatorial search algorithms achieve this efficiency by reducing the effective size of the search space or employing heuristics. Some algorithms are guaranteed to find the optimal solution, while others may only return the best solution found in the part of the state space that was explored.

Computer Go – field of artificial intelligence around Go computer programs

Computer Go is the field of artificial intelligence (AI) dedicated to creating a computer program that plays the traditional board game Go. The field is sharply divided into two eras. Before 2015, the available programs were weak. The best efforts of the 1980s and 1990s produced only AIs that could be defeated by beginners, and AIs of the early 2000s were intermediate level at best. Professionals could defeat these programs even when given handicaps of 10+ stones in favor of the AI. Many of the algorithms, such as alpha–beta minimax, that performed well as AIs for checkers and chess fell apart on Go's 19×19 board, as there were too many branching possibilities to consider. Creation of a human professional quality program with the techniques and hardware of the time was out of reach. Some AI researchers speculated that the problem was unsolvable without the creation of human-like AI.

The horizon effect, also known as the horizon problem, is a problem in artificial intelligence whereby, in many games, the number of possible states or positions is immense and computers can only feasibly search a small portion of them, typically a few plies down the game tree. Thus, for a computer searching only five plies, there is a possibility that it will make a detrimental move, but the effect is not visible because the computer does not search to the depth of the error.
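
A toy illustration of the effect (all numbers invented): a move that grabs material immediately looks best to a shallow search because its refutation lies just beyond the horizon, while a deeper search prefers the quiet alternative.

    # Each internal node is (static_score, children); leaves are plain numbers.
    # When the search runs out of depth it falls back on the static score.

    def search(node, depth, maximizing):
        static, children = node if isinstance(node, tuple) else (node, None)
        if children is None or depth == 0:
            return static                           # leaf, or the search horizon
        values = (search(child, depth - 1, not maximizing) for child in children)
        return max(values) if maximizing else min(values)

    # Move A: wins material at once (+1) but the piece is lost deeper in the line (-5).
    move_a = (+1, [(+1, [(+1, [-5])])])
    # Move B: a quiet move that simply keeps the balance.
    move_b = (0, [(0, [(0, [0])])])

    for depth in (2, 4):
        # After our move it is the opponent's turn, so each line is searched as minimizing.
        best = max(("A", move_a), ("B", move_b),
                   key=lambda named: search(named[1], depth, False))
        print(depth, best[0])   # depth 2 -> 'A' (loss beyond the horizon); depth 4 -> 'B'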

In the context of combinatorial game theory, which typically studies sequential games with perfect information, a game tree is a graph representing all possible game states within such a game. Such games include well-known ones such as chess, checkers, Go, and tic-tac-toe. This can be used to measure the complexity of a game, as it represents all the possible ways a game can pan out. Due to the large game trees of complex games such as chess, algorithms that are designed to play this class of games will use partial game trees, which makes computation feasible on modern computers. Various methods exist to solve game trees. If a complete game tree can be generated, a deterministic algorithm such as backward induction or retrograde analysis can be used. Randomized algorithms such as Monte Carlo tree search (MCTS), as well as depth-limited minimax algorithms, can be used in cases where a complete game tree is not feasible.

Combinatorial game theory – branch of game theory about two-player sequential games with perfect information

Combinatorial game theory is a branch of mathematics and theoretical computer science that typically studies sequential games with perfect information. Study has been largely confined to two-player games that have a position that the players take turns changing in defined ways or moves to achieve a defined winning condition. Combinatorial game theory has not traditionally studied games of chance or those that use imperfect or incomplete information, favoring games that offer perfect information, in which the state of the game and the set of available moves is always known by both players. However, as mathematical techniques advance, the types of game that can be mathematically analyzed expand, and thus the boundaries of the field are ever changing. Scholars will generally define what they mean by a "game" at the beginning of a paper, and these definitions often vary, as they are specific to the game being analyzed and are not meant to represent the entire scope of the field.

Alexander Kotov – Soviet chess player (1913–1981)

Alexander Alexandrovich Kotov was a Soviet chess grandmaster and author. He was a Soviet chess champion, a two-time world title Candidate, and a prolific writer on the subject of chess. Kotov served in high posts in the Soviet Chess Federation, and wrote most of his books during the Cold War. The importance and breadth of Kotov's work rank him among the all-time greats in this field.

Branch and bound is a method for solving optimization problems by breaking them down into smaller sub-problems and using a bounding function to eliminate sub-problems that cannot contain the optimal solution. It is an algorithm design paradigm for discrete and combinatorial optimization problems, as well as mathematical optimization. A branch-and-bound algorithm consists of a systematic enumeration of candidate solutions by means of state space search: the set of candidate solutions is thought of as forming a rooted tree with the full set at the root. The algorithm explores branches of this tree, which represent subsets of the solution set. Before enumerating the candidate solutions of a branch, the branch is checked against upper and lower estimated bounds on the optimal solution, and is discarded if it cannot produce a better solution than the best one found so far by the algorithm.
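
A compact branch-and-bound sketch for a small "best subset sum under a capacity" problem (the data are arbitrary): the bound says that if even taking every remaining item cannot beat the incumbent, the branch is discarded.

    # Branch on including/excluding each item; prune a branch when its
    # optimistic bound cannot improve on the best total found so far.

    def best_subset_sum(items, capacity):
        best = 0

        def explore(i, total, remaining):
            nonlocal best
            if total > capacity:
                return                           # infeasible: over capacity
            best = max(best, total)              # feasible: update the incumbent
            if i == len(items):
                return
            if total + remaining <= best:        # bound: even taking everything left
                return                           # cannot beat the incumbent, so prune
            explore(i + 1, total + items[i], remaining - items[i])   # branch: include item i
            explore(i + 1, total, remaining - items[i])              # branch: exclude item i

        explore(0, 0, sum(items))
        return best

    print(best_subset_sum([7, 9, 4, 5], 16))     # 16 (7 + 9)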

Principal variation search (also known as NegaScout) is a negamax algorithm that can be faster than alpha–beta pruning. Like alpha–beta pruning, it is a directional search algorithm for computing the minimax value of a node in a tree. It dominates alpha–beta pruning in the sense that it will never examine a node that can be pruned by alpha–beta; however, it relies on accurate node ordering to capitalize on this advantage.
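
A minimal principal variation search in negamax form over a toy tree (values invented, with the best move ordered first, which is the case PVS is designed to exploit):

    # The first child is searched with a full window; the rest get a cheap
    # null-window probe and are re-searched only if the probe fails high.

    def pvs(node, alpha, beta, color):
        if isinstance(node, (int, float)):          # leaf: score from White's viewpoint
            return color * node
        first = True
        for child in node:
            if first:
                score = -pvs(child, -beta, -alpha, -color)
                first = False
            else:
                score = -pvs(child, -alpha - 1, -alpha, -color)   # null-window probe
                if alpha < score < beta:                          # probe failed high:
                    score = -pvs(child, -beta, -score, -color)    # re-search, full window
            alpha = max(alpha, score)
            if alpha >= beta:
                break                                             # beta cutoff
        return alpha

    tree = [[3, 5], [2, 6], [1, 4]]     # best move first, as PVS prefers
    print(pvs(tree, float("-inf"), float("inf"), 1))   # 3, matching minimax and alpha-beta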

Alexander Brudno – Russian computer scientist (1918–2009)

Alexander L'vovich Brudno was a Russian computer scientist, best known for fully describing the alpha-beta pruning algorithm. From 1991 until his death he lived in Israel.

A variation can refer to a specific sequence of successive moves in a turn-based game, often used to specify a hypothetical future state of a game that is being played. Although the term is most commonly used in the context of chess analysis, it has been applied to other games. It is also a useful term when describing computer tree-search algorithms for playing games such as Go or chess.

Chess theory – basic chess fundamentals and ideas developed to better understand the game

The game of chess is commonly divided into three phases: the opening, middlegame, and endgame. There is a large body of theory regarding how the game should be played in each of these phases, especially the opening and endgame. Those who write about chess theory, who are often also eminent players, are referred to as "theorists" or "theoreticians".

AlphaZero – game-playing artificial intelligence

AlphaZero is a computer program developed by artificial intelligence research company DeepMind to master the games of chess, shogi and go. This algorithm uses an approach similar to AlphaGo Zero.

The history of chess began nearly 1,500 years ago, and over the past millennium and a half the game has changed drastically. No technology or strategy, however, has changed chess as much as the introduction of chess engines. Despite existing for only about 70 years, chess engines have molded and defined how top-level chess is played today.

References

  1. Pandolfini, Bruce (1995-04-18). Chess Thinking: The Visual Dictionary of Chess Moves, Rules, Strategies and Concepts. Simon and Schuster. p. 59. ISBN 978-0-671-79502-3.
  2. Harding, Tim (2012-05-23). "Unit 24 Complications". Better Chess for Average Players. Courier Corporation. ISBN 978-0-486-13369-0.
  3. Sutton, Richard S.; Barto, Andrew G. (2018-11-13). Reinforcement Learning: An Introduction. MIT Press. p. 425. ISBN 978-0-262-03924-6.