Anti-computer tactics

Deep Blue, a famous chess-playing computer that beat world champion Garry Kasparov in a human–computer match

Anti-computer tactics are methods used by humans to try to beat computer opponents at various games, most typically board games such as chess and Arimaa. They are most associated with competitions against computer AIs that are playing to their utmost to win, rather than AIs merely programmed to be an interesting challenge that can be given intentional weaknesses and quirks by the programmer (as in many video game AIs). Such tactics are most associated with the era when AIs searched a game tree with an evaluation function looking for promising moves, often using alpha–beta pruning or other minimax algorithms to narrow the search. Against such algorithms, a common tactic is to play conservatively, aiming for a long-term advantage. The theory is that this advantage will manifest slowly enough that the computer is unable to notice it in its search, and so will not play around the threat correctly. This may result in, for example, a subtle advantage that eventually turns into a winning chess endgame with a passed pawn. (Conversely, attempting to lure an AI into a short-term "trap", by inviting a move that seems reasonable to humans but is actually disastrous, will essentially never work against a computer in games of perfect information.)


The field is most associated with the 1990s and early 2000s, when computers were very strong at games such as chess, yet beatable. Even then, the efficacy of such tactics was questionable: several tactics, such as making unusual or suboptimal moves to quickly get the computer out of its opening book, proved ineffective in human–computer tournaments. The rise of machine learning has also dented the applicability of anti-computer tactics, as machine-learning algorithms tend to play the long game as well as or better than human players.

Common aspects

One aspect of designing a classic AI for games of perfect information is the horizon effect. Computer AIs examine a game tree of possible moves and counter-moves, but unless a forced win is in the tree, they need to stop exploring new possibilities eventually. When they do, an evaluation function is called on the board state, which often uses rough heuristics to determine which side the board favors. In chess, this might include material advantage (extra pieces), control of the center, king safety, and pawn structure. Human players can exploit the horizon effect by using a strategy whose fruits become apparent only beyond the plies examined by the AI. For example, if the AI examines 10 plies ahead, and a strategy will "pay off" in 12–20 plies (6–10 turns), the AI won't play around the looming threat it can't "see", much as a person cannot see "over the horizon" where a ship might be hidden by the natural curvature of the earth. Similarly, to keep the horizon short, human players may want to keep the board state as complicated as possible. Simplifying the board by trading pieces lets the AI look "farther" into the future, as there are fewer options to consider, and thus is avoided when trying to exploit the horizon effect.
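The horizon effect can be sketched with a toy depth-limited minimax over a hand-invented game tree; the moves, values, and encoding below are purely illustrative, not taken from any real engine:

```python
# A minimal sketch of the horizon effect. Each node is either a number
# (a heuristic evaluation from the engine's point of view) or a dict with
# a static "eval" and a "moves" mapping of move name -> child node.
# The tree and all values are invented for illustration.

def minimax(node, depth, maximizing=True):
    """Depth-limited minimax: below the horizon, trust the static eval."""
    if isinstance(node, (int, float)):   # terminal leaf with a known value
        return node
    if depth == 0:                       # horizon reached: static evaluation
        return node["eval"]
    children = node["moves"]
    scores = (minimax(c, depth - 1, not maximizing) for c in children.values())
    return max(scores) if maximizing else min(scores)

# Branch "a" looks good statically (+3) but hides a forced loss two plies
# deeper; branch "b" is a modest but genuine +1.
tree = {"moves": {
    "a": {"eval": 3, "moves": {"reply": {"eval": 3, "moves": {"trap": -100}}}},
    "b": 1,
}}

shallow = {m: minimax(c, 1, False) for m, c in tree["moves"].items()}
deep    = {m: minimax(c, 3, False) for m, c in tree["moves"].items()}
print(shallow)  # {'a': 3, 'b': 1}    -> the shallow search prefers "a"
print(deep)     # {'a': -100, 'b': 1} -> the deeper search avoids the trap
```

The shallow search rates "a" best because the trap lies past its horizon; widening the search by two plies reverses the preference, which is exactly the gap a human player tries to keep open.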

A tactic that works best against AIs that are very "deterministic", known to respond to a given threat in one specific way, is to force a situation where the human knows exactly how the AI will respond. If the human picks a situation that they believe the AI handles poorly, the AI can be reliably lured into it. Even if the AI handles that particular play style well, the human's confidence that the AI will always pick it simplifies preparation: the human can study that one situation very closely, knowing that the AI will always accept an invitation to play into that kind of board.

Game AIs based on Monte Carlo tree search (MCTS) have strengths and weaknesses opposite to those of alpha–beta AIs. While they tend to be better at long-term strategy, they have problems dealing with traps. [1] Once Monte Carlo AIs fall into a trap, they can continue to play badly for a considerable period afterwards and may not recover. [2]

While patiently accumulating an advantage may be a beneficial tactic against alpha–beta AIs, which play tactically, MCTS-based AIs like AlphaGo may themselves play in this patient, strategic manner. [2] Thus deliberately tactical play, a bad approach against alpha–beta, becomes a viable anti-computer tactic against MCTS.

Adversarial perturbations

Game AIs based on neural networks can be susceptible to adversarial perturbations, where playing a meaningless move alters the AI's evaluation of the position and makes it lose. Lan et al. developed an algorithm to find modifications of board states that would lead KataGo to play inferior moves. [3] However, like adversarial examples in image recognition, these attacks are hard to devise without computer assistance.

Chess

In the 1997 Deep Blue versus Garry Kasparov match, Kasparov played an anti-computer move at the start of the game to get Deep Blue out of its opening book. [4] Kasparov chose the unusual Mieses Opening, reasoning that the computer would play the opening poorly if it had to play by itself (that is, rely on its own skills rather than its opening book). [5] Kasparov played similar anti-computer openings in the other games of the match, but the tactic backfired. [6] After the second game, in which he chose the Ruy López, Kasparov wrote: "We decided that using the same passive anti-computer strategy with black would be too dangerous. With white I could control the pace of the game much better and wait for my chances. With black it would be safer to play a known opening even if it was in Deep Blue's book, especially if it was a closed opening where it would have difficulty finding a plan. The downside with this strategy, as in all the games, was that it wasn't my style either. While I was playing anti-computer chess I was also playing anti-Kasparov chess."

The Brains in Bahrain was an eight-game chess match between human chess grandmaster, and then World Champion, Vladimir Kramnik and the computer program Deep Fritz 7, held in October 2002. The match ended in a 4–4 tie, with two wins for each side and four draws, each worth half a point. [7]


Arimaa

Arimaa is a chess derivative specifically designed to be difficult for alpha–beta pruning AIs, inspired by Kasparov's loss to Deep Blue in 1997. It allows four actions per "move" for a player, greatly increasing the size of the search space, and games can reasonably end with a mostly full board and few captured pieces, avoiding endgame-tablebase-style "solved" positions that arise from scarcity of units. While human Arimaa players held out longer than their chess counterparts, they too fell to superior computer AIs in 2015. [8]

Related Research Articles

Deep Blue (chess computer) – Chess-playing computer made by IBM

Deep Blue was a chess-playing expert system run on a unique purpose-built IBM supercomputer. It was the first computer to win a game, and the first to win a match, against a reigning world champion under regular time controls. Development began in 1985 at Carnegie Mellon University under the name ChipTest. It then moved to IBM, where it was first renamed Deep Thought, then again in 1989 to Deep Blue. It first played world champion Garry Kasparov in a six-game match in 1996, where it lost four games to two. It was upgraded in 1997, and in a six-game rematch it defeated Kasparov by winning two games, drawing three, and losing one. Deep Blue's victory is considered a milestone in the history of artificial intelligence and has been the subject of several books and films.

Computer chess – Computer hardware and software capable of playing chess

Computer chess includes both hardware and software capable of playing chess. Computer chess provides opportunities for players to practice even in the absence of human opponents, and also provides opportunities for analysis, entertainment and training. Computer chess applications that play at the level of a chess grandmaster or higher are available on hardware from supercomputers to smart phones. Standalone chess-playing machines are also available. Stockfish, Leela Chess Zero, GNU Chess, Fruit, and other free open source applications are available for various platforms.

Alpha–beta pruning is a search algorithm that seeks to decrease the number of nodes that are evaluated by the minimax algorithm in its search tree. It is an adversarial search algorithm used commonly for machine playing of two-player combinatorial games. It stops evaluating a move when at least one possibility has been found that proves the move to be worse than a previously examined move. Such moves need not be evaluated further. When applied to a standard minimax tree, it returns the same move as minimax would, but prunes away branches that cannot possibly influence the final decision.
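A minimal sketch of alpha–beta pruning on a small hand-made tree (the tree, its values, and the list encoding are illustrative, not from any particular engine):

```python
import math

def alphabeta(node, alpha, beta, maximizing):
    """Minimax with alpha-beta cutoffs. Leaves are plain numbers and
    interior nodes are lists of children (an illustrative encoding)."""
    if isinstance(node, (int, float)):   # leaf: return its value directly
        return node
    if maximizing:
        value = -math.inf
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:   # cutoff: the minimizer will avoid this line
                break
        return value
    value = math.inf
    for child in node:
        value = min(value, alphabeta(child, alpha, beta, True))
        beta = min(beta, value)
        if beta <= alpha:       # cutoff: the maximizer already has better
            break
    return value

# Maximizer to move; each of the three replies is answered by a minimizer.
tree = [[3, 5], [6, 9], [1, 2]]
print(alphabeta(tree, -math.inf, math.inf, True))  # prints 6
```

In this tree the third branch is cut off after its first leaf (1): the minimizer can already hold that branch to at most 1, worse for the maximizer than the 6 found earlier, so the leaf 2 is never evaluated, yet the result matches plain minimax.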

Computer Go – Field of artificial intelligence around Go computer programs

Computer Go is the field of artificial intelligence (AI) dedicated to creating a computer program that plays the traditional board game Go. The field is sharply divided into two eras. Before 2015, the programs of the era were weak. The best efforts of the 1980s and 1990s produced only AIs that could be defeated by beginners, and AIs of the early 2000s were intermediate level at best. Professionals could defeat these programs even given handicaps of 10+ stones in favor of the AI. Many of the algorithms such as alpha–beta minimax that performed well as AIs for checkers and chess fell apart on Go's 19×19 board, as there were too many branching possibilities to consider. Creation of a human professional quality program with the techniques and hardware of the time was out of reach. Some AI researchers speculated that the problem was unsolvable without creation of human-like AI.

In the context of combinatorial game theory, which typically studies sequential games with perfect information, a game tree is a graph representing all possible game states within such a game. Such games include well-known ones such as chess, checkers, Go, and tic-tac-toe. This can be used to measure the complexity of a game, as it represents all the possible ways a game can pan out. Due to the large game trees of complex games such as chess, algorithms that are designed to play this class of games will use partial game trees, which makes computation feasible on modern computers. Various methods exist to solve game trees. If a complete game tree can be generated, a deterministic algorithm, such as backward induction or retrograde analysis, can be used. Randomized algorithms such as Monte Carlo tree search can be used in cases where generating a complete game tree is not feasible.

Arimaa is a two-player strategy board game that was designed to be playable with a standard chess set and difficult for computers while still being easy to learn and fun to play for humans. It was invented between 1997 and 2002 by Omar Syed, an Indian-American computer engineer trained in artificial intelligence. Syed was inspired by Garry Kasparov's defeat at the hands of the chess computer Deep Blue to design a new game which could be played with a standard chess set, would be difficult for computers to play well, but would have rules simple enough for his then four-year-old son Aamir to understand.

Deep Blue versus Kasparov, 1997, Game 6 – Chess game between human and computer

Game 6 of the Deep Blue–Kasparov rematch, played in New York City on May 11, 1997, starting at 3:00 p.m. EDT, was the last chess game in the 1997 rematch of Deep Blue versus Garry Kasparov.

Deep Blue versus Garry Kasparov – 1996 and 1997 chess matches

Deep Blue versus Garry Kasparov was a pair of six-game chess matches between then-world chess champion Garry Kasparov and an IBM supercomputer called Deep Blue. Kasparov won the first match, held in Philadelphia in 1996, by 4–2. Deep Blue won a 1997 rematch held in New York City by 3½–2½. The second match was the first defeat of a reigning world chess champion by a computer under tournament conditions, and was the subject of a documentary film, Game Over: Kasparov and the Machine.

This article documents the progress of significant human–computer chess matches.

Computer Arimaa refers to the playing of the board game Arimaa by computer programs.

In computer science, Monte Carlo tree search (MCTS) is a heuristic search algorithm for some kinds of decision processes, most notably those employed in software that plays board games. In that context MCTS is used to solve the game tree.
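A compact UCT-style MCTS sketch, using the toy game of Nim (take 1 or 2 stones; whoever takes the last stone wins) as a stand-in for a board game. All names, the exploration constant, and the iteration count are illustrative choices, not any particular engine's:

```python
import math, random

class Node:
    """One game state in the search tree: stones left and the player to move."""
    def __init__(self, stones, to_move, parent=None, move=None):
        self.stones, self.to_move = stones, to_move
        self.parent, self.move = parent, move
        self.children, self.visits, self.wins = [], 0, 0.0

    def untried_moves(self):
        tried = {c.move for c in self.children}
        return [m for m in (1, 2) if m <= self.stones and m not in tried]

def ucb1(child, parent, c=1.4):
    """Standard UCB1: exploit high win rates, explore rarely-visited moves."""
    return child.wins / child.visits + c * math.sqrt(math.log(parent.visits) / child.visits)

def rollout(stones, to_move):
    """Play random moves to the end; return the winning player (0 or 1)."""
    while stones > 0:
        stones -= random.choice([m for m in (1, 2) if m <= stones])
        to_move = 1 - to_move
    return 1 - to_move  # the player who just moved took the last stone

def mcts(stones, to_move, iters=2000):
    root = Node(stones, to_move)
    for _ in range(iters):
        node = root
        # 1. Selection: descend through fully expanded nodes via UCB1.
        while not node.untried_moves() and node.children:
            node = max(node.children, key=lambda c: ucb1(c, node))
        # 2. Expansion: add one untried child, if any.
        if node.untried_moves():
            m = random.choice(node.untried_moves())
            child = Node(node.stones - m, 1 - node.to_move, node, m)
            node.children.append(child)
            node = child
        # 3. Simulation: random playout from the new state.
        winner = rollout(node.stones, node.to_move) if node.stones else 1 - node.to_move
        # 4. Backpropagation: credit wins from the perspective of each node's mover.
        while node:
            node.visits += 1
            if winner != node.to_move:
                node.wins += 1
            node = node.parent
    return max(root.children, key=lambda c: c.visits).move

random.seed(0)
best = mcts(4, 0)
print(best)  # taking 1 stone leaves 3, a losing position for the opponent
```

The four numbered steps are the canonical MCTS loop; note that no evaluation function appears anywhere, which is why this family of algorithms has such a different strength profile from alpha–beta search.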

AlphaGo is a computer program that plays the board game Go. It was developed by the London-based DeepMind Technologies, an acquired subsidiary of Google. Subsequent versions of AlphaGo became increasingly powerful, including a version that competed under the name Master. After retiring from competitive play, AlphaGo Master was succeeded by an even more powerful version known as AlphaGo Zero, which was completely self-taught without learning from human games. AlphaGo Zero was then generalized into a program known as AlphaZero, which played additional games, including chess and shogi. AlphaZero has in turn been succeeded by a program known as MuZero which learns without being taught the rules.

Darkforest is a computer Go program developed by Meta Platforms, based on deep learning techniques using a convolutional neural network. Its updated version Darkfores2 combines the techniques of its predecessor with Monte Carlo tree search. The MCTS effectively takes tree search methods commonly seen in computer chess programs and randomizes them. With the update, the system is known as Darkfmcts3.

AlphaGo versus Lee Sedol, also known as the DeepMind Challenge Match, was a five-game Go match between top Go player Lee Sedol and AlphaGo, a computer Go program developed by DeepMind, played in Seoul, South Korea between 9 and 15 March 2016. AlphaGo won all but the fourth game; all games were won by resignation. The match has been compared with the historic chess match between Deep Blue and Garry Kasparov in 1997.

AlphaZero – Game-playing artificial intelligence

AlphaZero is a computer program developed by artificial intelligence research company DeepMind to master the games of chess, shogi and go. This algorithm uses an approach similar to AlphaGo Zero.

OpenAI Five is a computer program by OpenAI that plays the five-on-five video game Dota 2. Its first public appearance occurred in 2017, where it was demonstrated in a live one-on-one game against the professional player Dendi, who lost to it. The following year, the system had advanced to the point of performing as a full team of five, and began playing against and showing the capability to defeat professional teams.

Artificial intelligence and machine learning techniques are used in video games for a wide variety of applications such as non-player character (NPC) control and procedural content generation (PCG). Machine learning is a subset of artificial intelligence that uses historical data to build predictive and analytical models. This is in sharp contrast to traditional methods of artificial intelligence such as search trees and expert systems.

KataGo is a free and open-source computer Go program, capable of defeating top-level human players. First released on 27 February 2019, it is developed by David Wu.

The history of chess began nearly 1,500 years ago, and over the past century and a half the game has changed drastically. No technology or strategy, however, has changed chess as much as the introduction of chess engines. Though chess engines have existed for only about 70 years, they have molded and defined how top chess is played today.

References

  1. Baier, Hendrik; Winands, Mark H. M. (2014). "Monte-Carlo Tree Search and Minimax Hybrids with Heuristic Evaluation Functions". Computer Games (PDF). Vol. 504. Cham: Springer International Publishing. pp. 45–63. doi:10.1007/978-3-319-14923-3_4. ISBN 978-3-319-14922-6. "In tactical games such as Chess, a large number of traps exist in the search space. These require precise play to avoid immediate loss, and the selective sampling of MCTS based on average simulation outcomes can easily miss or underestimate an important move."
  2. "Lee Sedol defeats AlphaGo in masterful comeback". gogameguru.com. 2016-03-13. Archived from the original on 2018-03-06. Retrieved 2024-09-10.
  3. Lan, Li-Cheng; Zhang, Huan; Wu, Ti-Rong; Tsai, Meng-Yu; Wu, I-Chen; Hsieh, Cho-Jui (2022-12-06). "Are AlphaZero-like Agents Robust to Adversarial Perturbations?". Advances in Neural Information Processing Systems. 35: 11229–11240. Retrieved 2024-09-10.
  4. Daily Chess Columns – All the News That's Fit to Mock. 3) Anti-computer chess. ChessBase. Archived 2007-08-08 at the Wayback Machine.
  5. Chess Life, Special Summer Issue 1997.
  6. How Much Longer Can Man Match the Computer? - The Fall of Man from ChessCafe.com
  7. ChessBase.com - Chess News - Fritz Defends to Draw Game 8 and the Match! Final score: 4–4
  8. Wu, David (2015). "Designing a Winning Arimaa Program" (PDF). ICGA Journal. 38 (1): 19–40. doi:10.3233/ICG-2015-38104.