AlphaGo Zero


AlphaGo Zero is a version of DeepMind's Go software AlphaGo. The AlphaGo team published an article in the journal Nature on 19 October 2017 introducing AlphaGo Zero, a version created without using data from human games and stronger than any previous version. [1] By playing games against itself, AlphaGo Zero surpassed the strength of AlphaGo Lee in three days by winning 100 games to 0, reached the level of AlphaGo Master in 21 days, and exceeded all the old versions in 40 days. [2]


Training artificial intelligence (AI) without datasets derived from human experts has significant implications for the development of AI with superhuman skills, because expert data is "often expensive, unreliable or simply unavailable". [3] Demis Hassabis, the co-founder and CEO of DeepMind, said that AlphaGo Zero was so powerful because it was "no longer constrained by the limits of human knowledge". [4] Furthermore, AlphaGo Zero performed better than standard deep reinforcement learning models (such as DQN implementations [5] ) due to its integration of Monte Carlo tree search. David Silver, one of the first authors of DeepMind's papers on AlphaGo published in Nature, said that removing the need to learn from humans makes it possible to create generalised AI algorithms. [6]
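
The search component can be made concrete: during AlphaGo Zero's Monte Carlo tree search, each move is chosen to maximise the mean value estimate plus an exploration bonus weighted by the network's prior (the PUCT rule described in the paper). The Python sketch below is illustrative only; the dict-based node representation and the c_puct constant are assumptions, not DeepMind's code.

```python
import math

# Illustrative sketch of the PUCT move-selection rule described in the
# AlphaGo Zero paper. The dict-based statistics and c_puct are assumptions.
def select_action(Q, N, P, c_puct=1.0):
    """Pick the action maximising Q(s,a) + U(s,a).

    Q: mean action value per action, N: visit count per action,
    P: network prior probability per action (all dicts over legal moves).
    """
    total_visits = sum(N.values())

    def score(a):
        # U(s,a) = c_puct * P(s,a) * sqrt(sum_b N(s,b)) / (1 + N(s,a))
        return Q[a] + c_puct * P[a] * math.sqrt(total_visits) / (1 + N[a])

    return max(P, key=score)
```

Early in the search, an unvisited move with a high prior receives a large bonus, while heavily visited moves must sustain a high mean value Q to keep being selected; this is how the learned policy guides exploration without any human-derived evaluation heuristics.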

Google later developed AlphaZero, a generalized version of AlphaGo Zero that could play chess and shogi in addition to Go. [7] In December 2017, AlphaZero beat the 3-day version of AlphaGo Zero by winning 60 games to 40, and after 8 hours of training it outperformed AlphaGo Lee in Elo rating. AlphaZero also defeated a top chess program (Stockfish) and a top shogi program (Elmo). [8] [9]

Architecture

The network in AlphaGo Zero is a residual neural network (ResNet) with two heads: a policy head that outputs a probability distribution over moves, and a value head that estimates the expected outcome of the game from the current position. [1] (Appendix: Methods)
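
As a rough illustration of this shape, the PyTorch sketch below builds a scaled-down two-headed residual network: a shared convolutional trunk, a policy head producing per-move logits, and a value head producing a scalar evaluation in [-1, 1]. This is not DeepMind's implementation (which was written in TensorFlow), and the block and filter counts here are deliberately small; the published network used 20 or 40 residual blocks of 256 filters over 17 input feature planes.

```python
# Minimal sketch of a two-headed residual network in the AlphaGo Zero style.
# All sizes are illustrative, not DeepMind's configuration.
import torch
import torch.nn as nn
import torch.nn.functional as F

BOARD = 19          # Go board size
IN_PLANES = 17      # input feature planes (stone history + colour to play)
FILTERS = 64        # illustrative; the paper used 256
BLOCKS = 4          # illustrative; the paper used 20 or 40

class ResidualBlock(nn.Module):
    def __init__(self, c):
        super().__init__()
        self.conv1 = nn.Conv2d(c, c, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(c)
        self.conv2 = nn.Conv2d(c, c, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(c)

    def forward(self, x):
        y = F.relu(self.bn1(self.conv1(x)))
        y = self.bn2(self.conv2(y))
        return F.relu(x + y)  # skip connection

class PolicyValueNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv2d(IN_PLANES, FILTERS, 3, padding=1, bias=False),
            nn.BatchNorm2d(FILTERS), nn.ReLU())
        self.trunk = nn.Sequential(*[ResidualBlock(FILTERS) for _ in range(BLOCKS)])
        # Policy head: logits over every board point plus the pass move.
        self.policy = nn.Sequential(
            nn.Conv2d(FILTERS, 2, 1), nn.BatchNorm2d(2), nn.ReLU(),
            nn.Flatten(), nn.Linear(2 * BOARD * BOARD, BOARD * BOARD + 1))
        # Value head: scalar evaluation of the position in [-1, 1].
        self.value = nn.Sequential(
            nn.Conv2d(FILTERS, 1, 1), nn.BatchNorm2d(1), nn.ReLU(),
            nn.Flatten(), nn.Linear(BOARD * BOARD, 64), nn.ReLU(),
            nn.Linear(64, 1), nn.Tanh())

    def forward(self, x):
        h = self.trunk(self.stem(x))
        return self.policy(h), self.value(h)
```

Sharing one trunk between both heads is the key design choice: the same learned board representation serves move selection and position evaluation, replacing the separate policy and value networks of earlier AlphaGo versions.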

Training

AlphaGo Zero's neural network was trained using TensorFlow, with 64 GPU workers and 19 CPU parameter servers; only four TPUs were used for inference. The network initially knew nothing about Go beyond the rules. Unlike earlier versions of AlphaGo, Zero perceived only the board's stones, rather than relying on a handful of human-programmed rules for recognizing unusual board positions. The AI engaged in reinforcement learning, playing against itself until it could anticipate its own moves and how those moves would affect the game's outcome. [10] In the first three days AlphaGo Zero played 4.9 million games against itself in quick succession. [11] It appeared to develop the skills required to beat top humans within just a few days, whereas the earlier AlphaGo took months of training to reach the same level. [12]
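
A single optimisation step consistent with the loss described in the paper (squared error on the game outcome plus cross-entropy between the network policy and the MCTS visit-count distribution, with L2 regularisation) might look like the hypothetical sketch below. The function name, tensor shapes, and hyperparameters are assumptions, and PolicyValueNet refers to the architecture sketch above, not DeepMind's pipeline.

```python
import torch
import torch.nn.functional as F

# Hypothetical training step on self-play data: target_pi are MCTS
# visit-count distributions, target_z the final game outcomes from the
# current player's perspective (+1 win, -1 loss).
def training_step(net, optimizer, states, target_pi, target_z):
    logits, value = net(states)                    # policy logits, value in [-1, 1]
    value_loss = F.mse_loss(value, target_z)       # (z - v)^2
    policy_loss = -(target_pi * F.log_softmax(logits, dim=1)).sum(dim=1).mean()
    loss = value_loss + policy_loss                # L2 term via optimizer weight_decay
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Example wiring (shapes assume a 19x19 board with 17 feature planes):
# net = PolicyValueNet()
# opt = torch.optim.SGD(net.parameters(), lr=0.01, momentum=0.9, weight_decay=1e-4)
# loss = training_step(net, opt, states, target_pi, target_z)
```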

For comparison, the researchers also trained a version of AlphaGo Zero using human games, AlphaGo Master, and found that it learned more quickly, but actually performed more poorly in the long run. [13] DeepMind submitted its initial findings in a paper to Nature in April 2017, which was then published in October 2017. [1]

Hardware cost

The hardware cost for a single AlphaGo Zero system in 2017, including the four TPUs, has been quoted as around $25 million. [14]

Applications

According to Hassabis, AlphaGo's algorithms are likely to be of the most benefit to domains that require an intelligent search through an enormous space of possibilities, such as protein folding (see AlphaFold) or accurately simulating chemical reactions. [15] AlphaGo's techniques are probably less useful in domains that are difficult to simulate, such as learning how to drive a car. [16] DeepMind stated in October 2017 that it had already started active work on attempting to use AlphaGo Zero technology for protein folding, and stated it would soon publish new findings. [17] [18]

Reception

AlphaGo Zero was widely regarded as a significant advance, even when compared with its groundbreaking predecessor, AlphaGo. Oren Etzioni of the Allen Institute for Artificial Intelligence called AlphaGo Zero "a very impressive technical result" in "both their ability to do it—and their ability to train the system in 40 days, on four TPUs". [10] The Guardian called it a "major breakthrough for artificial intelligence", citing Eleni Vasilaki of Sheffield University and Tom Mitchell of Carnegie Mellon University, who called it an impressive feat and an "outstanding engineering accomplishment" respectively. [16] Mark Pesce of the University of Sydney called AlphaGo Zero "a big technological advance" taking us into "undiscovered territory". [19]

Gary Marcus, a psychologist at New York University, has cautioned that for all we know, AlphaGo may contain "implicit knowledge that the programmers have about how to construct machines to play problems like Go" and will need to be tested in other domains before being sure that its base architecture is effective at much more than playing Go. In contrast, DeepMind is "confident that this approach is generalisable to a large number of domains". [11]

In response to the reports, South Korean Go professional Lee Sedol said, "The previous version of AlphaGo wasn’t perfect, and I believe that’s why AlphaGo Zero was made." On the potential for AlphaGo's development, Lee said he will have to wait and see but also said it will affect young Go players. Mok Jin-seok, who directs the South Korean national Go team, said the Go world has already been imitating the playing styles of previous versions of AlphaGo and creating new ideas from them, and he is hopeful that new ideas will come out from AlphaGo Zero. Mok also added that general trends in the Go world are now being influenced by AlphaGo's playing style. "At first, it was hard to understand and I almost felt like I was playing against an alien. However, having had a great amount of experience, I’ve become used to it," Mok said. "We are now past the point where we debate the gap between the capability of AlphaGo and humans. It’s now between computers." Mok has reportedly already begun analyzing the playing style of AlphaGo Zero along with players from the national team. "Though having watched only a few matches, we received the impression that AlphaGo Zero plays more like a human than its predecessors," Mok said. [20] Chinese Go professional Ke Jie commented on the remarkable accomplishments of the new program: "A pure self-learning AlphaGo is the strongest. Humans seem redundant in front of its self-improvement." [21]

Comparison with predecessors

Configuration and strength [22]

| Versions | Playing hardware [23] | Elo rating | Matches |
|----------|-----------------------|------------|---------|
| AlphaGo Fan | 176 GPUs, [2] distributed | 3,144 [1] | 5:0 against Fan Hui |
| AlphaGo Lee | 48 TPUs, [2] distributed | 3,739 [1] | 4:1 against Lee Sedol |
| AlphaGo Master | 4 TPUs, [2] single machine | 4,858 [1] | 60:0 against professional players; Future of Go Summit |
| AlphaGo Zero (40 days) | 4 TPUs, [2] single machine | 5,185 [1] | 100:0 against AlphaGo Lee; 89:11 against AlphaGo Master |
| AlphaZero (34 hours) | 4 TPUs, single machine [8] | 4,430 (est.) [8] | 60:40 against a 3-day AlphaGo Zero |

AlphaZero

On 5 December 2017, the DeepMind team released a preprint on arXiv introducing AlphaZero, a program that generalizes AlphaGo Zero's approach. Within 24 hours of training, AlphaZero achieved a superhuman level of play in chess, shogi, and Go, defeating the world-champion programs Stockfish and Elmo and a 3-day version of AlphaGo Zero. [8]

AlphaZero (AZ) is a more generalized variant of the AlphaGo Zero (AGZ) algorithm, able to play shogi and chess as well as Go. Differences between AZ and AGZ include: [8]

  - AZ has hard-coded rules for setting search hyperparameters.
  - The neural network is updated continually, rather than being gated on winning an evaluation match against the previous best network.
  - Go (unlike chess and shogi) is symmetric under certain reflections and rotations; AGZ exploited these symmetries to augment its training data, while AZ does not.
  - Chess and shogi (unlike Go) can end in draws, so AZ estimates the expected outcome of a game rather than a simple win probability.

An open-source program, Leela Zero, based on the ideas from the AlphaGo papers, is available. It uses a GPU instead of the TPUs that recent versions of AlphaGo rely on.


References

  1. Silver, David; Schrittwieser, Julian; Simonyan, Karen; Antonoglou, Ioannis; Huang, Aja; Guez, Arthur; Hubert, Thomas; Baker, Lucas; Lai, Matthew; Bolton, Adrian; Chen, Yutian; Lillicrap, Timothy; Fan, Hui; Sifre, Laurent; Driessche, George van den; Graepel, Thore; Hassabis, Demis (19 October 2017). "Mastering the game of Go without human knowledge" (PDF). Nature. 550 (7676): 354–359. Bibcode:2017Natur.550..354S. doi:10.1038/nature24270. ISSN 0028-0836. PMID 29052630. S2CID 205261034. Archived (PDF) from the original on 18 July 2018. Retrieved 2 September 2019.
  2. Hassabis, Demis; Silver, David (18 October 2017). "AlphaGo Zero: Learning from scratch". DeepMind official website. Archived from the original on 19 October 2017. Retrieved 19 October 2017.
  3. "Google's New AlphaGo Breakthrough Could Take Algorithms Where No Humans Have Gone". Yahoo! Finance. 19 October 2017. Archived from the original on 19 October 2017. Retrieved 19 October 2017.
  4. Knapton, Sarah (18 October 2017). "AlphaGo Zero: Google DeepMind supercomputer learns 3,000 years of human knowledge in 40 days". The Telegraph. Archived from the original on 19 October 2017. Retrieved 19 October 2017.
  5. mnj12 (7 July 2021). mnj12/chessDeepLearning. Retrieved 7 July 2021.
  6. "DeepMind AlphaGo Zero learns on its own without meatbag intervention". ZDNet. 19 October 2017. Archived from the original on 20 October 2017. Retrieved 20 October 2017.
  7. https://www.idi.ntnu.no/emner/it3105/materials/neural/silver-2017b.pdf
  8. Silver, David; Hubert, Thomas; Schrittwieser, Julian; Antonoglou, Ioannis; Lai, Matthew; Guez, Arthur; Lanctot, Marc; Sifre, Laurent; Kumaran, Dharshan; Graepel, Thore; Lillicrap, Timothy; Simonyan, Karen; Hassabis, Demis (5 December 2017). "Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm". arXiv:1712.01815 [cs.AI].
  9. Knapton, Sarah; Watson, Leon (6 December 2017). "Entire human chess knowledge learned and surpassed by DeepMind's AlphaZero in four hours". The Telegraph. Archived from the original on 2 December 2020. Retrieved 5 April 2018.
  10. Greenemeier, Larry. "AI versus AI: Self-Taught AlphaGo Zero Vanquishes Its Predecessor". Scientific American. Archived from the original on 19 October 2017. Retrieved 20 October 2017.
  11. "Computer Learns To Play Go At Superhuman Levels 'Without Human Knowledge'". NPR. 18 October 2017. Archived from the original on 20 October 2017. Retrieved 20 October 2017.
  12. "Google's New AlphaGo Breakthrough Could Take Algorithms Where No Humans Have Gone". Fortune. 19 October 2017. Archived from the original on 19 October 2017. Retrieved 20 October 2017.
  13. "This computer program can beat humans at Go—with no human instruction". Science | AAAS. 18 October 2017. Archived from the original on 2 February 2022. Retrieved 20 October 2017.
  14. Gibney, Elizabeth (18 October 2017). "Self-taught AI is best yet at strategy game Go". Nature News. doi:10.1038/nature.2017.22858. Archived from the original on 1 May 2020. Retrieved 10 May 2020.
  15. "The latest AI can work things out without being taught". The Economist. Archived from the original on 19 October 2017. Retrieved 20 October 2017.
  16. Sample, Ian (18 October 2017). "'It's able to create knowledge itself': Google unveils AI that learns on its own". The Guardian. Archived from the original on 19 October 2017. Retrieved 20 October 2017.
  17. "'It's able to create knowledge itself': Google unveils AI that learns on its own". The Guardian. 18 October 2017. Archived from the original on 19 October 2017. Retrieved 26 December 2017.
  18. Knapton, Sarah (18 October 2017). "AlphaGo Zero: Google DeepMind supercomputer learns 3,000 years of human knowledge in 40 days". The Telegraph. Archived from the original on 15 December 2017. Retrieved 26 December 2017.
  19. "How Google's new AI can teach itself to beat you at the most complex games". Australian Broadcasting Corporation . 19 October 2017. Archived from the original on 20 October 2017. Retrieved 20 October 2017.
  20. "Go Players Excited About 'More Humanlike' AlphaGo Zero". Korea Bizwire . 19 October 2017. Archived from the original on 21 October 2017. Retrieved 21 October 2017.
  21. "New version of AlphaGo can master Weiqi without human help". China News Service . 19 October 2017. Archived from the original on 19 October 2017. Retrieved 21 October 2017.
  22. "【柯洁战败解密】AlphaGo Master最新架构和算法,谷歌云与TPU拆解" (in Chinese). Sohu. 24 May 2017. Archived from the original on 17 September 2017. Retrieved 1 June 2017.
  23. Hardware used during training may be substantially more powerful.