AlphaGo

AlphaGo logo

AlphaGo is a computer program developed by Google DeepMind in London to play the board game Go.[1] In October 2015, it became the first computer Go program to beat a professional human Go player without handicaps on a full-sized 19×19 board.[2][3] In March 2016, it beat Lee Sedol in a five-game match, the first time a computer Go program had beaten a 9-dan professional without handicaps.[4] Although it lost to Lee Sedol in the fourth game, Lee resigned the final game, giving a final score of 4 games to 1 in favour of AlphaGo. In recognition of the victory over Lee Sedol, AlphaGo was awarded an honorary 9-dan rank by the Korea Baduk Association.

AlphaGo's algorithm uses a Monte Carlo tree search to find its moves based on knowledge previously "learned" by machine learning, specifically by an artificial neural network (a deep learning method), with extensive training both from human and from computer play.

History and competitions

Go is considered much more difficult for computers to win at than other games such as chess, because its much larger branching factor makes it prohibitively difficult to use traditional AI methods such as alpha–beta pruning, tree traversal and heuristic search.[2][5]
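
The scale of the problem can be seen in a minimal alpha-beta search sketch. The code below is purely illustrative and not taken from any Go engine: even with ideal move ordering, alpha-beta still visits on the order of b^(d/2) positions for branching factor b and depth d, so Go's branching factor of roughly 250 (versus roughly 35 for chess) makes this approach intractable at useful depths.

```python
def alphabeta(node, depth, alpha, beta, maximizing, children, evaluate):
    """Generic alpha-beta search; `children` yields successors, `evaluate` scores leaves."""
    succ = children(node)
    if depth == 0 or not succ:
        return evaluate(node)
    if maximizing:
        value = float("-inf")
        for child in succ:
            value = max(value, alphabeta(child, depth - 1, alpha, beta, False,
                                         children, evaluate))
            alpha = max(alpha, value)
            if alpha >= beta:  # opponent already has a better option: prune
                break
        return value
    value = float("inf")
    for child in succ:
        value = min(value, alphabeta(child, depth - 1, alpha, beta, True,
                                     children, evaluate))
        beta = min(beta, value)
        if beta <= alpha:  # we already have a better option: prune
            break
    return value

# Toy tree: maximizing root "A", minimizing nodes "B" and "C", integer leaves.
TOY = {"A": ["B", "C"], "B": [3, 5], "C": [2, 9]}
best = alphabeta("A", 2, float("-inf"), float("inf"), True,
                 lambda n: TOY.get(n, []) if isinstance(n, str) else [],
                 lambda n: n if isinstance(n, int) else 0)
```

In the toy tree the search can cut off the second leaf under "C" entirely; on a 19×19 Go board, by contrast, the number of legal moves at each node overwhelms such pruning.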

Almost two decades after IBM's computer Deep Blue beat world chess champion Garry Kasparov in their 1997 match, the strongest Go programs using artificial intelligence techniques had only reached about amateur 5-dan level,[6] and still could not beat a professional Go player without handicaps.[2][3][7] In 2012, the software program Zen, running on a four-PC cluster, beat Masaki Takemiya (9p) twice, at handicaps of five and four stones.[8] In 2013, Crazy Stone beat Yoshio Ishida (9p) at a four-stone handicap.[9]

According to David Silver of DeepMind, the AlphaGo research project was formed around 2014 to test how well a neural network using deep learning could compete at Go.[10] AlphaGo represents a significant improvement over previous Go programs. In 500 games against other available Go programs, including Crazy Stone and Zen,[11] AlphaGo running on a single computer won all but one.[12] In a similar matchup, AlphaGo running on multiple computers won all 500 games played against other Go programs, and won 77% of games played against AlphaGo running on a single computer. The distributed version in October 2015 was using 1,202 CPUs and 176 GPUs.[6]

Match against Fan Hui

In October 2015, the distributed version of AlphaGo defeated the European Go champion Fan Hui,[13] a 2-dan (out of a possible 9 dan) professional, five to zero.[3][14] This was the first time a computer Go program had beaten a professional human player on a full-sized board without handicap.[15] The announcement of the news was delayed until 27 January 2016 to coincide with the publication of a paper in the journal Nature[6] describing the algorithms used.[3]

Match against Lee Sedol

AlphaGo played South Korean professional Go player Lee Sedol, ranked 9-dan, one of the best players at Go,[7] with five games taking place at the Four Seasons Hotel in Seoul, South Korea on 9, 10, 12, 13, and 15 March 2016,[16][17] which were video-streamed live.[18] Aja Huang, a DeepMind team member and amateur 6-dan Go player, placed stones on the Go board for AlphaGo, which ran through Google's cloud computing with its servers located in the United States.[19] The match used Chinese rules with a 7.5-point komi, and each side had two hours of thinking time plus three 60-second byoyomi periods.[20] The version of AlphaGo playing against Lee used a similar amount of computing power as was used in the Fan Hui match.[21] The Economist reported that it used 1,920 CPUs and 280 GPUs.[22]

At the time of play, Lee Sedol had the second-highest number of Go international championship victories in the world.[23] While there is no single official method of ranking in international Go, some sources ranked Lee Sedol as the fourth-best player in the world at the time.[24][25] AlphaGo was not specifically trained to face Lee.[26]

The first three games were won by AlphaGo following resignations by Lee Sedol.[27][28] However, Lee beat AlphaGo in the fourth game, winning by resignation at move 180. AlphaGo then secured its fourth win, taking the fifth game by resignation.[29]

The winner's prize was US$1 million. Since AlphaGo won four of the five games, and thus the series, the prize was to be donated to charities, including UNICEF.[30] Lee Sedol received $150,000 for participating in all five games and an additional $20,000 for his win.[20]

On 29 June, at a presentation held at a university in the Netherlands, Aja Huang of the DeepMind team revealed that the problem that occurred during the fourth game of the match against Lee Sedol had been rectified, and that after move 78 (dubbed the "hand of God" by many professionals) the program would now play accurately and maintain Black's advantage. Before the error that resulted in the loss, AlphaGo had been leading throughout the game; Lee's move was credited not with winning the game outright, but with diverting and confusing the program's calculations. Huang explained that AlphaGo's policy network, which finds the most accurate move order and continuation, had not precisely guided AlphaGo to the correct continuation after move 78, because its value network had not ranked Lee Sedol's 78th move as the most likely, so once the move was played AlphaGo could not make the right adjustment to the logical continuation.[31]

Future matches

AlphaGo's next games are planned to take place in early 2017.[32]

Hardware

An early version of AlphaGo was tested on hardware with various numbers of CPUs and GPUs, running in asynchronous or distributed mode. Two seconds of thinking time was given to each move. The resulting Elo ratings are listed below.[6] In matches with more time per move, higher ratings were achieved.

Configuration and performance[6] (pp. 10–11)

Configuration   Search threads   No. of CPUs   No. of GPUs   Elo rating
Single          40               48            1             2,181
Single          40               48            2             2,738
Single          40               48            4             2,850
Single          40               48            8             2,890
Distributed     12               428           64            2,937
Distributed     24               764           112           3,079
Distributed     40               1,202         176           3,140
Distributed     64               1,920         280           3,168
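
The ratings above can be compared using the standard Elo expected-score formula. This is a general property of the rating model, not a calculation specific to AlphaGo:

```python
def elo_expected(r_a, r_b):
    """Expected score of a player rated r_a against one rated r_b (standard Elo model)."""
    return 1.0 / (1.0 + 10.0 ** ((r_b - r_a) / 400.0))

# The 278-point gap between the strongest single-machine configuration (2,890)
# and the strongest distributed configuration (3,168) listed above:
p = elo_expected(3168, 2890)  # roughly 0.83, i.e. about a five-to-one favourite
```

Equal ratings give an expected score of exactly 0.5, and each 400-point gap multiplies the odds in the stronger player's favour by ten.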

In May 2016, Google unveiled its own proprietary hardware "tensor processing units", which it stated had already been deployed in multiple internal projects at Google, including the AlphaGo match against Lee Sedol.[33][34]

Algorithm

As of 2016, AlphaGo's algorithm uses a combination of machine learning and tree search techniques, together with extensive training from both human and computer play. It uses Monte Carlo tree search, guided by a "value network" and a "policy network," both implemented using deep neural network technology.[2][6] A limited amount of game-specific feature-detection pre-processing (for example, to highlight whether a move matches a nakade pattern) is applied to the input before it is sent to the neural networks.[6]
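
The interaction between the policy network and the tree search can be sketched with a PUCT-style selection rule in the spirit of this design. The class layout, constant, and exact formula below are simplifications for exposition, not the published implementation: the policy network's prior biases exploration toward promising moves, while accumulated evaluations (from the value network and rollouts) pull the search toward moves that have tested well.

```python
import math

class Node:
    """One (state, move) edge in the search tree."""
    def __init__(self, prior):
        self.prior = prior       # P(s, a): probability assigned by the policy network
        self.visits = 0          # N(s, a): how often this edge has been searched
        self.value_sum = 0.0     # W(s, a): accumulated evaluation results
        self.children = {}       # move -> Node

    def q(self):
        """Mean evaluation Q(s, a); zero before any visit."""
        return self.value_sum / self.visits if self.visits else 0.0

def select_child(node, c_puct=1.0):
    """Pick the child maximizing Q + U, where U is proportional to prior / (1 + visits)."""
    total = math.sqrt(sum(ch.visits for ch in node.children.values()) + 1)
    def score(child):
        return child.q() + c_puct * child.prior * total / (1 + child.visits)
    return max(node.children.items(), key=lambda item: score(item[1]))
```

Before any evaluations arrive, the rule simply follows the policy network's prior; as visits accumulate, the exploration term shrinks and the observed values dominate.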

The system's neural networks were initially bootstrapped from human gameplay expertise. AlphaGo was initially trained to mimic human play by attempting to match the moves of expert players from recorded historical games, using a database of around 30 million moves.[13] Once it had reached a certain degree of proficiency, it was trained further by being set to play large numbers of games against other instances of itself, using reinforcement learning to improve its play.[2] To avoid "disrespectfully" wasting its opponent's time, the program is specifically programmed to resign if its assessment of win probability falls beneath a certain threshold; for the March 2016 match against Lee, the resignation threshold was set to 20%.[35]
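
The resignation rule described above is simple to state in code. This sketch only restates the reported behaviour; the constant matches the 20% figure cited for the March 2016 match, while the function name is hypothetical:

```python
RESIGN_THRESHOLD = 0.20  # value reported for the March 2016 match against Lee [35]

def should_resign(estimated_win_probability, threshold=RESIGN_THRESHOLD):
    """Resign once the program's own estimate of its winning chances falls below the threshold."""
    return estimated_win_probability < threshold
```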

Style of play

Toby Manning, the match referee for AlphaGo vs. Fan Hui, has described the program's style as "conservative".[36] During AlphaGo's match against Lee Sedol, Korean commentators remarked that the AI's playing style greatly resembled that of the legendary player Lee Changho. The similarity can be attributed to the fact that, like Lee Changho, AlphaGo strongly favours a greater probability of winning by fewer points over a lesser probability of winning by more points.[10]
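
This preference can be illustrated with a toy move-selection sketch; the move names and numbers below are invented for illustration. A player maximizing win probability takes the solid move, while a player maximizing the expected point margin takes the greedy one.

```python
# Invented numbers: two candidate moves, each with an estimated probability of
# winning and an expected winning margin in points.
candidates = {
    "solid_move":  {"win_prob": 0.90, "margin": 1.5},
    "greedy_move": {"win_prob": 0.75, "margin": 12.0},
}

def pick_by_win_probability(moves):
    """The criterion attributed to AlphaGo (and Lee Changho): maximize the chance of winning."""
    return max(moves, key=lambda m: moves[m]["win_prob"])

def pick_by_expected_margin(moves):
    """A margin-maximizing player prefers the larger expected point difference."""
    return max(moves, key=lambda m: moves[m]["margin"])
```

A half-point win and a twenty-point win count the same in the final result, which is why the first criterion tends to produce "conservative"-looking play.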

Responses to 2016 victory against Lee Sedol

AI community

AlphaGo's March 2016 victory was a major milestone in artificial intelligence research.[37] Go had previously been regarded as a hard problem in machine learning that was expected to be out of reach for the technology of the time.[37][38][39] Most experts thought a Go program as powerful as AlphaGo was at least five years away;[40] some experts thought that it would take at least another decade before computers would beat Go champions.[6][41][42] Most observers at the beginning of the 2016 matches expected Lee to beat AlphaGo.[37]

With games such as checkers (which has been "solved" by the Chinook draughts team), chess, and now Go won by computers, victories at popular board games can no longer serve as major milestones for artificial intelligence in the way that they used to. Deep Blue's Murray Campbell called AlphaGo's victory "the end of an era... board games are more or less done and it's time to move on."[37]

When compared with Deep Blue or with Watson, AlphaGo's underlying algorithms are potentially more general-purpose, and may be evidence that the scientific community is making progress towards artificial general intelligence.[10][43] Some commentators believe AlphaGo's victory makes for a good opportunity for society to start discussing preparations for the possible future impact of machines with general-purpose intelligence. (As noted by entrepreneur Guy Suter, AlphaGo itself only knows how to play Go and does not possess general-purpose intelligence: "[It] couldn't just wake up one morning and decide it wants to learn how to use firearms."[37]) In March 2016, AI researcher Stuart Russell stated that "AI methods are progressing much faster than expected, (which) makes the question of the long-term outcome more urgent," adding that "in order to ensure that increasingly powerful AI systems remain completely under human control... there is a lot of work to do."[44] Some scholars, such as Stephen Hawking, had warned (in May 2015, before the matches) that some future self-improving AI could gain actual general intelligence, leading to an unexpected AI takeover; other scholars disagree: AI expert Jean-Gabriel Ganascia believes that "Things like 'common sense'... may never be reproducible",[45] and says "I don't see why we would speak about fears. On the contrary, this raises hopes in many domains such as health and space exploration."[44] Computer scientist Richard Sutton said, "I don't think people should be scared... but I do think people should be paying attention."[46]

Go community

Go is a popular game in China, Japan and Korea, and the 2016 matches were watched by perhaps a hundred million people worldwide.[37][47] Many top Go players characterized AlphaGo's unorthodox plays as seemingly questionable moves that initially befuddled onlookers but made sense in hindsight:[41] "All but the very best Go players craft their style by imitating top players. AlphaGo seems to have totally original moves it creates itself."[37] AlphaGo appeared to have become unexpectedly much stronger, even compared with its October 2015 match,[48] in which a computer had beaten a Go professional for the first time ever without the advantage of a handicap.[49] The day after Lee's first defeat, Jeong Ahram, the lead Go correspondent for one of South Korea's biggest daily newspapers, said "Last night was very gloomy... Many people drank alcohol."[50] The Korea Baduk Association, the organization that oversees Go professionals in South Korea, awarded AlphaGo an honorary 9-dan title for exhibiting creative skills and pushing forward the game's progress.[51]

China's Ke Jie, an 18-year-old generally recognized as the world's best Go player,[24][52] initially claimed that he would be able to beat AlphaGo, but declined to play against it for fear that it would "copy my style".[52] As the matches progressed, Ke Jie went back and forth, stating that "it is highly likely that I (could) lose" after analysing the first three matches,[53] but regaining confidence after AlphaGo displayed flaws in the fourth match.[54]

Toby Manning, the referee of AlphaGo's match against Fan Hui, and Hajin Lee, secretary general of the International Go Federation, both reason that in the future, Go players will get help from computers to learn what they have done wrong in games and improve their skills.[49]

After game two, Lee said he felt "speechless": "From the very beginning of the match, I could never manage an upper hand for one single move. It was AlphaGo's total victory."[55] Lee apologized for his losses, stating after game three that "I misjudged the capabilities of AlphaGo and felt powerless."[37] He emphasized that the defeat was "Lee Se-dol's defeat" and "not a defeat of mankind".[26][45] Lee said his eventual loss to a machine was "inevitable" but stated that "robots will never understand the beauty of the game the same way that we humans do."[45] Lee called his game four victory a "priceless win that I (would) not exchange for anything."[26]

Similar systems

Facebook has also been working on its own Go-playing system, darkforest, also based on combining machine learning and tree search.[36][56] Although a strong player against other computer Go programs, as of early 2016 it had not yet defeated a professional human player.[57] darkforest has lost to CrazyStone and Zen and is estimated to be of similar strength to them.[58]

DeepZenGo, a system developed with support from video-sharing website Dwango and the University of Tokyo, lost 2-1 in November 2016 to Go master Cho Chikun, who holds the record for the largest number of Go title wins in Japan.[59][60]

Example game

AlphaGo (black) v. Fan Hui, Game 4 (8 October 2015), AlphaGo won by resignation.[6]

First 99 moves (96 at 10)
Moves 100-165.

References

  1. "Artificial intelligence: Google's AlphaGo beats Go master Lee Se-dol". BBC News. Retrieved 17 March 2016.
  2. "Research Blog: AlphaGo: Mastering the ancient game of Go with Machine Learning". Google Research Blog. 27 January 2016.
  3. "Google achieves AI 'breakthrough' by beating Go champion". BBC News. 27 January 2016.
  4. "Match 1 - Google DeepMind Challenge Match: Lee Sedol vs AlphaGo". 8 March 2016.
  5. Schraudolph, Nicol N.; Dayan, Peter; Sejnowski, Terrence J., Temporal Difference Learning of Position Evaluation in the Game of Go (PDF)
  6. Silver, David; Huang, Aja; Maddison, Chris J.; Guez, Arthur; Sifre, Laurent; Driessche, George van den; Schrittwieser, Julian; Antonoglou, Ioannis; Panneershelvam, Veda (2016). "Mastering the game of Go with deep neural networks and tree search". Nature. 529 (7587): 484–489. doi:10.1038/nature16961. PMID 26819042.
  7. "Computer scores big win against humans in ancient game of Go". CNN. 28 January 2016. Retrieved 28 January 2016.
  8. "Zen computer Go program beats Takemiya Masaki with just 4 stones!". Go Game Guru. Retrieved 28 January 2016.
  9. "「アマ六段の力。天才かも」囲碁棋士、コンピューターに敗れる 初の公式戦" ["The strength of an amateur 6-dan, maybe a genius": Go professional loses to a computer in first official match] (in Japanese). MSN Sankei News. Retrieved 27 March 2013.
  10. John Ribeiro (14 March 2016). "AlphaGo's unusual moves prove its AI prowess, experts say". PC World. Retrieved 18 March 2016.
  11. "Artificial intelligence breakthrough as Google's software beats grandmaster of Go, the 'most complex game ever devised'". Daily Mail. 27 January 2016. Retrieved 29 January 2016.
  12. "Google AlphaGo AI clean sweeps European Go champion". ZDNet. 28 January 2016. Retrieved 28 January 2016.
  13. Metz, Cade (27 January 2016). "In Major AI Breakthrough, Google System Secretly Beats Top Player at the Ancient Game of Go". WIRED. Retrieved 1 February 2016.
  14. "Special Computer Go insert covering the AlphaGo v Fan Hui match" (PDF). British Go Journal. Retrieved 1 February 2016.
  15. "Première défaite d'un professionnel du go contre une intelligence artificielle" [First defeat of a Go professional against an artificial intelligence]. Le Monde (in French). 27 January 2016.
  16. "Google's AI AlphaGo to take on world No 1 Lee Sedol in live broadcast". The Guardian. 5 February 2016. Retrieved 15 February 2016.
  17. "Google DeepMind is going to take on the world's best Go player in a luxury 5-star hotel in South Korea". Business Insider. 22 February 2016. Retrieved 23 February 2016.
  18. Novet, Jordan (4 February 2016). "YouTube will livestream Google's AI playing Go superstar Lee Sedol in March". VentureBeat. Retrieved 7 February 2016.
  19. "李世乭:即使Alpha Go得到升级也一样能赢" [Lee Sedol: even if AlphaGo is upgraded, I can still win] (in Chinese). JoongAng Ilbo. 23 February 2016. Retrieved 24 February 2016.
  20. "이세돌 vs 알파고, '구글 딥마인드 챌린지 매치' 기자회견 열려" [Lee Sedol vs AlphaGo: press conference held for the 'Google DeepMind Challenge Match'] (in Korean). Korea Baduk Association. 22 February 2016. Retrieved 22 February 2016.
  21. Demis Hassabis [demishassabis] (11 March 2016). "We are using roughly same amount of compute power as in Fan Hui match: distributing search over further machines has diminishing returns" (Tweet). Retrieved 14 March 2016 via Twitter.
  22. "Showdown". The Economist. Retrieved 19 November 2016.
  23. Steven Borowiec (9 March 2016). "Google's AI machine v world champion of 'Go': everything you need to know". The Guardian. Retrieved 15 March 2016.
  24. Rémi Coulom. "Rating List of 2016-01-01". Archived from the original on 18 March 2016. Retrieved 18 March 2016.
  25. "Korean Go master proves human intuition still powerful in Go". The Korea Herald/ANN. 14 March 2016. Retrieved 15 March 2016.
  26. Yoon Sung-won (14 March 2016). "Lee Se-dol shows AlphaGo beatable". The Korea Times. Retrieved 15 March 2016.
  27. "Google's AI beats world Go champion in first of five matches". BBC Online. Retrieved 9 March 2016.
  28. "Google AI wins second Go game against world champion". BBC Online. Retrieved 10 March 2016.
  29. "Google DeepMind AI wins final Go match for 4-1 series win". Engadget. Retrieved 15 March 2016.
  30. "Human champion certain he'll beat AI at ancient Chinese game". AP News. 22 February 2016. Retrieved 22 February 2016.
  31. "黄士杰：AlphaGo李世石人机大战第四局问题已解决" [Aja Huang: the problem in game four of the AlphaGo–Lee Sedol match has been resolved] (in Chinese). 8 July 2016. Retrieved 8 July 2016.
  32. Demis Hassabis (7 November 2016). "Hassabis's message". Demis Hassabis's Twitter account. Retrieved 21 November 2016.
  33. McMillan, Robert (18 May 2016). "Google Isn't Playing Games With New Chip". Wall Street Journal. Retrieved 26 June 2016.
  34. Jouppi, Norm (18 May 2016). "Google supercharges machine learning tasks with TPU custom chip". Google Cloud Platform Blog. Google. Retrieved 26 June 2016.
  35. Cade Metz (13 March 2016). "Go Grandmaster Lee Sedol Grabs Consolation Win Against Google's AI". Wired News. Retrieved 29 March 2016.
  36. Gibney, Elizabeth (27 January 2016). "Google AI algorithm masters ancient game of Go". Nature News & Comment. Retrieved 3 February 2016.
  37. Steven Borowiec; Tracey Lien (12 March 2016). "AlphaGo beats human Go champ in milestone for artificial intelligence". Los Angeles Times. Retrieved 13 March 2016.
  38. Connor, Steve (27 January 2016). "A computer has beaten a professional at the world's most complex board game". The Independent. Retrieved 28 January 2016.
  39. "Google's AI beats human champion at Go". CBC News. 27 January 2016. Retrieved 28 January 2016.
  40. Dave Gershgorn (12 March 2016). "Google's AlphaGo beats world champion in third match to win entire series". Popular Science. Retrieved 13 March 2016.
  41. "Google DeepMind computer AlphaGo sweeps human champ in Go matches". CBC News. Associated Press. 12 March 2016. Retrieved 13 March 2016.
  42. Sofia Yan (12 March 2016). "A Google computer victorious over the world's 'Go' champion". CNN Money. Retrieved 13 March 2016.
  43. "AlphaGo: Google's artificial intelligence to take on world champion of ancient Chinese board game". Australian Broadcasting Corporation. 8 March 2016. Retrieved 13 March 2016.
  44. Mariëtte Le Roux (12 March 2016). "Rise of the Machines: Keep an eye on AI, experts warn". Phys.org. Retrieved 13 March 2016.
  45. Mariëtte Le Roux; Pascale Mollard (8 March 2016). "Game over? New AI challenge to human smarts (Update)". Phys.org. Retrieved 13 March 2016.
  46. Tanya Lewis (11 March 2016). "An AI expert says Google's Go-playing program is missing 1 key feature of human intelligence". Business Insider. Retrieved 13 March 2016.
  47. Choe Sang-hun (16 March 2016). "Google's Computer Program Beats Lee Se-dol in Go Tournament". New York Times. Retrieved 18 March 2016. "More than 100 million people watched the AlphaGo-Lee matches, Mr. Hassabis said."
  48. John Ribeiro (12 March 2016). "Google's AlphaGo AI program strong but not perfect, says defeated South Korean Go player". PC World. Retrieved 13 March 2016.
  49. Gibney, Elizabeth (2016). "Go players react to computer defeat". Nature. doi:10.1038/nature.2016.19255.
  50. "How victory for Google's Go AI is stoking fear in South Korea". New Scientist. 15 March 2016. Retrieved 18 March 2016.
  51. Jee Heun Kahng; Se Young Lee (15 March 2016). "Google artificial intelligence program beats S. Korean Go pro with 4-1 score". Reuters. Retrieved 18 March 2016.
  52. Neil Connor (11 March 2016). "Google AlphaGo 'can't beat me' says China Go grandmaster". The Telegraph (UK). Retrieved 13 March 2016.
  53. "Chinese Go master Ke Jie says he could lose to AlphaGo". The Dong-a Ilbo. Retrieved 17 March 2016.
  54. http://m.hankooki.com/m_sp_view.php?WM=sp&FILE_NO=c3AyMDE2MDMxNDE4MDIzMDEzNjU3MC5odG0=&ref=search.naver.com "...if today's performance was its true capability, then it doesn't deserve to play against me."
  55. Choe Sang-hun (15 March 2016). "In Seoul, Go Games Spark Interest (and Concern) About Artificial Intelligence". New York Times. Retrieved 18 March 2016.
  56. Tian, Yuandong; Zhu, Yan (2015). "Better Computer Go Player with Neural Network and Long-term Prediction". arXiv:1511.06410 [cs.LG].
  57. HAL 90210 (28 January 2016). "No Go: Facebook fails to spoil Google's big AI day". The Guardian. ISSN 0261-3077. Retrieved 1 February 2016.
  58. "Strachey Lecture - Dr Demis Hassabis". The New Livestream. Retrieved 17 March 2016.
  59. "Go master Cho wins best-of-three series against Japan-made AI". The Japan Times Online. 24 November 2016. Retrieved 27 November 2016.
  60. "Humans strike back: Korean Go master bests AI in board game bout". CNET. Retrieved 27 November 2016.

This article is issued from Wikipedia (version of 27 November 2016). The text is available under the Creative Commons Attribution/Share Alike licence, but additional terms may apply for the media files.