[News] Google AI beats a human Go player for the first time in history

Board: Gossiping | Posted 8 years ago (2016/01/28 18:47), edited | Score +79 (81 pushes, 2 boos, 28 arrows)
111 comments, 89 participants, latest in thread 1/4 (see more)
Source: http://goo.gl/zQgUVx (Nature)

Google AI algorithm masters ancient game of Go
Deep-learning software defeats human professional for first time.

A computer has beaten a human professional for the first time at Go — an ancient board game that has long been viewed as one of the greatest challenges for artificial intelligence (AI).

The best human players of chess, draughts and backgammon have all been outplayed by computers. But a hefty handicap was needed for computers to win at Go. Now Google's London-based AI company, DeepMind (acquired by Google in 2014), claims that its machine has mastered the game.

DeepMind's program AlphaGo beat Fan Hui, the European Go champion, five times out of five in tournament conditions, the firm reveals in research published in Nature on 27 January [1]. It also defeated its silicon-based rivals, winning 99.8% of games against the current best programs. The program has yet to play the Go equivalent of a world champion, but a match against South Korean professional Lee Sedol, considered by many to be the world's strongest player, is scheduled for March. "We're pretty confident," says DeepMind co-founder Demis Hassabis.

"This is a really big result, it's huge," says Rémi Coulom, a programmer in Lille, France, who designed a commercial Go program called Crazy Stone. He had thought computer mastery of the game was a decade away.

The IBM chess computer Deep Blue, which famously beat grandmaster Garry Kasparov in 1997, was explicitly programmed to win at the game.
But AlphaGo was not preprogrammed to play Go: rather, it learned using a general-purpose algorithm that allowed it to interpret the game's patterns, in a similar way to how a DeepMind program learned to play 49 different arcade games [2].

This means that similar techniques could be applied to other AI domains that require recognition of complex patterns, long-term planning and decision-making, says Hassabis. "A lot of the things we're trying to do in the world come under that rubric." Examples are using medical images to make diagnoses or treatment plans, and improving climate-change models.

In China, Japan and South Korea, Go is hugely popular and is even played by celebrity professionals. But the game has long interested AI researchers because of its complexity. The rules are relatively simple: the goal is to gain the most territory by placing and capturing black and white stones on a 19 × 19 grid. But the average 150-move game contains more possible board configurations — 10^170 — than there are atoms in the Universe, so it can't be solved by algorithms that search exhaustively for the best move.

Abstract strategy

Chess is less complex than Go, but it still has too many possible configurations to solve by brute force alone. Instead, programs cut down their searches by looking a few turns ahead and judging which player would have the upper hand. In Go, recognizing winning and losing positions is much harder: stones have equal values and can have subtle impacts far across the board.
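The 10^170 figure above can be sanity-checked with a few lines of arithmetic. A minimal sketch in Python; the board size is from the article, while the legal-position caveat and the atom count are standard background figures, not claims made in the post:

```python
import math

# Sanity check of the article's numbers: a Go board is a 19 x 19 grid,
# so there are 361 intersections, each empty, black, or white.
points = 19 * 19
raw_states = 3 ** points          # crude upper bound on board configurations

# Number of decimal digits in 3^361: 173, i.e. roughly 10^172.
digits = math.floor(points * math.log10(3)) + 1
print(digits)                     # 173

# Only a fraction of those raw states are legal positions, which is where
# the article's 10^170 comes from; either way it dwarfs the commonly cited
# ~10^80 atoms in the observable Universe.
print(raw_states > 10 ** 170)     # True
```

Even this loose upper bound makes the article's point: no exhaustive search over board configurations is feasible.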
To interpret Go boards and to learn the best possible moves, the AlphaGo program applied deep learning in neural networks — brain-inspired programs in which connections between layers of simulated neurons are strengthened through examples and experience. It first studied 30 million positions from expert games, gleaning abstract information on the state of play from board data, much as other programs categorize images from pixels. Then it played against itself across 50 computers, improving with each iteration, a technique known as reinforcement learning.

[Translator's note: you can think of it this way — after the network has seen a problem many times, it develops a fast, fuzzy reflex for it; use XX enough times and it takes on XX's shape, so to speak.]

The software was already competitive with the leading commercial Go programs, which select the best move by scanning a sample of simulated future games. DeepMind then combined this search approach with the ability to pick moves and interpret Go boards — giving AlphaGo a better idea of which strategies are likely to be successful. The technique is "phenomenal", says Jonathan Schaeffer, a computer scientist at the University of Alberta in Edmonton, Canada, whose software Chinook solved [3] draughts in 2007. Rather than follow the trend of the past 30 years of trying to crack games using computing power, DeepMind has reverted to mimicking human-like knowledge, albeit by training, rather than by being programmed, he says. The feat also shows the power of deep learning, which is going from success to success, says Coulom. "Deep learning is killing every problem in AI."
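The idea of "connections strengthened through examples" can be shown with a toy single simulated neuron (a perceptron). This is nothing like AlphaGo's actual deep networks; the three-number move features and expert labels below are invented purely for illustration:

```python
# Toy illustration of "connections strengthened through examples": a single
# simulated neuron (a perceptron) learns to prefer moves an "expert" liked.
# The feature vectors and labels are made up for this sketch and have
# nothing to do with AlphaGo's real board inputs.
examples = [
    ([1.0, 0.0, 1.0], 1),   # label 1: a move the expert played
    ([0.0, 1.0, 0.0], 0),   # label 0: a move the expert avoided
    ([1.0, 1.0, 1.0], 1),
    ([0.0, 0.0, 1.0], 0),
]

w = [0.0, 0.0, 0.0]          # connection strengths, all zero before training
lr = 0.5                     # learning rate

def score(x):
    """Weighted sum of a move's features under the current weights."""
    return sum(wi * xi for wi, xi in zip(w, x))

for _ in range(20):          # each pass over the examples adjusts the weights
    for x, label in examples:
        pred = 1 if score(x) > 0 else 0
        err = label - pred   # perceptron rule: adjust only on mistakes
        w = [wi + lr * err * xi for wi, xi in zip(w, x)]

# After training, the neuron separates the two classes of moves.
for x, label in examples:
    assert (1 if score(x) > 0 else 0) == label
```

AlphaGo stacks many such layers with millions of weights, learns them first from expert positions and then refines them by self-play, but the underlying mechanism — repeated examples strengthening connections — is the same in spirit.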
AlphaGo plays in a human way, says Fan. "If no one told me, maybe I would think the player was a little strange, but a very strong player, a real person." The program seems to have developed a conservative (rather than aggressive) style, adds Toby Manning, a lifelong Go player who refereed the match.

Google's rival firm Facebook has also been working on software that uses machine learning to play Go. Its program, called darkforest, is still behind commercial state-of-the-art Go AI systems, according to a November 2015 preprint [4].

Hassabis says that many challenges remain in DeepMind's goal of developing a generalized AI system. In particular, its programs cannot yet usefully transfer their learning about one system — such as Go — to new tasks; a feat that humans perform seamlessly. "We've no idea how to do that. Not yet," Hassabis says.

Go players will be keen to use the software to improve their game, says Manning, although Hassabis says that DeepMind has yet to decide whether it will make a commercial version.

AlphaGo hasn't killed the joy of the game, Manning adds. Strap lines boasting that Go is a game that computers can't win will have to be changed, he says.
"But just because some software has got to a strength that I can only dream of, it's not going to stop me playing."

--
Sent from my Windows
--
※ Origin: PTT (ptt.cc), from: 122.146.86.153
※ Article URL: https://www.ptt.cc/bbs/Gossiping/M.1453978052.A.BAB.html

01/28 18:48, 1F: Wins the scoring title a few times and acts all high and mighty
01/28 18:48, 2F: It's over, the Terminator is about to be born
01/28 18:49, 3F: Useless
01/28 18:49, 4F: Insane
01/28 18:50, 5F: It plays against itself lol
01/28 18:50, 6F: R幦i
01/28 18:50, 7F: Beat Asia's top players first. Western Go is about the level of Asian chess
01/28 18:50, 8F: "Use XX enough times and it takes on XX's shape"... you translated it that way on purpose, right? XDD
01/28 18:51, 9F: "Just a piece of software" XDDD
01/28 18:51, 10F: Read the article, it says it clearly: this AI wasn't designed specifically for Go
01/28 18:51, 11F: Still, beating the human brain feels like only a matter of time
01/28 18:51, 12F: It can handle other games too, impressive
01/28 18:52, 13F: Just a piece of software
01/28 18:52, 14F: Humans make mistakes; in a game like Go that demands precise counting of territory, an AI can do better than people
01/28 18:52, 15F: Just a piece of software
01/28 18:53, 16F: Humans can pull weird moves, and an endless supply of them XDDD
01/28 18:53, 17F: Skynet is about to be born
01/28 18:53, 18F: OK, no need to be too surprised....
01/28 18:53, 19F: Talk to me after it beats Sai
01/28 18:54, 20F: European champion? Good thing it wasn't Japan or Korea
(71 more comments and 69 more paragraphs not shown)
01/28 21:52, 92F: Besides, dan rank these days doesn't reflect real strength; plenty of retired 8-9 dans play like total pushovers
01/28 22:10, 93F: Do those amateurs skip going pro because there's no money in it?
01/28 22:17, 94F: Some because of age, some would rather be a big fish in a small pond, some for the money
01/28 22:18, 95F: "Computers face a big obstacle to winning at Go"
01/28 22:19, 96F: Amateur tournaments on both sides of the strait pay well these days; low-ranked pros can only get by teaching
01/28 22:30, 97F: Not weak, though

01/28 22:35, 99F: Beat Sai first, then brag
01/28 22:36, 100F: A European Go champion would just get crushed in Asia, OK? Beat Japan or Korea first
01/28 22:38, 101F: Just making professional 1 dan is already very impressive~
01/28 22:43, 102F: It wasn't written specifically for Go, that's insane
01/28 23:25, 103F: What's "R幦i"?
01/28 23:35, 104F: Players lose some strength when they can't touch the stones; build a robot first
01/29 00:28, 105F: Upvote for the translation
01/29 07:29, 106F: Skynet is coming
01/29 08:52, 107F: "Just a piece of software" deserves a highlight XDDDDD
01/29 09:49, 108F: "The shape" XDDDD so it's a NN after all
01/29 09:50, 109F: There are plenty of neural-network examples online
01/30 01:16, 110F: Beats a European and acts all high and mighty XDDDDD
02/04 15:34, 111F: Machines are going to rule the world
Article code (AID): #1MgV74kh (Gossiping)