AlphaGo: How Far Will Artificial Intelligence Go?
AlphaGo vs Lee Sedol
From March 9 to March 15, 2016, a special five-game Go match took place in Seoul, South Korea. The two players were Lee Sedol and AlphaGo. Lee is a world-famous Korean professional Go player who was ranked fourth in the world before the match, and AlphaGo is a computer Go program developed by Google DeepMind.
The winner of the match was offered a $1 million prize. AlphaGo won every game but the fourth, and Lee received $170,000 after the match. Google DeepMind donated its prize money to charities, including UNICEF, and to Go organizations, according to the Associated Press.
This was not merely a match for money, but a contest between human beings and artificial intelligence.
Artificial intelligence, or AI, is not a new term. AI is the creation of computer software capable of intelligent behavior. Many movies and TV shows have featured AI, especially in recent years, including The Terminator, Ex Machina, The Machine, Black Mirror, and Person of Interest.
A Century Match
Although some media called this Go match a “Century Match,” it was not the first contest between a human player and a machine. In 1997, Deep Blue defeated the reigning world chess champion, Garry Kasparov. Deep Blue, developed by IBM, has been considered a milestone in AI history.
Elon Musk, an early investor in DeepMind, tweeted on March 10, “Congrats to DeepMind! Many experts in the field thought AI was 10 years away from achieving this.”
Go differs from chess in that it is far more difficult for a computer to calculate, because of the game's complexity and the near-endless number of ways it can unfold.
“There are many more possibilities in Go. For example, there are 361 possible moves at the very first step of a Go game,” said Fei Wang, Vice President of the Chinese Association of Automation, in People's Daily. “Moreover, the strategy and choice behind every move depend to some extent on a player's experience and intuition. It is therefore hard for a computer to evaluate which side is ahead or behind in a game. Go has long been called the Apollo program of the AI field.”
Is AI smarter than humans?
Strategy and evaluation functions contributed to AlphaGo's victory. The distributed version of AlphaGo combined Monte Carlo tree search (MCTS) with two deep neural networks, running on 1,202 CPUs and 176 GPUs. MCTS is a heuristic search algorithm for certain kinds of decision processes, most notably those arising in game play.
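The four classic MCTS phases (selection, expansion, simulation, and backpropagation) can be illustrated on a game far simpler than Go. The sketch below is a hypothetical toy example, not DeepMind's code: it applies plain MCTS with the standard UCB1 selection rule to a miniature Nim game, in which players alternately take one or two stones and whoever takes the last stone wins.

```python
import math
import random

random.seed(0)  # make the toy search reproducible

class Node:
    """One node of the search tree for the toy Nim game."""
    def __init__(self, stones, parent=None, move=None):
        self.stones = stones      # stones remaining; a player is now to move
        self.parent = parent
        self.move = move          # the move that led to this node
        self.children = []
        self.visits = 0
        self.wins = 0.0           # wins from the view of the player who just moved

    def untried_moves(self):
        tried = {c.move for c in self.children}
        return [m for m in (1, 2) if m <= self.stones and m not in tried]

    def best_child(self, c=1.4):
        # UCB1: trade off average win rate against an exploration bonus
        return max(self.children, key=lambda ch:
                   ch.wins / ch.visits
                   + c * math.sqrt(math.log(self.visits) / ch.visits))

def rollout(stones):
    """Play random moves to the end; return +1 if the player to move
    at the start of the rollout wins, else -1."""
    turn = 1
    while True:
        stones -= random.choice([m for m in (1, 2) if m <= stones])
        if stones == 0:
            return turn           # this player took the last stone
        turn = -turn

def mcts_move(stones, iterations=3000):
    root = Node(stones)
    for _ in range(iterations):
        node = root
        # 1. Selection: walk down while the node is fully expanded
        while not node.untried_moves() and node.children:
            node = node.best_child()
        # 2. Expansion: add one unexplored child, if any remain
        moves = node.untried_moves()
        if moves:
            m = random.choice(moves)
            child = Node(node.stones - m, parent=node, move=m)
            node.children.append(child)
            node = child
        # 3. Simulation: random playout, scored for the player who just moved
        if node.stones == 0:
            result = 1            # that player took the last stone and won
        else:
            result = -rollout(node.stones)
        # 4. Backpropagation: flip the viewpoint at each level up the tree
        while node is not None:
            node.visits += 1
            node.wins += (1 + result) / 2
            result = -result
            node = node.parent
    # Recommend the most-visited move at the root
    return max(root.children, key=lambda ch: ch.visits).move
```

With a few thousand playouts, the search reliably discovers the winning strategy of always leaving the opponent a multiple of three stones, despite knowing nothing about the game beyond its rules.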
The developers of AlphaGo are not necessarily masters of Go, but they are masters of training. They trained AlphaGo's neural networks on some 30 million positions from human Go games, and then let it play against itself. As a result, the machine could absorb an enormous number of game records and strategies in a short time.
“The strategy function does not know whether a move is good or not, just like us,” said Jiaqi Liu, a researcher at the Institute of Automation of the Chinese Academy of Sciences. “It tries to find a solution by predicting the opponent, while the evaluation function computes and arrives at the best solution based on the whole game.”
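Liu's two functions correspond to the way AlphaGo scores candidate moves during search: the strategy (policy) network supplies a prior probability for each move, and the evaluation (value) side supplies an estimate of the whole game. A minimal sketch of that scoring rule, roughly following the selection formula described in DeepMind's Nature paper (the numbers below are illustrative only):

```python
import math

def puct_score(q_value, prior, parent_visits, child_visits, c_puct=5.0):
    """AlphaGo-style move score: q_value is the evaluation term
    (what the search has measured so far), while the exploration
    bonus steers the search toward moves the policy network rates
    highly (prior) that have not yet been visited much."""
    exploration = c_puct * prior * math.sqrt(parent_visits) / (1 + child_visits)
    return q_value + exploration

# Early in the search, a move the policy network likes (high prior)
# outscores an equally unexplored move it dislikes, so the tree
# grows first along humanlike candidate moves.
```

As a move accumulates visits, the exploration bonus shrinks and its measured value takes over, which is how the evaluation function ultimately decides among the candidates the strategy function proposed.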
“Computers are good at doing very complex computations,” said Janet Hung, Assistant Professor of Computer Science at Eastern New Mexico University. “If playing chess involves a lot of computations, then it will not surprise me to see it beat human brains.”
“Computers will remember every move made, as long as the data exists, but human memories last only so long,” Hung added. “If properly programmed, computers seldom make mistakes. They do their job faithfully without being affected by their environment or by emotions.”
“There are many differences between the Deep Blue match and the AlphaGo match beyond the type of game and the competition format,” said Yifei Yan, a former Chinese professional chess player. “Deep Blue relied on exhaustive brute-force search 20 years ago, whereas AlphaGo used only its standard version in the match with Lee Sedol.”
In the recent match, the standard (non-distributed) version of AlphaGo ran on a single machine with 48 CPUs and eight GPUs. Such a machine cannot search across the whole distributed network; instead, it plays against a human using the knowledge and evaluation it has learned on its own.
“This is great progress in AI game playing,” said Yan. “This match surprised me because of the increasingly rapid development of artificial intelligence.”
“Computers cannot program themselves; they can only perform tasks that have been pre-programmed. Designing a ‘thinking machine’ is much more difficult than you might imagine. I won't worry about the world being ruled by computers.”