If you sit down to play an old-school board game like chess this holiday season, it might be humbling to keep in mind just how bad you’d be against a computer. In fact, computers have shown they’re capable of taking humanity’s lunch money at board games for a while now. Remember Deep Blue versus Garry Kasparov in 1997? The computer won. Or AlphaGo against Lee Sedol, in South Korea, at the game of Go, in 2016? Ditto.
Now, the same team that created that Go-playing bot is celebrating something more formidable: an artificial intelligence system capable of teaching itself—and winning at—three different games. The AI is a single network that works across multiple games, and that generalizability makes it more impressive, since it might be able to learn other, similar games as well.
They call it AlphaZero, and it knows chess, shogi (also known as Japanese chess), and Go, a complex board game where black and white stones face off on a large grid. All of these games fall into the category of “full information” or “perfect information” contests—each player can see the entire board and has access to the same information. That’s different from games like poker, where you don’t know what cards an opponent is holding.
“AlphaZero just learns completely on its own, just by playing against itself,” says Julian Schrittwieser, a software engineer at DeepMind, which created it. “And we get a completely new view of the game that is not influenced by how humans traditionally play the game.” Schrittwieser is a co-author on a new study in Science describing AlphaZero, which was first announced late last year.
Since AlphaZero is “more general” than the AI that won at Go, in the sense that it can play multiple games, “it hints that we have a good chance to extend this to even more real-world problems that we might want to tackle later,” Schrittwieser says.