Artificial Intelligence Beats Humans in Board Games, What’s Next?

When a person's intelligence is tested, there are exams. IQ tests, general knowledge quizzes, SATs. When artificial intelligence is tested, there are games. Checkers, chess, and Go.

But what happens when computer programs beat humans at all of those games? This is the question AI experts must ask after a Google-developed program called AlphaGo defeated a world champion Go player in two consecutive games this week.

Board games have long been a yardstick for advances in AI, but the era of board game testing has come to an end, said Murray Campbell, an IBM research scientist who was part of the team that developed Deep Blue, the first computer program to beat a world chess champion.

"Games are fun and they're easy to measure," said Campbell. "It's clear who won and who lost, and you always have the human benchmark," he said. "Can you do better than a human?"

For checkers, chess, and now Go, the answer is a resounding yes. Computer algorithms beat world champion-level human players in checkers and chess in the 1990s.

Go -- an ancient board game developed in China that is more complex than chess -- was seen as one of the last board game hurdles.

Board games, Campbell said, were perfect tests because they have clear rules and nothing is hidden from players. The real world is much messier and full of unknowns. What's next, it seems, is for AI to get messy.

With AI having conquered what experts call "complete information" games -- the kind in which players can see what their opponents are doing -- Tuomas Sandholm, a professor at Carnegie Mellon University who studies artificial intelligence, said the next step is "incomplete information games" like poker.
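
To make the distinction concrete, here is a minimal sketch in Python of why complete-information games suit machines so well: with nothing hidden, a program can in principle search the entire game tree. Tic-tac-toe is small enough to search exhaustively with plain minimax. This is only an illustration, not AlphaGo's actual method (which pairs neural networks with tree search), and the function names here (winner, minimax) are mine, invented for the example.

```python
# Minimal sketch (not AlphaGo's method): exhaustive minimax on tic-tac-toe,
# a tiny complete-information game. Because every position and every legal
# move is visible to both players, the whole game tree can be searched and
# perfect play recovered.

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    lines = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals
    for a, b, c in lines:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (score, best_move): +1 if X wins with best play, -1 if O wins, 0 for a draw."""
    w = winner(board)
    if w == 'X':
        return 1, None
    if w == 'O':
        return -1, None
    moves = [i for i, cell in enumerate(board) if cell == ' ']
    if not moves:
        return 0, None  # board full with no winner: draw
    best_move = None
    best_score = -2 if player == 'X' else 2  # X maximizes, O minimizes
    for m in moves:
        child = board[:m] + player + board[m + 1:]
        score, _ = minimax(child, 'O' if player == 'X' else 'X')
        if (player == 'X' and score > best_score) or (player == 'O' and score < best_score):
            best_score, best_move = score, m
    return best_score, best_move

# With perfect play from both sides, tic-tac-toe is a draw (score 0).
print(minimax(' ' * 9, 'X'))
```

Go is far too large for this kind of brute-force search, which is why it held out so long, but the principle is the same: every relevant fact about the position is on the board for both players to see.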

"The game of two-player-limit Texas hold 'em poker has almost been solved," said Sandholm, who described "solving" a game as finding the optimal way of...
