Algorithm Teaches Itself To Be a Better Gamer than You

Playing Breakout on an old Atari 2600 might not seem like cutting-edge computing, but it is when a computer algorithm learns on its own how to play that game and others as well as humans do. In a paper published Thursday in the journal Nature, researchers from Google-owned DeepMind describe how their "deep Q-network," or DQN, outperformed all previous machine-learning algorithms on 43 of 49 classic Atari video games.

Starting with just the pixels on the game screen, a set of available actions and a reward system as an incentive for earning higher game scores, DQN taught itself games such as Breakout, Pong, Space Invaders, River Raid, Q*bert and the racing game Enduro. In half of the games, the algorithm "learned" how to play at "more than 75 percent of the level of a professional human player."
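At its core, DQN pairs a deep neural network with Q-learning, a trial-and-error method that estimates how much future reward each action is worth. The sketch below is a rough illustration only, not DeepMind's implementation: it swaps the network for a toy lookup table, and the action names and constants are hypothetical.

    import random

    ACTIONS = ["left", "right", "fire"]      # hypothetical joystick actions
    ALPHA, GAMMA, EPSILON = 0.1, 0.99, 0.1   # learning rate, discount, exploration rate

    q_table = {}  # (state, action) -> estimated future reward; stands in for the network

    def q(state, action):
        return q_table.get((state, action), 0.0)

    def choose_action(state):
        # Epsilon-greedy: usually take the best-known action, occasionally explore.
        if random.random() < EPSILON:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: q(state, a))

    def update(state, action, reward, next_state):
        # Q-learning update: nudge the estimate toward the observed reward plus
        # the discounted value of the best action available in the next state.
        best_next = max(q(next_state, a) for a in ACTIONS)
        q_table[(state, action)] = q(state, action) + ALPHA * (
            reward + GAMMA * best_next - q(state, action)
        )

Played over many games, this loop gradually pulls the value estimates for score-increasing actions upward, which is how a reward signal alone can teach an agent to play.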

DeepMind, founded in 2011 and based in London, was acquired by Google in early 2014 (reports put the sale price at between $400 million and $650 million). The company researches machine learning and artificial intelligence, a field in which Google has long been interested.

An Eye on Smarter Google Apps

Describing the new game-learning research Wednesday in a post on Google's Research Blog, DeepMind's Dharshan Kumaran and Demis Hassabis said DQN could lead to smarter computing with practical, everyday applications.

"This work offers the first demonstration of a general purpose learning agent that can be trained end-to-end to handle a wide variety of challenging tasks, taking in only raw pixels as inputs and transforming these into actions that can be executed in real-time," Kumaran and Hassabis said. "This kind of technology should help us build more useful products -- imagine if you could ask the Google app to complete any kind of complex task ('Okay, Google, plan me a great...
