Google-developed AI teaches itself to play retro games

Breakout success.

Scientists have built the first computer program that can teach itself a variety of tasks - even playing retro video games.


The program was built by British outfit DeepMind, a company founded in 2010 by ex-Bullfrog developer Demis Hassabis and bought up last year by Google for £400m.

Hassabis began his career aged 17 as a designer on Syndicate, before becoming lead programmer for Theme Park, working alongside Peter Molyneux.

His company's creation was able to learn 49 classic Atari games after being left to work out how to play them on its own, The Guardian reports.

The experiment differed from previous attempts as it relied on neither random trial-and-error movements nor pre-programmed rules.

Instead, the program used visual inputs to watch frames of the game and determine how to react appropriately.

"It's a bit like a baby opening their eyes and seeing the world for the first time," Hassabis explained. "We've built algorithms that learn from the ground up."
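The core idea behind DeepMind's system is reinforcement learning: the program tries actions, receives a score as feedback, and gradually learns which action is most valuable in each situation. Below is a minimal, illustrative sketch of the underlying Q-learning update rule. It is not DeepMind's implementation - the real system (a "deep Q-network") uses a neural network over raw pixels rather than a lookup table - and the states, actions and rewards here are invented for demonstration.

```python
# Minimal tabular Q-learning sketch (illustrative only; DeepMind's DQN
# replaces this lookup table with a deep neural network over game pixels).
# State names, actions and rewards below are made up for demonstration.
from collections import defaultdict

def q_update(q, state, action, reward, next_state, actions,
             alpha=0.5, gamma=0.9):
    """One Q-learning step: nudge Q(state, action) towards the observed
    reward plus the discounted value of the best next action."""
    best_next = max(q[(next_state, a)] for a in actions)
    q[(state, action)] += alpha * (reward + gamma * best_next
                                   - q[(state, action)])

q = defaultdict(float)                 # unseen (state, action) pairs start at 0
actions = ["left", "stay", "right"]

# Pretend the paddle moved right and the ball broke a brick (+1 reward).
q_update(q, state="ball_right", action="right", reward=1.0,
         next_state="ball_left", actions=actions)
print(q[("ball_right", "right")])      # 0.5: half the reward, after one update
```

Repeated over millions of frames, updates like this are how a score-chasing program can discover strategies - such as the Breakout tactic described below - that no human programmed in.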

Given two weeks to try out Breakout, for example, the program figured out strategies that not only completed levels but beat them in the quickest times - by tunnelling through one side of the brick wall so the ball bounced along the top, clearing bricks from above.

"One day machines will be capable of some form of creativity, but we're not there yet," Hassabis concluded. "But we're decades away from any sort of technology that we need to worry about."



About the author

Tom Phillips

News Editor

Tom is Eurogamer's news editor. He writes lots of news, some of the puns and all the stealth Destiny articles.

