Google-developed AI teaches itself to play retro games
Scientists have built the first computer program that can teach itself a variety of tasks - even playing retro video games.
The program was built by British outfit DeepMind, a company founded in 2011 by ex-Bullfrog developer Demis Hassabis and acquired by Google last year for £400m.
Hassabis began his career aged 17 as a designer on Syndicate before becoming lead programmer on Theme Park, working alongside Peter Molyneux.
His company's creation was able to learn 49 classic Atari games after being left to work out how to play them for itself, The Guardian reports.
The experiment differed from previous attempts in that it relied on neither random trial-and-error movements nor any pre-programmed rules. Instead, the program took raw frames of the game as visual input and worked out for itself how to react appropriately.
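The idea of an agent learning purely from observations and rewards, with no hand-coded rules, can be illustrated with a toy example. The sketch below is NOT DeepMind's system, which paired a deep convolutional network with Q-learning over raw pixel frames; it is a minimal tabular Q-learning agent on a hypothetical five-cell corridor, invented here purely to show learning from reward alone.

```python
import random

# Hypothetical toy environment: a 5-cell corridor. The agent starts at
# cell 0 and earns a reward of 1.0 only on reaching cell 4. It is told
# nothing about the goal - it must discover it through trial and reward.
N_STATES = 5
ACTIONS = [-1, +1]  # move left or right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1  # learning rate, discount, exploration

def step(state, action):
    """Apply an action; return (next_state, reward, done)."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    return nxt, (1.0 if nxt == N_STATES - 1 else 0.0), nxt == N_STATES - 1

def greedy(q, state):
    """Pick the highest-valued action, breaking ties at random."""
    vals = [q[(state, a)] for a in ACTIONS]
    best = max(vals)
    return random.choice([a for a, v in zip(ACTIONS, vals) if v == best])

def train(episodes=500, seed=0):
    random.seed(seed)
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        state, done = 0, False
        while not done:
            # epsilon-greedy: mostly exploit current knowledge, sometimes explore
            action = random.choice(ACTIONS) if random.random() < EPSILON else greedy(q, state)
            nxt, reward, done = step(state, action)
            best_next = max(q[(nxt, a)] for a in ACTIONS)
            # Q-learning update: nudge the estimate toward the observed reward
            # plus the discounted value of the best follow-up action
            q[(state, action)] += ALPHA * (reward + GAMMA * best_next - q[(state, action)])
            state = nxt
    return q

q = train()
# After training, the learned policy should move right in every non-terminal cell
policy = [greedy(q, s) for s in range(N_STATES)]
```

DeepMind's contribution was to replace the lookup table `q` with a deep neural network fed by screen pixels, so the same learn-from-reward loop could scale to games the agent had never been told the rules of.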
"It's a bit like a baby opening their eyes and seeing the world for the first time," Hassabis explained. "We've built algorithms that learn from the ground up."
Given two weeks to try out Breakout, for example, the program managed to figure out strategies that not only completed levels but cleared them in the quickest times - by knocking the ball through one side of the brick wall so it bounced along the top, demolishing rows from behind.
"One day machines will be capable of some form of creativity, but we're not there yet," Hassabis concluded. "But we're decades away from any sort of technology that we need to worry about."