A computer has just beaten the world champion of Go.
Google's DeepMind AlphaGo program beat South Korea's Lee Se-dol in the first of a series of games in Seoul, the BBC reports.
It's been described as a landmark victory for artificial intelligence, which had long struggled with the 3,000-year-old Chinese game.
Go is considered more complex than chess because of the huge number of possible moves at each turn. As a result, an AI needs something closer to human "intuition" to win, rather than relying on brute-force search.
The game involves placing black and white stones on a 19x19 grid. The aim is to surround more territory than your opponent, capturing enemy stones by enclosing them. While the rules are simpler than chess, a player typically has a choice of around 200 legal moves each turn, compared with about 20 in chess.
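That gap in move options compounds quickly. A rough back-of-the-envelope sketch, using the average figures quoted above (~20 moves per turn in chess, ~200 in Go), shows how fast the number of possible game continuations diverges:

```python
# Rough illustration of why Go's game tree dwarfs chess's, using the
# approximate average branching factors quoted above: ~20 legal moves
# per turn in chess versus ~200 in Go.
def sequences(branching_factor, plies):
    """Approximate number of distinct move sequences after `plies` moves."""
    return branching_factor ** plies

for plies in (2, 4, 6):
    print(f"after {plies} moves: chess ~{sequences(20, plies):,}, "
          f"Go ~{sequences(200, plies):,}")
```

After just six moves, chess has roughly 64 million possible sequences while Go has around 64 trillion - far too many to search exhaustively, which is why pattern recognition matters so much.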
Google's AlphaGo was developed by British artificial intelligence company DeepMind, which was bought by Google in 2014 for £400m. DeepMind was founded by Demis Hassabis, who began his career aged 17 as a designer on Syndicate, before becoming lead programmer for Theme Park, working alongside Peter Molyneux.
We reported on DeepMind last year after it built the first computer program able to teach itself a variety of tasks including playing retro video games.
AlphaGo was first trained on common patterns found in a large database of past games. It then played against itself millions and millions of times - getting slightly better each game by learning from its mistakes. This is called "machine learning".
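The core idea of improving through self-play can be shown in miniature. The sketch below is emphatically not AlphaGo's actual method (which pairs deep neural networks with Monte Carlo tree search); it is a toy illustration of the same loop - play yourself repeatedly and reinforce moves that led to wins - applied to a simple stone-taking game invented here for the example:

```python
import random

# Toy illustration of learning from self-play - not AlphaGo's actual
# algorithm, just the core loop: play against yourself many times and
# reinforce the moves that ended in wins.
#
# Made-up game for the demo: one pile of stones; players alternate
# taking 1-3 stones, and whoever takes the last stone wins.

random.seed(0)
wins = {}  # (stones_left, stones_taken) -> (games_won, games_played)

def choose(stones, explore=0.1):
    """Pick how many stones to take, mostly greedily by past win rate."""
    moves = [t for t in (1, 2, 3) if t <= stones]
    if random.random() < explore:
        return random.choice(moves)  # occasionally try something new
    def rate(t):
        won, played = wins.get((stones, t), (0, 1))
        return won / played
    return max(moves, key=rate)

def play(start=7):
    """Play one self-play game; return the move history and the winner."""
    history, stones, player = [], start, 0
    while stones > 0:
        move = choose(stones)
        history.append((player, stones, move))
        stones -= move
        player = 1 - player
    return history, 1 - player  # the player who took the last stone wins

# Self-play training: replay the winner's and loser's moves into the table.
for _ in range(20000):
    history, winner = play()
    for player, stones, move in history:
        won, played = wins.get((stones, move), (0, 0))
        wins[(stones, move)] = (won + (player == winner), played + 1)

# With perfect play you should leave your opponent a multiple of four;
# the greedy policy learns this purely from win statistics.
for s in (5, 6, 7):
    print(f"from {s} stones, learned move: {choose(s, explore=0)}")
```

No move was ever labelled "good" or "bad" by a human - the preference for leaving the opponent a multiple of four emerges entirely from the win/loss statistics, which is the essence of what the article calls learning from its mistakes.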
Hassabis expressed his delight at AlphaGo's victory on Twitter.
Back in 2011 Eurogamer's Christian Donlan investigated The Path of Go, an Xbox Live Arcade version of traditional Go. It's well worth a read.
You can watch the match between AlphaGo and Lee Se-dol in the video below.