Scientists at the University of Texas at Austin and in Romania have created game-playing artificial intelligences that fooled a panel of judges into believing they were human, based on their behaviour in Unreal Tournament 2004, Phys.org has reported.
The winners of 2K Games' BotPrize - a competition, held at the IEEE Conference on Computational Intelligence and Games, to build the most human-like AI - were the AIs UT^2 and MirrorBot. The former was created by University of Texas professor Risto Miikkulainen alongside doctoral students Jacob Schrum and Igor Karpov, while the latter was programmed by Romanian computer scientist Mihai Polceanu. The two teams split the $7,000 prize.
"The idea is to evaluate how we can make game bots, which are non-player characters (NPCs) controlled by AI algorithms, appear as human as possible," explained Miikkulainen.
To test this, the bots faced off in a competitive match made up of half bots and half humans. In addition to the usual arsenal of weapons, each player carried a "judging gun" that could be used to tag opponents as either human or robot. The bot tagged as human most often by the judges was named the winner.
UT^2 and MirrorBot tied for top honours, each achieving a humanness rating of 52 per cent on the Turing test - remarkably high, given that actual humans average around 42 per cent.
The test was proposed in 1950 by computer scientist Alan Turing, who argued that since we'll never be able to truly peer into a machine's hypothetical consciousness, the best way to gauge its intelligence is to see whether it can fool us into believing it's human.
"When this 'Turing test for game bots' competition was started, the goal was 50 per cent humanness," said Miikkulainen. "It took us five years to get there, but that level was finally reached last week."
So, what does it take to appear human? Well, as Alexander Pope once said, "to err is human." Thus it should come as no surprise that the most human-like bots were the ones that made mistakes.
"People tend to tenaciously pursue specific opponents without regard for optimality," said Schrum. "When humans have a grudge, they'll chase after an enemy even when it's not in their interests. We can mimic that behaviour."
Some of the bots' behaviour is modelled on what they observe in human players, but their actual battle patterns are developed through a process called neuroevolution, in which the AI's neural networks are honed by a survival-of-the-fittest gauntlet inspired by biological evolution. Networks exhibiting the more desirable behaviours survive, while the rest are tossed aside and replaced by copies of the fitter ones and by "offspring" created through random mutations of the survivors.
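For the curious, the evolutionary loop described above can be sketched in a few lines of Python. This is a toy illustration, not the researchers' actual system: the genome is just a list of stand-in "network weights", and the fitness function is a hypothetical placeholder rewarding weights near an arbitrary target, where the real bots would be scored on in-game behaviour.

```python
import random

random.seed(0)

POP_SIZE = 20
GENOME_LEN = 8    # hypothetical number of neural-network weights
GENERATIONS = 30

def fitness(genome):
    # Placeholder objective: reward genomes whose weights approach an
    # arbitrary "desirable" profile (all weights near 0.5). The real
    # system would score human-like in-game behaviour instead.
    return -sum((w - 0.5) ** 2 for w in genome)

def mutate(genome, rate=0.1):
    # Randomly perturb each weight - the "random mutations" step.
    return [w + random.gauss(0, rate) for w in genome]

def evolve():
    population = [[random.random() for _ in range(GENOME_LEN)]
                  for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        population.sort(key=fitness, reverse=True)
        survivors = population[:POP_SIZE // 2]   # keep the fittest half
        # Replace the rest with mutated copies ("offspring") of survivors.
        offspring = [mutate(random.choice(survivors)) for _ in survivors]
        population = survivors + offspring
    return max(population, key=fitness)

best = evolve()
print(round(-fitness(best), 4))  # total squared error shrinks over time
```

Because the fittest half is carried over unchanged each generation, the best score never gets worse - mutation only has to stumble onto improvements.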
"A great deal of the challenge is in defining what 'human-like' is, and then setting constraints upon the neural networks so that they evolve toward that behaviour," explained Schrum.
"If we just set the goal as eliminating one's enemies, a bot will evolve toward having perfect aim, which is not very human-like. So we impose constraints on the bot's aim, such that rapid movements and long distances decrease accuracy. By evolving for good performance under such behavioural constraints, the bot's skill is optimised within human limitations, resulting in behaviour that is good but still human-like."
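Schrum's accuracy constraint is easy to picture as a simple formula. The sketch below is a hypothetical model of the idea, with made-up penalty coefficients - the actual constraints used in UT^2 aren't published in this article - but it captures the shape: hit probability starts from a base accuracy and decays with target speed and distance.

```python
def hit_probability(base_accuracy, target_speed, distance,
                    speed_penalty=0.05, distance_penalty=0.01):
    """Hypothetical accuracy model: the bot's chance to hit falls as
    the target moves faster and stands farther away, keeping its aim
    within human-like limits. All coefficients are illustrative."""
    p = base_accuracy - speed_penalty * target_speed - distance_penalty * distance
    return max(0.0, min(1.0, p))  # clamp to a valid probability

# A stationary, nearby target is easy to hit...
print(round(hit_probability(0.9, target_speed=0, distance=5), 2))   # 0.85
# ...while a fast, distant one is mostly a miss, just as for a human.
print(round(hit_probability(0.9, target_speed=8, distance=40), 2))  # 0.1
```

Evolving the bot's aim against a penalty like this pushes it toward shots that are good, but never superhumanly perfect.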
Okay, so this is pretty freaky stuff. But I say we set our prejudices aside and enjoy a nice, friendly game of Halo with our new Cylon pals, eh?