The League of Legends community has a bad reputation. It's well documented. But developer Riot says it's finally making a difference with the help of a machine-learning system.
The free-to-play MOBA (multiplayer online battle arena), which pits two teams of five players against each other in competitive matches, is often criticised for the behaviour of its gargantuan, 67m-strong player base, despite Riot's best efforts to improve the situation.
But in an article posted on Recode, lead game designer Jeffrey Lin (who was an experimental psychologist at Valve before joining Riot) said that as a direct result of changes made to its "governance systems", incidents of homophobia, sexism and racism in League of Legends have fallen to a combined two per cent of all games.
Verbal abuse, meanwhile, has dropped by more than 40 per cent, Lin said.
And 91.6 per cent of negative players "change their act" and never commit another offence after just one reported penalty.
Lin said it's taken quite a bit of research to work out how best to tackle bad player behaviour. For the past three years, Riot has studied its players and found 87 per cent of "online toxicity" came from "neutral and positive citizens just having a bad day here or there" - and did not originate from "persistently negative online citizens".
Riot also found forcing negative players to play each other only created a "downward spiral of escalated negative behaviours". So, it took a different approach.
The Tribunal automatically creates "case files" of behaviours players have reported as unacceptable, and later in 2015 it will also begin creating positive case files. These cases are public, and Riot found most players were against hate speech of all kinds.
"It turns out that people just need a voice, a way to enact change," Lin said.
Riot developed a system that classifies words and phrases from negative to positive (we reported on this in May), and it called in scientists from across disciplines to study online behaviour in League.
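To illustrate the idea, here is a minimal sketch of how scoring words and phrases along a negative-to-positive scale can work. The word lists, weights and function names here are purely hypothetical; Riot's actual classifier is far more sophisticated and trained on real player data.

```python
# Hypothetical lexicon: each word carries a score from negative to positive.
# These entries are illustrative only, not Riot's real data.
WORD_SCORES = {
    "uninstall": -2.0,
    "noob": -1.0,
    "gg": 1.0,
    "wp": 1.0,
    "nice": 0.5,
}

def score_message(message: str) -> float:
    """Sum per-word scores; a negative total flags a likely toxic message."""
    return sum(WORD_SCORES.get(word, 0.0) for word in message.lower().split())

print(score_message("gg wp nice game"))  # → 2.5 (positive overall)
print(score_message("uninstall noob"))   # → -3.0 (negative overall)
```

A real system would also handle phrases, misspellings and context, but the core idea of mapping language onto a negative-to-positive scale is the same.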
Riot now delivers feedback to players in near real-time: every time a player reports another for a negative act in-game, the report feeds a "machine-learning system". The same happens when a player honours another for a positive act.
"As soon as we detect these behaviours in-game, we can deliver the appropriate consequence, whether it is a customised penalty or an incentive," Lin explained.
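The loop Lin describes, where reports and honours feed a system that hands out penalties or incentives, can be sketched roughly as follows. The class name, score thresholds and consequence labels are all illustrative assumptions, not details of Riot's implementation.

```python
from collections import defaultdict

class FeedbackSystem:
    """Hypothetical sketch: reports and honours adjust a per-player score,
    which maps to a consequence (thresholds are illustrative only)."""

    def __init__(self):
        self.scores = defaultdict(int)

    def report(self, player: str) -> str:
        self.scores[player] -= 1       # a report counts against the player
        return self._consequence(player)

    def honour(self, player: str) -> str:
        self.scores[player] += 1       # an honour counts in their favour
        return self._consequence(player)

    def _consequence(self, player: str) -> str:
        score = self.scores[player]
        if score <= -3:
            return "chat restriction"  # a customised penalty
        if score >= 3:
            return "honour reward"     # an incentive
        return "no action"

fb = FeedbackSystem()
for _ in range(3):
    outcome = fb.report("player42")
print(outcome)  # → chat restriction
```

In practice the "consequence" step would draw on the behaviour classifier rather than a raw counter, but the shape of the loop, player signals in, tailored feedback out, matches what Lin describes.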
"Critically, players in the society are driving the decisions behind the machine-learning feedback system - their votes determine what is considered acceptable behaviour in this online society."