Riot Games has shared an update on its ongoing efforts to combat voice and chat toxicity in its free-to-play shooter Valorant, pledging that harsher, more immediate punishments for abusers - and its previously announced voice recording moderation system - are on the way.
Riot initially outlined the areas it would be focusing on to combat unwanted player behaviour in Valorant - which included repeated AFK offences as well as those related to toxic comms use - in a blog post last year. The developer has now offered an update on its progress, highlighting some of the new steps it will be implementing in a bid to create a more pleasant player experience in the near future.
Presently, Riot relies on a combination of player reports and automatic text detection to curb unwanted player behaviour, which it defines as insults, threats, harassment, or offensive language. These moderation methods are said to have resulted in 400,000 voice and text chat mutes, plus 40,000 game bans (implemented for "numerous, repeated instances of toxic communication" and ranging from a few days to permanent) this January alone.
Despite these efforts, Riot admits "the frequency with which players encounter harassment in our game hasn't meaningfully gone down". As such, it calls the work it's done so far "at best, foundational" and accepts "there's a ton more to build on top of it in 2022 and beyond."
To that end, the developer is pledging to make a number of changes to its existing moderation methods. For starters, it's exploring - as part of a Regional Test Pilot Program limited to Turkey at present - the creation of Player Support agents who'll oversee incoming reports strictly dedicated to player behaviour and take action based on established guidelines. If the test shows enough promise, Riot will consider rolling it out across other regions.
As for the measures it'll be implementing in the shorter term, Riot says that now it's "more confident" its automatic detection systems are functioning correctly, it will gradually begin increasing the severity and escalation of its penalties, which should result in "quicker treatment of bad actors." Additionally, it's looking to make changes to its real-time text moderation system so players using "zero tolerance" words in chat will be punished immediately - rather than other players having to endure their toxicity until after a game, as is currently the case.
As for voice chat abuse, which Riot notes is significantly harder to detect than text, the developer will be making improvements to its existing moderation tools, as well as rolling out the voice evaluation programme it announced last year. At the time, it said it was updating its privacy notice to allow it to record and evaluate voice comms when a report for disruptive behaviour is submitted - and this system will finally be introduced in "North America/English-only" later this year, before being implemented globally once the tech "is in a good place".
"Deterring and punishing toxic behavior in voice is a combined effort that includes Riot as a whole," it says, "and we are very much invested on making this a more enjoyable experience for everyone... Please continue to report toxic behaviour in the game; please utilise the Muted Words List if you encounter things you don't want to see; and please continue to leave us feedback about your experiences in-game and what you'd like to see. By doing that, you're helping us make Valorant a safer place to play, and for that, we're grateful."