FACEIT, a third-party server provider for several popular multiplayer games, is no stranger to toxicity.
Online gamers are notorious for their poor behavior, and third-party clients aren’t exempt from the toxicity, either. Companies have spent years attempting to fix this very issue, with solutions ranging from in-game reporting to requiring players to verify a phone number in order to play. While these stopgaps often slow toxic behavior, they always fail to eliminate it completely.
FACEIT has decided that enough is enough. The company has officially announced the formation of “Minerva,” an artificial intelligence program designed from the ground up to combat toxic player behavior on the FACEIT platform.
This isn’t just an imagined problem either, as toxic chat is one of the most rampant forms of harassment found online. According to a study by the Anti-Defamation League (ADL), over 74% of people have experienced some form of harassment online, so it only makes sense that FACEIT would want to be on the cutting edge of curbing this type of behavior. In fact, the ADL identified Counter-Strike: Global Offensive, PlayerUnknown’s Battlegrounds and Dota 2 as frequent offenders. Unfortunately for FACEIT, the company provides third-party services for all three games.
Like most online companies, FACEIT has struggled to control toxic behavior on its platform for some time. According to FACEIT, “toxicity expresses itself in many ways,” making it a difficult problem to nail down and solve with a single fix. Since a human employee can’t police every match, FACEIT engineers turned to machine learning to make headway against the rampant toxicity found in the games the company hosts.
According to the release, the team wanted to “identify these behaviors accurately and quickly enough to take precise and immediate action on them.” FACEIT then turned to Google Cloud and Jigsaw, creators of an anti-harassment AI, for help. Shortly afterwards, Minerva was born.
While Minerva is still in its initial testing stages, FACEIT says it has already had a positive impact on the company’s platform. At this stage of development, the AI looks only for verbal harassment that happens inside the in-game chat: after a match is over, Minerva reviews the logs for anything that looks like toxicity.
Interestingly, FACEIT also says that Minerva can distinguish between banter and actual toxicity, meaning the AI can read the context of a conversation. If and when Minerva detects what it believes to be abuse, it issues a notification to the offending player after the game.
FACEIT also says that repeat offenders will receive harsher punishments each time Minerva flags them, in order to better deter the behavior.
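FACEIT hasn’t published Minerva’s internals, but the workflow described above — scan a finished match’s chat log, flag abusive players, and escalate punishments for repeat offenders — can be sketched in a few lines of Python. Everything here is invented for illustration: the keyword check is a trivial stand-in for the real model (which would weigh context to separate banter from abuse), and the function names, thresholds, and penalty ladder are hypothetical.

```python
from collections import defaultdict

# Hypothetical escalating penalty ladder: first flags draw a warning,
# repeat offenses draw progressively longer bans.
PENALTY_LADDER = ["warning", "24h_ban", "7d_ban", "permanent_ban"]

# Crude stand-in for a trained toxicity model. A real system would
# score context rather than match bare keywords.
ABUSIVE_TERMS = {"trash", "uninstall", "worthless"}

def score_message(message: str) -> float:
    """Return a rough toxicity score in [0, 1] for one chat message."""
    words = message.lower().split()
    if not words:
        return 0.0
    hits = sum(1 for word in words if word in ABUSIVE_TERMS)
    return hits / len(words)

def review_match(chat_log, offense_counts, threshold=0.3):
    """Post-match review pass: scan the chat log and issue penalties.

    chat_log: list of (player, message) tuples from a finished match.
    offense_counts: mutable dict tracking prior flags per player, so
        punishments escalate across matches.
    Returns a dict mapping each flagged player to the penalty issued.
    """
    actions = {}
    flagged = {p for p, msg in chat_log if score_message(msg) >= threshold}
    for player in flagged:
        step = min(offense_counts[player], len(PENALTY_LADDER) - 1)
        actions[player] = PENALTY_LADDER[step]
        offense_counts[player] += 1
    return actions

# Example: a first offense draws a warning; a second match with
# flagged chat escalates the same player to a temporary ban.
history = defaultdict(int)
first = review_match([("alice", "nice shot"),
                      ("bob", "uninstall you are trash")], history)
second = review_match([("bob", "trash team")], history)
```

The key design point mirrored here is that the review runs after the match ends, exactly as the article describes Minerva working today; moving the same scoring call into a live message handler is what a real-time version would require.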
According to FACEIT, Minerva has issued 90,000 notifications and 20,000 bans based on players’ behavior in FACEIT server chat logs. As if those numbers weren’t staggering enough, the server provider also reported a 20.13% overall decrease in toxic messages since Minerva was introduced.
As of right now, Minerva only alerts an offending user after a match has ended. FACEIT is currently working towards advancing the AI to actively monitor players during games and alert a user immediately when it detects toxic behavior.
Although Minerva is currently only watching chat logs, FACEIT says that it won’t stop there. While the team wasn’t specific regarding what else they were teaching Minerva to do, they promised to “announce new systems” in the coming weeks.