The World's Most Popular Video Game Fights Racist Harassment With Artificial Intelligence

Decades ago, in the early days of gaming, the audience was primarily male. Today, that's changed: Adult women are now gaming's biggest demographic, and more than ever, the community is talking about diversifying gaming culture and games themselves. But like a bad hangover, sexism, bigotry and harassment still cling to the industry, and it can be a toxic environment for women, LGBT people and people of color.

League of Legends is the world's most played game, with 67 million monthly players. For some gamers, it's actually a professional sport. So the team behind it felt obligated to purge harassment from its system.

"As we spend more and more of our time online, we need to acknowledge that online harassment and toxicity is not an impossible problem, and that it is a problem worth spending time on," Riot Games' lead designer Jeffrey Lin writes about his work on League of Legends in an op-ed for Re/code.

One solution the team rejected was eliminating anonymity. Yes, you can incentivize people to behave by tying their actions to their names and faces. But there are also perfectly valid reasons to preserve anonymity — just look at the backlash when Facebook's real-names policy forced drag performers and LGBT communities to out themselves or risk having their accounts deleted.

Instead, League of Legends turned to artificial intelligence. Or, more accurately, machine learning, a branch of AI in which systems learn patterns from data and use them to make predictions.

First, the team built a system called the Tribunal, a public log of cases where players could review reported instances of racism, sexism and homophobia, then vote on whether or not they warranted action. After 100 million votes were cast, the team had a usable database of what its community considers abusive behavior. They then fed those labeled cases to their machine-learning algorithm and set it to work dealing with instances of abuse.
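Riot hasn't published the details of its model, but the general recipe Lin describes — crowd-voted chat logs used as labeled training data for a text classifier — can be sketched in a few lines. The example chat lines, vote labels and scikit-learn pipeline below are illustrative assumptions, not Riot's actual system:

```python
# Illustrative sketch only: Riot has not released its model or data.
# This shows the general pattern of turning crowd votes into a text
# classifier, using scikit-learn and made-up Tribunal-style cases.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical cases: a chat excerpt plus the community's majority
# vote (1 = warrants action, 0 = no action).
cases = [
    ("gg wp everyone, close game", 0),
    ("report mid, worst player ever, uninstall", 1),
    ("nice gank! let's push top", 0),
    ("you people are subhuman trash", 1),
]
texts, votes = zip(*cases)

# Bag-of-words features feeding a linear classifier: a common baseline
# for text moderation, not necessarily what Riot used.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, votes)

# Score a new chat line; a real system would act only above a threshold.
print(model.predict_proba(["report this idiot team"])[0][1])
```

The key design choice is that the labels come from the players themselves, so the classifier encodes the community's standards rather than a moderator's.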

As Lin puts it:

In League of Legends, we're now able to deliver feedback to players in near real-time. Every single time a player "reports" another player in the game for a negative act, it informs the machine-learning system. Every time a player "honors" another player in the game for a positive act, it also trains the machine-learning system. As soon as we detect these behaviors in-game, we can deliver the appropriate consequence, whether it is a customized penalty or an incentive. Critically, players in the society are driving the decisions behind the machine-learning feedback system — their votes determine what is considered acceptable behavior in this online society.
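In engineering terms, Lin is describing an online-learning loop: every report or honor becomes a fresh labeled example, and the model's score drives an automated consequence. Here is a minimal sketch of that loop, again assuming a scikit-learn-style setup; the thresholds, function names and consequence strings are invented for illustration:

```python
# A minimal sketch of the report/honor feedback loop Lin describes.
# The vectorizer, thresholds and consequences are illustrative, not Riot's.
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier

vectorizer = HashingVectorizer(n_features=2**18)  # stateless: safe for streaming
clf = SGDClassifier(loss="log_loss")              # supports incremental updates

def on_player_signal(chat_line: str, reported: bool) -> None:
    """Each report (label 1) or honor (label 0) trains the model further."""
    X = vectorizer.transform([chat_line])
    clf.partial_fit(X, [1 if reported else 0], classes=[0, 1])

def moderate(chat_line: str) -> str:
    """Deliver a consequence in near real-time based on the learned model."""
    X = vectorizer.transform([chat_line])
    p_abusive = clf.predict_proba(X)[0][1]
    if p_abusive > 0.9:
        return "penalty: chat restriction"   # customized penalty
    if p_abusive < 0.1:
        return "incentive: honor progress"   # positive reinforcement
    return "no action"

# Seed the model with a couple of community signals, then score a line.
on_player_signal("you are garbage, uninstall", reported=True)
on_player_signal("well played, nice ult!", reported=False)
print(moderate("uninstall the game, garbage"))
```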

The results were dramatic: Verbal abuse has fallen 40% since the program began, and only 2% of all League of Legends games contain even a single reported instance of flagged language — a startling transformation for a community that notoriously used homophobic slurs like "fag" as default insults to verbally bludgeon competitors.


And in building their database, they learned a boatload about how harassment works in gaming. They found that 87% of online abusers are players with otherwise positive ratings in the community who are "just having a bad day." After being reported just once, 91.6% of those players never repeat the behavior.

Lin hopes these techniques and systems can be exported to other corners of the web that desperately need help managing trolls and abusers.

"As we collaborate with those outside of games, we are realizing that the concepts we're using in games can apply in any online context," Lin writes. "We are at a pivotal point in the timeline of online platforms and societies, and it is time to make a difference."