Facebook Says It's Finally Doing Something About Hate Speech on Its Platform


Facebook is putting its money where its mouth is.

The social media company and several other organizations devoted to quelling violent speech on the web are funneling more than 1 million euros into a new effort called the Online Civil Courage Initiative, which was announced in Berlin. The announcement comes as hate speech targeting refugees arriving in Europe proliferates.

"Hate speech has no place in our society — not even on the internet. Facebook is not a place for the dissemination of hate speech or incitement to violence," said Facebook COO Sheryl Sandberg in a statement on Monday. 

The new program will help several European nonprofits fund anti-hate speech marketing campaigns. The money will also fuel the development of best practices for preventing and removing prejudiced or violent posts on social media sites. It's not exactly clear how the development of standards will strengthen Facebook's overall anti-hate strategy, which for years has come across as lackluster.


Facebook, Twitter, Reddit and others have long served as platforms where hate speech and bullying thrive. Beyond bullying, these networks have also been accused of giving extremists a place to proselytize. Twitter is currently being sued for allegedly enabling terrorist attacks, because the Islamic State group, or ISIS, and other extremist organizations use the platform to solicit donations and recruit. By publicly committing to a solution for these very problems, Facebook may hope to avoid similar suits.

Historically, however, the company hasn't always been proactive about removing violent or offensive content.

Despite the company's terms of service, which state that users cannot "post content that: is hate speech, threatening or pornographic; incites violence; or contains nudity or graphic or gratuitous violence," Facebook hasn't always been willing to remove such content. In 2013, after lifting a temporary ban on violent content, the company refused to take down a video of a man beheading a woman, despite requests to do so. Two years later, after many complaints about its policy toward violence, the social network began adding warning labels to videos that may be considered upsetting, according to the BBC. Still, some argued the measure didn't go far enough to protect users.

Video is at the very heart of what makes tackling prejudice and violence on social media so difficult. A number of social media platforms, including Facebook and Twitter, are ramping up their video tools to make it even easier to create and post videos. For evidence, look to Twitter's investment in the livestreaming app Periscope, which lets users post live video directly to their feeds, or Facebook's livestreaming capability for status updates, which the company unveiled at the end of 2015. The idea behind these new tools is to make video creation and live video streaming accessible to all users.


While quick and easy video creation may help Facebook grow content on its platform, it could also inspire more violent or hateful video. Take, for instance, the on-air killing of two news reporters in Virginia in 2015. That incident was filmed and uploaded to both Facebook and Twitter and quickly spread across feeds. Though the video was live on Twitter for only eight minutes, according to the Wall Street Journal, it was still shared hundreds of times. As video gets easier to upload and more accessible, attention-hungry criminals may be increasingly drawn to these networks to post their killing sprees live.

To make social media feeds less prone to violence and hate speech, Facebook may have to take a more active role in limiting offensive content.