Facebook vows to fight Islamic terror — while ignoring the threat of white nationalism


In a policy blog post on Thursday, Facebook announced that it is using artificial intelligence and other methods to curb Islamic terror on the social network.

That distinction — Islamic extremism — is key. As Facebook noted, "We are currently focusing our most cutting edge techniques to combat terrorist content about ISIS, Al Qaeda and their affiliates." That leaves out white supremacist and other non-Islamic terror groups — although "we expect to expand to other terrorist organizations in due course," the post says.

Facebook restated its stance against ISIS and Al Qaeda by offering transparency on how it handles content that may support terrorism, attempt to recruit from the platform or spread terrorist propaganda.

"We remove terrorists and posts that support terrorism whenever we become aware of them," Monika Bickert, Facebook’s director of global policy management, and Brian Fishman, Facebook’s counterterrorism policy manager, wrote in the joint blog post. "When we receive reports of potential terrorism posts, we review those reports urgently and with scrutiny. And in the rare cases when we uncover evidence of imminent harm, we promptly inform authorities."

The post comes after United Kingdom Prime Minister Theresa May announced her goal of challenging internet and tech companies to play a more active role in counterterrorism. "We cannot allow this ideology the safe space it needs to breed," May said after the Manchester, England, concert bombing that claimed 22 lives. "Yet that is precisely what the internet — and the big companies that provide internet-based services — provide."

How Facebook uses AI to crack down on Islamic terror

According to Facebook, the company finds most of the terrorist content it removes on its own. But it says it will also use artificial intelligence to stop such content from spreading on the platform.

One technique is image matching: if someone uploads a photo or video that matches known terrorist imagery previously removed from the site, it can be blocked from being posted again. The company is also experimenting with language understanding, which uses AI to analyze text that may promote terrorism. It's also identifying clusters of accounts and content that support terrorism, and has improved its detection of fake accounts.
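Facebook's post doesn't spell out its matching algorithm, but the kind of hash-based image matching it describes can be sketched in a few lines of Python. The sketch below is purely illustrative, not Facebook's implementation: it computes a simple perceptual "average hash" and checks uploads against a hypothetical set of hashes of previously removed imagery (average_hash, KNOWN_BAD_HASHES and the distance threshold are all assumptions made for this example).

```python
from PIL import Image  # pip install Pillow

def average_hash(path: str, size: int = 8) -> int:
    """Perceptual 'average hash': downscale to a size x size grayscale
    thumbnail, then set one bit per pixel brighter than the mean."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for i, p in enumerate(pixels):
        if p > mean:
            bits |= 1 << i
    return bits

def hamming_distance(a: int, b: int) -> int:
    """Number of bits that differ between two hashes."""
    return bin(a ^ b).count("1")

# Hypothetical database of hashes of previously removed images.
KNOWN_BAD_HASHES: set[int] = set()

def matches_known_content(path: str, threshold: int = 5) -> bool:
    """True if an upload is within `threshold` bits of any known hash,
    so near-duplicates (recompressed, resized copies) still match."""
    h = average_hash(path)
    return any(hamming_distance(h, bad) <= threshold
               for bad in KNOWN_BAD_HASHES)
```

The point of comparing by Hamming distance rather than exact equality is that a re-uploaded copy is rarely byte-identical; a perceptual hash lets slightly altered versions of the same image still be caught.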

Facebook also explained how human moderators play a part. The company said it would grow its community operations team by 3,000 people over the next year to review flagged content, and that it has hired more than 150 counterterrorism experts who collectively speak nearly 30 languages. The company is also working with other tech companies, governments, NGOs and community groups.

"Our technology is going to continue to evolve just as we see the terror threat continue to evolve online," Bickert told the BBC. "Our solutions have to be very dynamic."

Will Facebook recognize other forms of extremism as well?

It'll be interesting to see how Facebook responds and adapts to other, more domestic forms of terrorism — considering the seemingly endless spread of Nazi and white supremacist speech on Twitter, Facebook and around the web.

As Zak Cheney Rice noted in a March Mic story, "President Donald Trump and his supporters have made an aggressive display of painting Muslims — especially immigrants and refugees — as the greatest national security threat facing Americans today." But a "much older and more costly threat" is lurking closer to home.