Inside Facebook's abuse department, where humans and machines team up to curb harassment


Shahana Hanif, 25, has built her career on Facebook. As an activist and community organizer, she sees the social network as an essential tool for shifting oppressive narratives and organizing communities. 

Facebook is also the place where Hanif has been harassed, threatened and stalked. 

Years ago, Hanif began to notice Facebook pages impersonating her. The fake profiles looked convincing — they used photos of Hanif and her 19-year-old sister that were ripped from their public profiles, some using parts of their names — but were captioned with "violent, disgustingly sexual" comments about Hanif. Photos of erect penises filled the comments.

Around May or June, she began the arduous process of repeatedly reporting an impersonating profile; Facebook forbids impostor accounts. But it wasn't until she wrote a fifth or sixth public Facebook status in August urging her friends to report the page that it was taken down. 

Days later, Hanif discovered the strategy had backfired. Her own page was removed — someone had reported it for impersonation. Meanwhile, another obscene page had popped up.

Screenshots provided by Shahana Hanif

At Facebook, humans work with an algorithm to curb abuse. 

Hanif isn't alone in her war against trolls. Facebook receives so many abuse reports that its human employees can't keep up. That's why Facebook relies on an algorithm to assist the community operations team.

If you want to report abuse or anything else that violates Facebook's community standards, your options are limited to Facebook's online report link. Facebook has teams that review these reports 24 hours a day, seven days a week, reviewing a "mass majority" of reports within 24 hours, according to a Facebook representative.

Facebook's abuse algorithm assists the humans by recognizing and filtering duplicate reports in order to more efficiently flag violations such as nudity and pornography, the representative explained. If Facebook receives 1,000 reports on the same thing, the algorithm will streamline the process so humans aren't taking action on all 1,000.
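Facebook hasn't described the mechanics of that deduplication publicly. As a rough, hypothetical sketch (the function and field names below are invented for illustration), one way such a filter could work is to group incoming reports by the ID of the content they target, so reviewers see a single queue item per reported page or photo rather than a thousand duplicates:

```python
from collections import defaultdict

# Hypothetical sketch only; Facebook has not published its actual system.
# The idea: collapse many reports about the same content into one review
# item so a human reviewer acts once, not 1,000 times.

def deduplicate_reports(reports):
    """Group raw abuse reports by the content they target.

    Each report is a dict like {"content_id": ..., "reason": ..., "reporter": ...}.
    Returns one review item per reported piece of content, carrying the
    number of reports and the reasons given, so the most-reported items
    can be surfaced first.
    """
    grouped = defaultdict(lambda: {"report_count": 0, "reasons": set()})
    for report in reports:
        item = grouped[report["content_id"]]
        item["report_count"] += 1
        item["reasons"].add(report["reason"])

    # Sort the queue so heavily reported content reaches reviewers first.
    return sorted(
        ({"content_id": cid, **data} for cid, data in grouped.items()),
        key=lambda item: item["report_count"],
        reverse=True,
    )


if __name__ == "__main__":
    sample = [
        {"content_id": "page_123", "reason": "impersonation", "reporter": "u1"},
        {"content_id": "page_123", "reason": "impersonation", "reporter": "u2"},
        {"content_id": "photo_77", "reason": "nudity", "reporter": "u3"},
    ]
    for item in deduplicate_reports(sample):
        print(item)
```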

With 2 billion users, some tens of thousands of employees and what one can imagine is a hell of a lot of reported harassment, it makes sense that Facebook would lean on automation when necessary.

What the humans do is "contextual," the representative said. "Determining whether a particular comment is hateful or bullying. That's why we have real people looking at these things." The representative emphasized that real people are looking at these reports and responding. If you get a form response, it's not computer-generated: A human reviewed it and decided to send it along.

Hanif has read these generic messages "over and over." She wonders why there isn't a more human way to reach out to Facebook in the event of continuous harassment. "In a serious case I should be able to talk to someone on the phone or have a contact personnel, something tangible," she said. ("We make reporting really easy," Facebook told Mic.)

Hanif believes Facebook refused to take action on her reports because the impostor page looked so convincing. "If a page has over 4,000 friends and a bulk of pictures, it's hard to take that account down," Hanif said by phone. "It seems like the person is real."


"They just determined because of their poor algorithm that I was pretending to be someone else — even though I've had an account [for] eight years, my real work is there, I'm the real Shahana Hanif," Hanif said.

Computers may be great at processing loads of information quickly, but they have trouble detecting authenticity and executing actions with empathy. A former Facebook employee, who spoke with Mic on the condition of anonymity, admitted the algorithms sometimes make mistakes. "Based on Shahana's case, I think that's what happened here."

Without a personal connection, abuse victims are often helpless.

Hanif, like thousands of other users in similar situations, had to keep filling out reports in hopes that one would stick. None worked. Luckily, Hanif had contacts at Facebook, and she reached out personally to explain her situation. Without this help, she doesn't believe her page would have been reactivated. 

"I knew when my account was shut down, if I didn't have the connection to someone working at Facebook or a Facebook security team or people to reach famous people, I wouldn't have gotten my account restored," she said.

The former Facebook employee told Mic they were often asked for help dealing with abuse when the algorithm failed: "I was routinely contacted by friends and friends of friends with various issues they were dealing with."

Once Hanif verified her identity by sending Facebook her passport information, the company unlocked her account and sent her a message apologizing "for the inconvenience."

Hanif said there was "no recognition" of the fact that she and her sister were being harassed. "Where is that accountability? The inconvenience was that it traumatized me. I couldn't go to work for a day." 


Developing a better anti-hate machine

"There is clearly room for improvement," the former Facebook employee said. "I think situations like Shahana's show us that regardless of how smart our society's algorithms are, regardless of how many people enjoy Facebook and see it as a valuable service, for some people it's just not a pleasant experience, and how we as a society can make that better should be a constant question for Facebook and journalists and activists."

Impostor accounts on Facebook are a "unique" problem, the company representative said: Unlike on Twitter and Instagram, you can't hide behind a handle or username; you're expected to use your real identity. Because Facebook expects users to go by their real names, "the expectation is that you know who you are communicating with. To be candid with you, it's something that's controversial."

Facebook is under no illusion that its system is free of holes. A spokesperson said Facebook recently completed a series of roundtables with women's safety organizations in four locations around the world to share information about the tools in place. The company aimed to "have a candid conversation about where we are falling short and what we could be doing better."

"We are trying to use automation to enforce our standards," the Facebook spokesperson said. "We'll continue to explore additional ways automation can report or remove content that clearly goes against our policies, but we are still using those systems to assist our review teams."

Is there a future where no humans are involved and handling abusive content online is completely run by algorithms? "There's not ... a way that that would be a possibility," the representative said.

Figuring out better methods and tools to deal with online harassment is certainly not a challenge unique to Facebook. Medium CEO and Twitter co-founder Ev Williams told BloombergTV's "Bloomberg West" that online harassment "has gotten a lot worse in terms of the vitriol and the level of the severity of the attacks that some people suffer." Twitter has also been accused of doing too little to curb harassment on its platform. A recent study warned that online harassment of women risks becoming "an established norm in our digital society."

Can we tame the trolls?

Hanif still doesn't know why she became a target. "I think men online are dangerous and looking for power," she said. "Impersonation allows them to feel in power, especially after rejection or the silent treatment from women. It's a form of manipulation. This is the scariest form of abuse, not knowing the face or body composition or location of the abuser."

From her personal experience, Hanif has a few thoughts about where Facebook could step up. She wished the company had the authority to track down user IP addresses and bring in law enforcement or another safety resource to assist. She also recommended Facebook establish a team dedicated solely to women who are harassed online, as well as a team devoted to impostor accounts.


Hanif also encouraged Facebook to make it easier for users to apply for verification, as Twitter has done, and she recommended a security strategy that would prevent reported users from creating so many accounts. "Countless" friends of hers have been stalked online or had their pictures stolen to create impersonating profiles, she said, but there was nothing they could do other than report it.

When it comes to online harassment, flawed systems can profoundly hurt users. The bottom line is that user engagement declines along with user safety — something Facebook freely acknowledges. "People will not share on Facebook if they don't feel safe, and that is something we are constantly aware of," the company spokesperson said.