Facebook's revenge porn policy doesn't technically cover victims of the Marines United scandal


In March, it was discovered that thousands of current and former male Marines were sharing nude photos of women in private Facebook groups without their consent. This nonconsensual distribution of explicit imagery is called revenge porn. But according to Facebook's internal rulebook for moderators, leaked by the Guardian, the Marines United scandal was not in violation of the social network's revenge porn policy.

Here's why: Facebook's slide on its revenge porn policy defines the offense as the dissemination of nonconsensual nude or near-nude photos meant to "shame or embarrass" the person depicted. 

But the surreptitious nature of the Marines United Facebook group contradicts this definition, according to Mary Anne Franks, a professor at the University of Miami School of Law and the legislative and tech policy director of the Cyber Civil Rights Initiative. 

"The fact that membership was restricted to male Marines and that the group’s members went to great lengths to try to keep their activity secret demonstrates that they did not want their victims to learn about the conduct, which means in turn that they were not intending to 'shame or embarrass' the women exposed," Franks explained in an email.


Franks also pointed out that some of the photos shared in the private Marines Facebook group were secretly obtained — for example, an "upskirt" photo — and weren't necessarily captured in a private setting. According to Facebook's internal guide, an image has to be "produced in a private setting" in order to be considered abusive content.

"So by Facebook’s own definition, it would seem that the Marines United group didn't violate the 'revenge porn' policy," Franks said. "While Facebook did shut the group down after its activities were exposed in the media, its policy indicates that nonconsensual pornography is permissible if the reason for sharing the images is entertainment, profit or an effort to raise one's social status."

"Facebook is in a difficult spot," explained Camille Stewart, an attorney who deals with cybersecurity and online privacy issues. The company is "balancing free speech with privacy rights. To do that in a way that's perceived as fair, they have to outline criteria for analysts and reviewers — who likely aren't legally trained — to make a determination on things that are fact-specific and often a legal determination. The policies have to be specific enough but leave some measure of discretion for the unanticipated scenario. If Facebook doesn't strike the right balance, they run the risk of being sued."

Women are fighting to protect their privacy

An advocacy group for military women sent Facebook a letter after the Marines United group was revealed. The letter asks COO Sheryl Sandberg to work toward preventing revenge porn on the platform, particularly in open and private Facebook groups.

"Facebook has been negligent in removing pages, groups and users that actively promote non-consensual intimate photo sharing and incite sexual violence and harassment," Not in My Marine Corps co-founder Erin Kirk-Cuomo wrote to Sandberg. "For Facebook leadership to publicize their value to the military family, then ignore its complicity in the misconduct perpetrated by its users, is, at best, naive. At its worst, this failure directly contributes to the inescapable sexism that is part of the military culture."


At Facebook, the battle against revenge porn rages on

Facebook has made recent strides in its attempts to keep nonconsensual private photos off the platform. The company announced in April that it would use artificial intelligence and photo-matching technology to help remove such images. It also announced that it hired 3,000 more employees for its community operations team, bringing the total to 7,500 human moderators. When Mic asked a Facebook spokesperson whether these were full-time employees or contractors, the spokesperson declined to say.
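Facebook has not published how its photo-matching works, and its actual system is proprietary. But the general technique it describes — fingerprinting a known abusive image so that re-uploads, even slightly altered ones, can be flagged automatically — is commonly done with perceptual hashing. The sketch below is an illustrative "average hash" over a small grayscale grid, not Facebook's algorithm; the 8x8 grid size and 5-bit match threshold are assumptions chosen for clarity.

```python
# Illustrative sketch of hash-based photo matching. NOT Facebook's actual
# (proprietary) system — a minimal "average hash" shown for explanation only.

def average_hash(pixels):
    """Compute a 64-bit perceptual hash from an 8x8 grayscale grid.

    `pixels` is an 8x8 list of lists of brightness values (0-255).
    Each bit is 1 if that pixel is brighter than the grid's mean,
    so small brightness shifts barely change the hash.
    """
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for value in flat:
        bits = (bits << 1) | (1 if value > mean else 0)
    return bits

def hamming_distance(h1, h2):
    """Count the bits that differ between two hashes."""
    return bin(h1 ^ h2).count("1")

def is_match(h1, h2, threshold=5):
    """Treat two images as the same if their hashes differ by few bits."""
    return hamming_distance(h1, h2) <= threshold

# A known abusive image, already downsampled to an 8x8 grayscale grid ...
original = [[(r * 8 + c) * 4 % 256 for c in range(8)] for r in range(8)]
# ... and a slightly altered re-upload (small uniform brightness change).
reupload = [[min(255, v + 3) for v in row] for row in original]

print(is_match(average_hash(original), average_hash(reupload)))  # True
```

In a real moderation pipeline, hashes of removed images would be stored, and every new upload's hash would be compared against that set so a banned photo cannot simply be re-posted.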

"Keeping people on Facebook safe is the most important thing we do," Monika Bickert, head of global policy management at Facebook, said in an email: 

"We work hard to make Facebook as safe as possible while enabling free speech. This requires a lot of thought into detailed and often difficult questions, and getting it right is something we take very seriously." 

She continued: 

"In addition to investing in more people, we're also building better tools to keep our community safe. We’re going to make it simpler to report problems to us, faster for our reviewers to determine which posts violate our standards and easier for them to contact law enforcement if someone needs help.”

But the slide published by the Guardian detailing the conditions for an image to be considered revenge porn suggests that Facebook's policies are not written to protect all victims.

"The removal policy of any private company truly committed to the eradication of nonconsensual pornography should require only that the person depicted in a sexually explicit photo or video did not consent to the distribution of that material," Franks said.

Carrie A. Goldberg, an attorney specializing in internet abuse and sexual consent, believes the guidelines that determine what warrants removal from the social network shouldn't have been for Facebook's eyes only.

"These internal rulebooks should be public to begin with so that we can manage our expectations about what we can expect to see there and so that we can ditch the company if we don’t like what they allow — or disallow," Goldberg told Mic.

Goldberg also told Motherboard there is a need for greater transparency at Facebook, and as Motherboard noted, Facebook should have willingly shared this information "years ago."

High-tech tools could be Facebook's best bet

Another potential solution: artificial intelligence, said Stewart, the cybersecurity attorney.

"Facebook likely has a tool that pulls in content for review based on predetermined criteria. That data, in addition to reported content, are reviewed by someone who has to make a decision with little context. Their tool is bound to miss relevant/offensive content particularly as slang evolves and as they try to refine the terms and algorithms to minimize false positives.

"My hope is that Facebook has designed this to be a constantly evolving process where the policies and algorithms are refined as new information is discovered," Stewart added. "But as we know, there is a lot of data on Facebook, and they cannot review every post every user makes as it goes up."