Why is it so difficult for online platforms to tackle disinformation and hate speech?


The now-too-familiar influx of hoaxes, conspiracies and flat-out false stories that swirled on Facebook, YouTube and Twitter in the days after the Feb. 14 Parkland, Florida, mass shooting is spurring another round of hand-wringing about the proliferation of fake news, and this time the focus is on the timidity with which web platforms and publishers have approached the issue.

On Wednesday, YouTube, buffeted by weeks of criticism after promoting a conspiracy video that falsely claimed that shooting survivor and Marjory Stoneman Douglas High School student David Hogg was a paid actor, deleted a number of accounts belonging to white supremacists and neo-Nazis. YouTube said the accounts violated a number of its rules against harassment and hate speech.

At least 10 conspiracy theorists, according to the Outline, have had their accounts disabled on the platform, while several others, including the far-right conspiracy site Infowars, have been temporarily suspended or have been slapped with warnings for publishing content that violates YouTube’s rules against hate speech or harassment. The Google-owned video platform also deleted an account belonging to a neo-Nazi extremist group, saying the account was removed “due to multiple or severe violations of YouTube’s policy prohibiting hate speech.”

But in its rush to deal with offending content, YouTube apparently went too far. The company on Wednesday said its moderators had inadvertently deleted some right-wing and pro-gun content that was not in violation of the company’s terms, according to Bloomberg. YouTube’s actions, which came days after concerns were raised, highlighted a larger problem among web platforms: Most are ill-equipped to tackle the problem of disinformation and hate speech.

“The approach that’s being used by the major internet companies is best described as whack-a-mole,” said Dipayan Ghosh, a former privacy and public policy adviser for Facebook and a fellow at the Washington think tank New America. “They are able to catch some egregious content before it goes up, they are able to catch some egregious content after it goes up and before it’s widely shared. But there are some cases where content slips through the cracks or is otherwise undetected because they haven’t coded their systems in the right way.”

Platforms and publishers like Facebook, YouTube and Twitter have struggled mightily to handle the influx of disinformation — false content being spread to mislead or misinform — whether it was coming from teenagers in Macedonia looking to rake in ad revenue, accounts run by Russians in an effort to influence the 2016 presidential election or other bad actors around the world.

On Feb. 7, the online publishing platform Medium, which publishes work from amateur writers to professional journalists, rolled out a set of rules designed explicitly to tackle the scourge of disinformation appearing on its site. The new terms of service bulked up the section on prohibited hate speech and added a section prohibiting accounts from engaging in “on-platform, off-platform or cross-platform campaigns of targeting, harassment, hate speech, violence or disinformation.” Medium said it reserves the right to “consider off-platform actions” when it assesses whether a Medium account has violated the site’s rules.

In a blog post explaining the change, Medium’s trust and safety team nodded to “an increase and evolution of online hate, abuse, harassment, and disinformation” as the reason for the changes.

“To continue to be good citizens of the internet and provide our users with a trusted and safe environment to read, write and share new ideas, we have strengthened our policies around this type of behavior,” the team wrote.

The rules change, according to the Outline, is the likely reason for the subsequent suspensions of Medium accounts belonging to several far-right pundits and conspiracy theorists. In response to a question about which policies the banned accounts violated, a spokesperson for Medium said in an email that the company does not comment on individual accounts. In the wake of the change, the far-right pundit Mike Cernovich, whose account on Medium was suspended, tweeted he would sue for discrimination.

“The steps that Medium is taking to eliminate the scourge of disinformation, hate speech and like content on its platform are tremendous,” Ghosh said. “They are getting in front of an extremely thorny issue in a proactive way, which is extremely positive.”

Medium’s move to explicitly ban disinformation from the platform is a major departure from the approach of the major publishing platforms, which have hesitated to make content decisions that might be perceived as partisan. In a recent interview in response to questions about conspiracy theories about Hogg proliferating on Facebook, a company spokesperson told BuzzFeed News that Facebook doesn’t “have policies in place that require people to tell the truth,” and cautioned that “determining what’s true and false” was something the platform could not reliably do.

The major platforms do have policies in place that prohibit hate speech, harassment and exploitative content, but even with such policies, they have struggled to enforce them. YouTube, for instance, has detailed anti-hate speech and anti-harassment policies, but as the aftermath of the Parkland shooting showed, it struggled to address videos promoting conspiracy theories or containing hateful content. Facebook gives itself wide latitude to determine that such speech should be allowed in certain instances, and has faced criticism for the groups it protects under its hate speech rules and the groups it does not.

In the wake of the #MeToo movement, for instance, Facebook temporarily banned a number of women from the platform for writing that men were “ugly” or “scum.” Groups like “white men” are protected against hate speech under Facebook’s rules, while groups like “female drivers” or “black children” are not, according to a ProPublica report that examined Facebook’s hate speech guidelines. Facebook has faced scrutiny both for how it defines hate speech and for how it enforces its rules.

The size of these platforms presents an immense obstacle to monitoring content and determining whether it violates rules, Ghosh said. Facebook has a gargantuan 2 billion monthly active users, YouTube has 1.5 billion monthly active users, and Twitter has 328 million monthly active users, according to TechCrunch. Medium does not share how many monthly active users it has, but the site draws 60 million monthly unique visitors, a spokesperson said. Facebook and YouTube employ armies of human moderators to review potentially offending content and use algorithms to learn what constitutes rule-violating content. It’s these algorithms that are often the first line of defense in flagging posts for human review. Many websites, including Medium, rely on users to flag offending content themselves, which means potentially offending content must first appear in front of enough eyeballs for someone to report it for takedown.
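To make that division of labor concrete, here is a minimal, hypothetical sketch of the kind of triage pipeline described above: an automated classifier scores each post, high-confidence violations are removed outright, and borderline scores or user reports route the post to a human review queue. The thresholds, the keyword-based scorer and the Post and ModerationQueue structures are illustrative assumptions, not any platform’s actual system.

    from dataclasses import dataclass, field
    from typing import List

    # Illustrative thresholds -- real platforms do not publish these values.
    AUTO_REMOVE_THRESHOLD = 0.95
    HUMAN_REVIEW_THRESHOLD = 0.60

    # Toy stand-in for a learned model; in practice a trained classifier
    # would produce the score, not a keyword list.
    SUSPECT_PHRASES = ("crisis actor", "false flag")

    @dataclass
    class Post:
        post_id: str
        text: str
        user_reports: int = 0  # how many users have flagged this post

    @dataclass
    class ModerationQueue:
        pending_review: List[Post] = field(default_factory=list)
        removed: List[Post] = field(default_factory=list)

    def classifier_score(post: Post) -> float:
        """Return a 0.0-1.0 estimate that the post violates policy."""
        text = post.text.lower()
        return 1.0 if any(phrase in text for phrase in SUSPECT_PHRASES) else 0.1

    def triage(post: Post, queue: ModerationQueue) -> None:
        score = classifier_score(post)
        if score >= AUTO_REMOVE_THRESHOLD:
            # High-confidence violations can be pulled before they spread.
            queue.removed.append(post)
        elif score >= HUMAN_REVIEW_THRESHOLD or post.user_reports > 0:
            # Borderline posts and user-flagged posts wait for a human --
            # by which point they have usually already been widely seen.
            queue.pending_review.append(post)
        # Everything else stays up unless users report it later.

Note that in this sketch user reports only matter after distribution, which is exactly the "enough eyeballs" problem: by the time a flag arrives, the content has already been seen.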

“Unless [Facebook] were to employ millions of people around the globe working nonstop to review the billions and billions of pieces of content that are shared to their platforms every day,” it’s unlikely and unrealistic for human moderators to handle all of the rule violations, Ghosh said.

Flaws in the algorithms, however, could mean more than just missing offending content. A Wired article posted Tuesday found that users outraged over conspiracies about the Parkland shooting that appeared on Facebook and YouTube actually amplified those conspiracies on the platforms. The algorithms couldn’t tell the difference between users sharing the conspiracies in earnest and those denouncing them. And the Wall Street Journal found that YouTube’s video recommendations sometimes present conspiratorial or hate-speech-filled videos to users “even if those users haven’t shown interest in such content.”

That’s partially a failure of the algorithms, Ghosh said, but it’s also a result of the way social media platforms are designed to work. Platforms, aiming to keep people engaged, feed users content intended to be engaging and relevant. Conspiracies and emotionally charged content are certainly engaging, if nothing else.
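As a rough illustration of the dynamic Ghosh describes, consider a recommender that ranks candidate videos purely by predicted engagement. Nothing in the objective asks whether a video is accurate, so emotionally charged or conspiratorial content that holds attention floats to the top. The field names and weights below are assumptions for the sketch, not any platform’s actual ranking formula.

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Video:
        title: str
        predicted_watch_minutes: float  # how long the model expects a user to watch
        predicted_interactions: float   # expected likes, shares and comments

    def engagement_score(video: Video) -> float:
        # A purely engagement-driven objective: it rewards attention
        # and never asks whether the content is true.
        return 0.7 * video.predicted_watch_minutes + 0.3 * video.predicted_interactions

    def recommend(candidates: List[Video], k: int = 5) -> List[Video]:
        # Surface whatever the model predicts will keep people watching.
        return sorted(candidates, key=engagement_score, reverse=True)[:k]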

So the problem needs to be tackled from all sides. But the solution isn’t simple.

In the immediate future, Ghosh proposed a number of approaches, including improving the algorithms that the platforms use to identify offending content and increasing the number of moderators available to review flagged content and train those algorithms. But there are challenges to that approach, as recent slip-ups have indicated: the Parkland conspiracies showed that algorithms are not always adept at analyzing context or understanding intent, and moderators often have only a few seconds to make judgment calls about content, and can make errors.

But the platforms have to start somewhere. YouTube, for instance, said in December that 10,000 people would be moderating content and working on other content-related issues on the platform this year.

Another helpful step, Ghosh said, is transparency from companies about the algorithms in use on the platforms, and how platforms use personal information to target advertisements to certain groups of people. The platforms have largely kept this information under wraps, leaving consumers, legislators and the general public uninformed as to what goes on behind the curtain.

Longer-term solutions will be more challenging than hiring more people or tweaking an algorithm, Ghosh said.

“Transparency and better detection can get us there in the short- or medium-term, but in the longer term, I think disinformation agents can always adapt their approaches,” Ghosh said. “If we really want to solve this problem, we have to think about comprehensive reforms to individual privacy. We also need to start thinking about comprehensive reforms to the way that we oversee strong competition policy and enforce strong competition policy.”

The challenge will likely be one that the platforms grapple with for a long time.

“On one hand, we want the safety and security of every individual ... along with the restoration of our political integrity,” Ghosh said. “On the other hand, we want to maintain a strong commitment to free speech. … So we as a society have to tread a very fine line.”