Facebook's Safety Check needs to check itself — here are the feature's 4 big problems


Around 6 p.m. Wednesday, a concerned friend in Texas called me. "Are you OK?" she said. "I got notified that there was some train accident in Brooklyn. Or was it a glitch?"

She'd received an alert after a Facebook friend marked herself "safe" in an event called "The Train Accident in Brooklyn, New York." When my Texas friend navigated to the Safety Check page, it showed a list of her friends "not marked safe yet" — including me. 

Facebook's Safety Check feature debuted during the Nepal earthquake in April 2015. "When disasters happen, people need to know their loved ones are safe," Facebook CEO Mark Zuckerberg wrote when the feature was activated. "It's moments like this that being able to connect really matters."

Deployed sparingly, Safety Check felt like a hub for solidarity and reassurance amid devastating crises. But now that Facebook has handed over the reins to users, the tool strikes me as yet another ineffective and disingenuous method by which Facebook manipulates its audience. 

Here are four major reasons why.

1. It gamifies safety

The first problem lies in how Facebook keeps users coming back to Safety Check: with the fear that their friends and family might be injured or dead. Facebook has made a manipulative guessing game out of who may or may not be safe. Click to find out!

It jolts you with the announcement of a dangerous but vague incident, like this week's Long Island Rail Road train derailment.

First, you receive notifications — this one came in about 11 hours after the actual event — saying your friends have marked themselves safe.

[Screenshot: Mic/Facebook]

When you click through, it teases the idea that some friends — but not all — are OK.

[Screenshot: Mic/Facebook]

Then, your loved ones might be alarmed to see your face underneath a section called "not marked safe yet." Did Melanie survive?

[Screenshot: Mic/Facebook]

Facebook also prompts you to use another of its own services to ask whether your friend is safe.

[Screenshot: Mic/Facebook]

It's a labyrinthine combination of FOMO and fearmongering — and in events that produce no serious injuries, it creates unnecessary panic.

Which brings us to the second problem:

2. It lowers the bar for what counts as a dangerous incident


In November, Facebook changed Safety Check so it could be triggered automatically by posts from the community: "a lot of people in the area" need to be talking about the incident, and Facebook's "trusted third party" must confirm an incident occurred. If both of those boxes are checked, Safety Check activates on its own.

Facebook no longer has to judge which events warrant a Safety Check. That user control can be helpful for highlighting crises in areas lacking mainstream media coverage, but Facebook's hands-off approach to the service can lead to a frustrating lack of clarity and vital information.

The process looks something like this: People are talking about it? The third-party source confirms something in the area happened? Great, now send out a vague push notification to millions of people, saying someone they love might be unsafe. Is it a deadly hurricane? A mass shooting? Someone throwing firecrackers off a roof? A train crash in which one woman broke a leg? That's for you to find out.
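Taken at face value, the process Facebook describes boils down to a two-condition gate followed by a mass fan-out, with severity nowhere in the logic. Here's a minimal sketch of that flow in Python; every name, threshold, and data structure below is hypothetical, invented for illustration rather than drawn from Facebook's actual code:

```python
# Hypothetical sketch of the community-triggered Safety Check flow
# described above. All names, thresholds, and structures are invented
# for illustration; this is not Facebook's actual code.

from dataclasses import dataclass, field

CHATTER_THRESHOLD = 10_000  # what counts as "a lot of people" is unspecified


def should_activate(area_posts: int, third_party_confirmed: bool) -> bool:
    """The only two checks Facebook describes: local chatter volume
    and third-party confirmation. Severity is never part of the gate."""
    return area_posts >= CHATTER_THRESHOLD and third_party_confirmed


@dataclass
class User:
    name: str
    near_incident: bool = False
    friends: list["User"] = field(default_factory=list)

    def push(self, message: str) -> None:
        print(f"[push -> {self.name}] {message}")


def fan_out(incident_title: str, users: list[User]) -> None:
    """Alert anyone with at least one friend near the flagged area,
    regardless of how serious the incident actually was."""
    for user in users:
        if any(friend.near_incident for friend in user.friends):
            user.push(f"Friends have marked themselves safe during "
                      f"{incident_title}. Are your other friends OK?")


# A derailment with one minor injury clears the same bar as a disaster.
if __name__ == "__main__":
    melanie = User("Melanie", near_incident=True)
    texas_friend = User("Texas friend", friends=[melanie])
    if should_activate(area_posts=25_000, third_party_confirmed=True):
        fan_out("The Train Accident in Brooklyn, New York", [texas_friend])
```

The point of the sketch is what's missing: nothing in the gate distinguishes a train crash with one broken leg from a mass-casualty attack.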

3. Notification delays keep users worried

I wasn't immediately notified to mark myself safe following the LIRR crash, which resulted in only minor injuries. I ultimately decided not to mark myself "safe" at all, since doing so might have signaled to loved ones on Facebook that I'd been in danger to begin with.

But for Facebook's billion-plus users around the world, as long as they have a single Facebook friend located near a community-flagged "incident," they'll likely get a notification claiming someone they love might be unsafe.

Sometimes, however, the prompt never comes at all. After an incident, the person in the affected area ought to get an immediate prompt to say "Yes, I'm fine," rather than waiting for a friend to ask after being alarmed by a notification that they might not be safe. It creates a circular and largely unnecessary cycle of concern; naturally, it also drives more user engagement on Facebook.
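The fix being argued for here is essentially an ordering change: prompt the person in the area first, and only alert their friends if there's no response. A rough sketch, reusing the hypothetical `User` from the earlier example; the `marked_safe` flag and 30-minute window are likewise invented:

```python
# Sketch of the ordering argued for above: the person closest to the
# incident speaks first. All names and the timeout are hypothetical.

import time

RESPONSE_WINDOW_SECS = 30 * 60  # give the local user 30 minutes to respond


def check_in(local_user, incident_title: str) -> None:
    """Prompt the affected person first; alarm friends only if there's
    no response within the window."""
    local_user.push(f"Are you OK? Mark yourself safe: {incident_title}")
    deadline = time.time() + RESPONSE_WINDOW_SECS
    while time.time() < deadline:
        if getattr(local_user, "marked_safe", False):
            return  # friends simply see "safe"; no scary alert goes out
        time.sleep(60)  # poll; a real system would be event-driven
    for friend in local_user.friends:
        friend.push(f"{local_user.name} hasn't checked in: {incident_title}")
```

A production system would be event-driven rather than polling, but the ordering is the point: no one's loved ones get frightened before the person on the ground has had a chance to answer.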

4. Facebook refuses to moderate the spread of misinformation


The most frustrating aspect of all this? Facebook's fear of being held responsible for editorial decisions is breeding fear among users. 

Facebook's Safety Check started as an invaluable tool in the wake of genuinely fatal tragedies and disasters, including the Paris attacks and the earthquake in central Italy. But since Facebook handed the system over to users and an algorithm, Safety Check has become the tool that cried wolf. The consequence is potentially dangerous misinformation.

Following the Berlin truck crash in December, Facebook quietly changed the wording of the Safety Check feature to "violent incident" after calling it an attack before the authorities did. This signaled that while Facebook leadership has refused to acknowledge its influence as a media company, someone at Facebook recognized its power.

Then, about a week after that, Facebook triggered Safety Check for an event titled "The Explosion in Bangkok." It pointed to a Bangkok Informer article that linked to the 2015 Erawan Shrine bombing. There was an explosion in Thailand that activated the feature, Facebook confirmed: A protester had thrown firecrackers into the Government House. No one was injured.

Facebook needs to check itself before users stop taking Safety Check seriously. It should retake control of the product before it signals to millions more people that their loved ones might've been injured or killed in a harmless incident.

But Facebook has shown a stubborn reluctance to moderate the information on its platform. With Safety Check — just like the fake-news disaster — Facebook's not responsible for what you read. It's just the messenger.