Facebook to start using algorithms to assign users a “trustworthiness score”

Facebook will soon determine if you can be trusted.

According to the Washington Post, users will be rated on a scale from zero to one, though it’s unknown whether every user will be assigned a score. The report states the trustworthiness rating will be one of many metrics Facebook uses to improve the site. The social network will also look at which news outlets users consider trustworthy, as well as who tends to flag other people’s content as problematic. The move isn’t far off from an episode of Black Mirror, in which everyone is assigned a score that determines their quality of life.

Since the first reports of fake news on the social network, Facebook has taken many steps to combat false information on its site. The social network has been accused of letting bad actors use the platform to influence elections. Facebook is taking steps to reduce the problem in other regions as well, including Brazil, India and the E.U.

With its new rating system, Facebook faces a dilemma. Even if the company wants to be up front about how its trustworthiness scale works, it can’t: those with malicious intent could exploit the system.

“Not knowing how [Facebook is] judging us is what makes us uncomfortable,” Claire Wardle, director of the Harvard Kennedy School’s research lab First Draft, told the Washington Post. “But the irony is that they can’t tell us how they are judging us — because if they do, the algorithms that they built will be gamed.” Other Facebook guidelines, like the number of strikes an account can accrue before it is banned from the site, are kept secret for the same reason.

Facebook’s use of algorithms to determine trustworthiness could also be a point of contention for the company. Artificial intelligence is known to reflect the biases of those who train it. In the past, we’ve seen crime-predicting AI that was biased against black people and other people of color, as well as camera AI that presumed Asian people were always blinking. It isn’t just race: Google’s image search previously surfaced almost exclusively pictures of men for the search term “CEO.” The first woman to appear in the results? CEO Barbie.

With few details available about the rating system, it’s unclear what steps Facebook is taking to keep bias from seeping into its software. We’ve reached out to Facebook for comment.