U.S. Courts Are Using Algorithms Riddled With Racism to Hand Out Sentences


For years, the criminal justice community has been worried. Courts across the country are assigning bond amounts and sentencing the accused based on algorithms, and both lawyers and data scientists warn that these algorithms could be poisoned by the very prejudices the systems were designed to escape.

Until now, that concern was pure speculation. Now, we know the truth.

An investigation published Monday morning by ProPublica analyzed the risk scores assigned to thousands of defendants and found that these formulas are easier on white defendants, even when the effect of race is isolated from other factors.

"The formula was particularly likely to falsely flag black defendants as future criminals, wrongly labeling them this way at almost twice the rate as white defendants," the investigative team wrote.


The algorithms don't take race into account directly, but they rely on data that can serve as a proxy for it. The Florida algorithm evaluated in the report is based on 137 questions, such as "Was one of your parents ever sent to jail or prison?" and "How many of your friends/acquaintances are taking drugs illegally?"

Those two questions, for example, can appear to measure someone's empirical risk of criminality, but in practice they single out people already living under institutionalized poverty and over-policing. Predominantly, those people are people of color.
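To see how a question like that can act as a proxy, consider a deliberately simplified sketch. This is not Northpointe's model; the two groups, the 30% base rate and the single "acquaintances with arrests" feature are all invented for illustration. A risk model trained with no race variable at all still scores one group as riskier, simply because heavier policing inflates the proxy feature for that group.

```python
# A deliberately simplified sketch, not Northpointe's actual model: the groups,
# the 30% base rate and the single proxy feature are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

# Two groups with identical underlying rates of reoffending.
group = rng.integers(0, 2, size=n)        # 0 = group A, 1 = group B
reoffends = rng.random(n) < 0.30          # same 30% base rate for everyone

# Proxy feature: "how many of your acquaintances have been arrested?"
# Heavier policing of group B inflates the count regardless of behavior.
acquaintance_arrests = rng.poisson(lam=1 + 2 * group + reoffends)

# Train a "race-blind" risk model on the proxy feature alone.
X = acquaintance_arrests.reshape(-1, 1)
model = LogisticRegression().fit(X, reoffends)
risk_score = model.predict_proba(X)[:, 1]

# Among people who never go on to reoffend, group B still gets higher scores.
for g, name in [(0, "group A"), (1, "group B")]:
    innocent = (group == g) & ~reoffends
    print(f"mean risk score, non-reoffending {name}: {risk_score[innocent].mean():.2f}")
```

Nothing about either group's actual behavior differs in this toy setup; the gap in scores comes entirely from the proxy.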

"[Punishment] profiling sends the toxic message that the state considers certain groups of people dangerous based on their identity," University of Michigan law professor Sonja Starr wrote in the New York Times in 2014. "It also confirms the widespread impression that the criminal justice system is rigged against the poor."

The algorithm itself, of course, was not available for audit. Algorithms that inform decisions in the public sector are often developed and protected by private companies. Northpointe, the for-profit company that created the algorithm examined by ProPublica, told ProPublica that it does not agree with the results of the analysis or that they "accurately reflect the outcomes" of its product.


But the controversy over sentencing is just one early instance of a growing conversation about bias in the algorithms that decide everything from what news we see to how and where we travel.

It's time to talk about algorithms: Algorithms seem impervious to the insidious influence of racism and prejudice, the human failings that can unconsciously creep into our fallible decision-making processes. Evaluations that come from algorithms imply that the results are scientific, spat out by a cold computer working only with evidence. The process of sentencing by algorithm is even formally referred to as "evidence-based sentencing."

"Scores give us simplistic ways of thinking that are very hard to resist," Cathy O'Neil, a data scientist and author of the upcoming book Weapons of Math Destruction, said by phone. "If you assign people scores and someone has a low score, it's human nature to assign blame to that person, even if that score just means they were born in a poor neighborhood."


But just because algorithms are mathematical in nature doesn't mean they're free from human bias. Algorithms spot and amplify patterns in human behavior, and they do it by looking at the data created by human behavior. Predictive policing algorithms that help police chiefs assign their patrols rely on crime statistics and records generated by police behavior, eventually amplifying the prejudicial behaviors that led to that data in the first place.
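Here is a toy sketch of that feedback loop, with every number and assumption invented purely for illustration; it does not model any real predictive policing product. Two neighborhoods have identical true crime rates, patrols are sent wherever recorded crime is highest, and incidents are only recorded where officers are present to see them.

```python
# A toy feedback loop, invented for illustration; it does not model any real
# predictive policing product.
import numpy as np

rng = np.random.default_rng(1)
true_crime_rate = np.array([0.1, 0.1])   # identical true rates in both places
recorded_crime = np.array([5, 4])        # small historical imbalance in the records
patrol_days = np.array([0, 0])

for day in range(365):
    # "Algorithm": send the patrol wherever recorded crime is highest.
    target = int(np.argmax(recorded_crime))
    patrol_days[target] += 1
    # Incidents are only recorded where an officer is present to observe them.
    recorded_crime[target] += rng.poisson(10 * true_crime_rate[target])

print("patrol days:    ", patrol_days)       # roughly [365, 0]
print("recorded crime: ", recorded_crime)    # the gap keeps widening
```

The allocation never corrects itself, because the data the algorithm learns from is itself a product of where it sent officers the day before.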

As more news emerges of bias in algorithms — whether it's the potential anti-conservative bias of Facebook's news algorithm or pricing schemes that charge Asian communities more for SAT tutoring — the world is further disabused of the idea that algorithms can't be as skewed as human reasoning.

Often, they are skewed in precisely the same way we are.