The concept can be seen throughout society, including one particularly disturbing area: predictive policing. Need to figure out where to deploy police and when? Want to know who is “likely” to commit crimes, or even prevent crimes before they happen?
Throw an algorithm at it.
Proponents say that making policing decisions via algorithms can solve crimes faster, more effectively, and without human error.
But predictive policing is technochauvinism at its worst.
As Artificial Unintelligence author and NYU professor Meredith Broussard said at Impact Labs’s 2018 Impact Summit, “We thought that [tech was the best solution] for a really long time, but we can look around now at the world we’ve created, and we can say it’s much more nuanced than that.”
So what exactly is predictive policing?
Place-based predictive policing draws on pre-existing crime data to determine which neighborhoods — and times — have high crime rates. Crimes can be weighted within these algorithms so that one counts as “worse” than another. For example, an algorithm can rate loitering as worse than jaywalking.
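To make the weighting concrete, here is a minimal sketch in Python. Everything in it (the crime types, the weights, the neighborhood names, and the incident log) is invented for illustration; real tools rely on proprietary data and undisclosed weighting schemes.

```python
from collections import defaultdict

# Hypothetical weights: invented for illustration only.
CRIME_WEIGHTS = {
    "loitering": 2.0,    # rated "worse" in this toy scheme
    "jaywalking": 1.0,
}

# Toy incident log standing in for historical crime data.
incidents = [
    ("Northside", "loitering"),
    ("Northside", "loitering"),
    ("Southside", "jaywalking"),
    ("Southside", "jaywalking"),
    ("Southside", "jaywalking"),
]

def hotspot_scores(incident_log, weights):
    """Sum weighted incident counts for each neighborhood."""
    scores = defaultdict(float)
    for neighborhood, crime_type in incident_log:
        scores[neighborhood] += weights.get(crime_type, 1.0)
    return dict(scores)

print(hotspot_scores(incidents, CRIME_WEIGHTS))
# {'Northside': 4.0, 'Southside': 3.0}
# Two loitering reports outrank three jaywalking reports, so patrols get
# steered to Northside purely because of the chosen weights, and the extra
# patrols there generate more recorded incidents to feed back in.
```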
But there’s also person-based predictive policing, which tries to foretell which individuals or groups are more likely to commit a crime.
The practice dates back to at least 1973, when the Kansas City Police Department used computer data from its ALERT II system to launch Operation Robbery Control, which mapped past robberies to “predict” future ones in the city.
Now, such tools are fairly widespread; per the Brennan Center, they’re mainly used by municipal police departments, but “private vendors and federal agencies play major roles in their implementation.”
One of the most infamous predictive policing tools is COMPAS, developed by the software company Equivant (formerly Northpointe), which uses an algorithm to generate risk scores for recidivism.
In 2016, a ProPublica investigation analyzing Florida COMPAS data found that only “20% of the people predicted [by the algorithm] to commit violent crimes actually went on to do so.”
Even though 80% of those predictions — which generally rated Black people as more potentially dangerous than white people — were wrong, COMPAS scores still influenced people’s sentences.
For example, ProPublica reported that in Wisconsin, Judge Scott Horne noted a defendant had been “identified, through the COMPAS assessment, as an individual who is at high risk to the community.” The judge issued a sentence of eight years and six months in prison.
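To put ProPublica’s 20% figure in perspective, here is a back-of-the-envelope sketch; the precision comes from the reported finding, while the cohort size of 1,000 flagged people is purely hypothetical.

```python
# The 20% precision figure is ProPublica's reported finding; the cohort
# size of 1,000 is hypothetical, chosen only to make the arithmetic visible.
flagged = 1_000          # people the algorithm labels likely to commit violent crimes
precision = 0.20         # share of those predictions that came true
reoffended = round(flagged * precision)
wrongly_flagged = flagged - reoffended

print(f"{reoffended} of {flagged} flagged people went on to commit a violent crime")
print(f"{wrongly_flagged} carried the same 'high risk' label but never did")
# 200 of 1000 flagged people went on to commit a violent crime
# 800 carried the same 'high risk' label but never did
```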
In 2008, the Los Angeles Police Department began working with federal agencies to explore predictive policing. Several years later, it launched Operation LASER (Los Angeles Strategic Extraction and Restoration), which targeted both specific individuals and specific areas called “LASER zones.”
The Stop LAPD Spying Coalition obtained a list of those targeted, which showed “nearly half ... are Black (even though Black people are 9% of the city’s population), some were as young as 16, and many are unhoused.”
In 2021, the LAPD ended its predictive policing programs.
Like facial recognition, predictive policing is a form of technology that can never be made “better.” Calls to improve predictive policing algorithms are much like calls to continuously reform the police: both assume a fundamentally harmful system can be fixed rather than ended. Predictive policing simply gives new excuses to cause harm, using the frame of technology as “neutral.”
Like policing as a whole, predictive policing must go.
Predictive policing is a self-fulfilling prophecy. ... This system is tailor-made to further victimize communities that are already overpoliced — namely, communities of color, unhoused individuals, and immigrants — by using the cloak of scientific legitimacy and the supposed unbiased nature of data.
If you’re still wrestling with the idea of dismantling predictive policing versus fixing it, give this MIT Technology Review article a read.
You can also check out this community-led report by the Stop LAPD Spying Coalition on surveillance and policing in L.A.