Police Are Using Machine Learning To Flag Officers As Potential Risks


A police department in North Carolina is developing an algorithm intended to help prevent police brutality.

The Charlotte-Mecklenburg Police Department is developing a machine learning system that will flag police officers who are at risk for adverse events, "such as deadly shootings or instances of racial profiling," the research paper notes. The system, called the Early Intervention System, treats officers' individual characteristics as well as situational and neighborhood factors as predictors of such adverse events.
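To make those factor categories concrete, here is a minimal sketch of what a predictor record combining the three types of inputs could look like. The field names below are hypothetical illustrations, not taken from the CMPD system or the paper.

```python
from dataclasses import dataclass

@dataclass
class OfficerRecord:
    """Hypothetical predictor record; field names are illustrative only."""
    # Officer characteristics
    years_of_service: int
    prior_complaints: int
    # Situational factors
    recent_high_stress_calls: int     # e.g., suicide or domestic-violence calls
    consecutive_shifts_worked: int
    # Neighborhood factors
    beat_violent_crime_rate: float
    beat_poverty_rate: float
```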

The research paper notes that, as it stands, police departments lean on "expert intuition" to determine if a police officer may be a danger to the public, citing "limited resources" as a reason why there isn't a better intervention system. 

This data-driven approach also aims to prevent police violence by pinpointing at-risk officers before an "adverse event" occurs. At the time of writing, 583 people have been shot and killed by police in 2016, according to the Washington Post's count, which is based on news reports, public records, social media and additional sources.


"We are focusing on identifying officers at risk for having an adverse interaction with our citizens and using that information to provide support to our officers in the way of training, counseling and other types of interventions," Crystal Cody, Computer Technology Solutions Manager at the CMPD said in an email. "The ability to identify these risks makes us a better police agency and in turn makes our citizens and officers safer."

Cody said the department is seeing positive results from the system: the machine learning models are more accurate both at identifying at-risk officers and at avoiding false alarms, with a false-positive rate 32% lower than that of the existing system. The CMPD's previous system was threshold-based, which is neither predictive nor capable of identifying preventable incidents; Cody said it could only look at limited sets of data and didn't give the department visibility over longer periods of time. Instead, it simply flagged officers when the number of adverse events exceeded a set cutoff within a fixed window (e.g., three uses of force in 90 days).
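A threshold rule of that kind is simple enough to express in a few lines. The sketch below is an illustration built around the "three uses of force in 90 days" example, not the CMPD's actual implementation.

```python
from datetime import date, timedelta

def threshold_flag(incident_dates: list[date],
                   window_days: int = 90,
                   min_events: int = 3) -> bool:
    """Flag an officer if at least `min_events` incidents fall within any
    rolling window of `window_days` days (illustrative, not the CMPD code)."""
    events = sorted(incident_dates)
    window = timedelta(days=window_days)
    for i, start in enumerate(events):
        # Count incidents from this one forward that fall inside the window.
        in_window = sum(1 for d in events[i:] if d - start <= window)
        if in_window >= min_events:
            return True
    return False

# Three uses of force within 90 days trigger a flag.
print(threshold_flag([date(2016, 1, 5), date(2016, 2, 10), date(2016, 3, 20)]))  # True
```

A rule like this only reacts after incidents have already accumulated, which is the limitation Cody describes; the machine learning system instead tries to estimate risk before the cutoff is ever reached.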

Here's how the new system works: Data is fed into the model, which uses machine learning to determine the risk factors that lead to adverse events. This data includes all of the police department's records on an individual, New Scientist reported, such as "details from previous misconduct and gun use to their deployment history, such as how many suicide or domestic violence calls they have responded to." Cody said the department anticipates that "supervisors will be able to provide real-time feedback to the model which will factor into future predictions and improve the performance further over time."
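The paper's code isn't reproduced here, but the workflow Cody describes (historical records in, risk scores out, supervisor feedback folded back in) can be sketched with a generic classifier. Everything below, from the synthetic feature matrix to the random-forest choice and the retraining step, is an assumption for illustration rather than the CMPD model.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic stand-in for departmental records: one row per officer-period,
# columns are hypothetical features (complaints, high-stress calls, etc.).
X = rng.random((500, 6))
y = rng.integers(0, 2, size=500)           # 1 = an adverse event later occurred

# Fit a generic classifier; the paper's actual model choice may differ.
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X, y)

# Score current officers and surface the highest-risk cases for review.
risk_scores = model.predict_proba(X)[:, 1]
flagged = np.argsort(risk_scores)[::-1][:10]    # ten highest risk scores

# Supervisor feedback on flagged cases (confirmed vs. dismissed) would be
# recorded as new labels and folded into the next retraining run.
confirmed = rng.integers(0, 2, size=flagged.size)   # placeholder feedback
X = np.vstack([X, X[flagged]])
y = np.concatenate([y, confirmed])
model.fit(X, y)   # periodic retraining with feedback included
```

In a real deployment the features would come from department records rather than random numbers, and the highest-scoring officers would be routed to supervisors for training, counseling or other interventions, as Cody describes.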

The CMPD wants to extend its system to other departments: it will be working with the Los Angeles Sheriff's Department and the Knoxville Police Department, the paper notes, and will make the system open source for others interested in building on it. The paper also states that the CMPD is in talks with "several other departments across the U.S." about the system.

It is important to note that algorithms are not free from bias: this year, a ProPublica investigation found racial bias in a risk-assessment algorithm used by U.S. courts in sentencing decisions. When asked whether the CMPD's algorithm is completely unbiased, Cody said, "I would say yes."

Frank Pasquale, who studies the social implications of information technology at the University of Maryland, told New Scientist, on the other hand, that such a system can incorporate biased data and "can't just be an automatic number cruncher." Once an officer is flagged as a potential risk, they should have the opportunity to respond; the machine doesn't get the last word. But it's here to help.