Protesters are using facial recognition technology to ID police
For years, law enforcement agencies at the state and federal levels have used often faulty and always creepy facial recognition technology to try to identify civilians. Now the tables are being turned. According to a report from the New York Times, activists in cities around the world that have been ground zero for social justice movements are using facial recognition technology to identify police officers.
In Portland, where months of protests against police brutality this summer led to tense standoffs between demonstrators and over-aggressive law enforcement, the use of facial recognition technology became a symbolic front in the fight against police overreach. And while the city's police department allegedly conducted illegal video surveillance of protesters, a local activist was developing a project that would put the cops under the same scrutinizing eye. Christopher Howell, identified by the Times as a lifelong local activist and self-taught coder, has been building an artificial intelligence system that could be used to identify police officers.
The need for such a program is obvious, beyond the simple satisfaction of putting eyes on Big Brother. Police in Portland, widely criticized and condemned for abusive and aggressive tactics in response to protests, have been hiding their own identities from those they supposedly protect. According to a report from The Oregonian, the city's police chief gave officers permission to cover the name tags on their uniforms. Protesters reported violent run-ins with police and found themselves with no way to identify the officers who had beaten or tear-gassed them. The issue was made worse when the Trump administration sent federal law enforcement agents into the city; they often traveled in unmarked vehicles and offered no indication of who they were or what agency they worked for. In one case, federal agents reportedly failed to identify themselves before firing at and ultimately killing an activist accused of involvement in the shooting of a right-wing protester.
If the police wouldn't identify themselves, Howell wanted to give protesters a tool to help them find out who they were dealing with during police interactions. He started building a neural net that could learn to identify officers from publicly available images of people who work for the Portland police force. Howell scraped news articles, city websites, and social media, particularly Facebook. He has already gathered thousands of images through the effort and used them to train his facial recognition tool. So far, according to the Times, his neural net can recognize about 20 percent of the force. He hasn't made the technology publicly available yet, but he is continuing to work on it and has reportedly already used it to identify at least one officer involved in a run-in with a friend.
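To give a sense of how this kind of system works under the hood, here's a minimal sketch of a face-matching pipeline built on the open-source face_recognition Python library. To be clear, this is not Howell's code or architecture, which the Times doesn't detail; the gallery folder, file naming scheme, and tolerance value are all illustrative assumptions.

```python
# Illustrative sketch only -- NOT Howell's system. Assumes the open-source
# `face_recognition` library (pip install face_recognition) and a hypothetical
# folder of reference photos named like "officer_name.jpg".
from pathlib import Path

import face_recognition


def build_gallery(image_dir: str):
    """Encode one face per reference photo into a 128-dimensional embedding."""
    names, encodings = [], []
    for path in sorted(Path(image_dir).glob("*.jpg")):
        image = face_recognition.load_image_file(str(path))
        faces = face_recognition.face_encodings(image)
        if faces:  # skip photos where no face was detected
            names.append(path.stem)
            encodings.append(faces[0])
    return names, encodings


def identify_faces(photo_path: str, names, encodings, tolerance: float = 0.6):
    """Return gallery names whose embeddings fall within `tolerance`
    (Euclidean distance) of any face found in the new photo."""
    matches = []
    if not encodings:  # empty gallery, nothing to compare against
        return matches
    image = face_recognition.load_image_file(photo_path)
    for unknown in face_recognition.face_encodings(image):
        distances = face_recognition.face_distance(encodings, unknown)
        best = int(distances.argmin())
        if distances[best] <= tolerance:
            matches.append(names[best])
    return matches


if __name__ == "__main__":
    names, encodings = build_gallery("reference_photos/")
    print(identify_faces("protest_photo.jpg", names, encodings))
```

The hard part in practice isn't the matching code, which off-the-shelf libraries handle; it's assembling a labeled gallery, which is exactly the scraping work (news photos, city websites, Facebook) that appears to have consumed most of Howell's effort, and likely why his net still covers only about a fifth of the force.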
Howell isn't the only activist who has decided to use facial recognition to hold police accountable. In Hong Kong, an activist used photos of police officers scraped from the internet to build a system that could identify them; once the effort was made public, the activist was arrested and eventually abandoned the project. Earlier this month, Italian conceptual artist Paolo Cirio unveiled an art project called "Capture" that included more than 4,000 faces of French police officers. Cirio described the project as the first step toward a facial recognition application that could be used to identify officers. It was ultimately shut down when Gérald Darmanin, France's interior minister, threatened to take legal action against the artist.
In all of these cases, most of the images of police were sourced from the internet, scraped from publicly available websites and social media. That's the same way Clearview AI, a company that claims to have contracts with more than 2,400 police agencies, assembled much of its database of more than three billion faces. Police have largely opposed bans on facial recognition software, claiming it hampers their ability to do their jobs. They may be far less enthusiastic about being identified by the very technology they use on citizens. If so, it might be time to take a long, hard look in the mirror rather than the monitor.