Police Are Sweeping Up Tweets and Friending You on Facebook, Whether You Know It or Not
In 2010, Jeffrey Lane noticed a shift in police behavior in Central Harlem. There was a group of local kids who hung out on the corner of 129th Street and Lenox Avenue, sandwiched between their rivals at the Saint Nicholas housing projects to the west and Lincoln housing to the east. Lane was doing community outreach with a pastor who was jumping on social media to keep in touch with the kids.
An invite to a cookout in another part of northern Manhattan was posted online, and squad cars arrived in the neighborhood just in time.
As the streets of Harlem moved to the social Web, the police donned digital masks and followed. They tapped into Twitter and Facebook, posing as members of the community. The day of the cookout, officers picked up the neighborhood kids for questioning. They asked them about their posts to Facebook. They asked who their friends were.
Over the next five years, text and photos pulled from social media would become a regular tool for predicting and prosecuting gang violence and violent crime in New York City and across the country.
But building relationships, setting up fake profiles, monitoring event invites — all this is relatively analog work.
In 2015, on the front lines of predictive policing, officers are using algorithms to sweep up the world's social media posts, map them out and use them to predict where crime is most likely to occur. And a new class of tech startups aims to take the wide world of public social media posts and turn them into a living map that will direct police toward crime before it even happens.
Pre-crime maps help police predict crime as if it were the weather. Every day, police precincts in major metropolitan cities generate maps that show them where certain types of crime are likely to occur in various "hot spots" around the city.
Some of these mapping systems rely on historical crime data and won't go anywhere near social media. Larry Samuels, CEO of the popular predictive crime-mapping tool PredPol, told Mic that using social media is a line he's unwilling to cross. It's a veritable Pandora's box, he said, introducing chaotic variables whose value hasn't been proven empirically enough to justify using them.
"Philosophically, the people who run this company have very strong beliefs about not crossing that line, and about constitutionality — being what we are and nothing more," Samuels told Mic.
But newer systems like Hitachi's Visualization Predictive Crime Analytics fold in social media data such as public tweets. Hitachi's system analyzes geotagged tweets, looking for common terms it can link to increased crime.
So if there's known gang activity in an area, the algorithm sweeps through public tweets for terms, vernacular and "off-topic" words to give more weight to a hot spot. But using a pattern of speech as a pretext for enhanced policing is an algorithmic way to group people by common characteristics, casting guilt by style of speech or even association.
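Hitachi hasn't published exactly how that weighting works, so the sketch below is only a rough, hypothetical illustration of the general idea: scan geotagged tweets for a watch list of terms, and let each hit nudge up the score of the surrounding map cell on top of a baseline drawn from historical crime data. The watch list, grid size and weights here are invented for the example.

```python
from collections import Counter, defaultdict

# Hypothetical illustration only: Hitachi has not published its model.
# Tweets are (lat, lon, text) tuples; `baseline` maps grid cells to scores
# derived from historical crime data. Watch list, grid size and weight are invented.

WATCH_TERMS = {"strapped", "opps", "slide"}  # placeholder terms, not any vendor's real vocabulary


def grid_cell(lat, lon, size=0.005):
    """Snap a coordinate onto a coarse grid cell a few blocks wide."""
    return (round(lat / size) * size, round(lon / size) * size)


def score_hot_spots(tweets, baseline, term_weight=0.1):
    """Bump a cell's baseline score for every watched term found in its geotagged tweets."""
    scores = defaultdict(float, baseline)
    for lat, lon, text in tweets:
        hits = Counter(word for word in text.lower().split() if word in WATCH_TERMS)
        scores[grid_cell(lat, lon)] += term_weight * sum(hits.values())
    return dict(scores)


tweets = [(40.810, -73.943, "we out here strapped tonight"),
          (40.810, -73.944, "beautiful morning on Lenox")]
baseline = {grid_cell(40.810, -73.943): 0.6}  # pretend historical-crime score for one cell
print(score_hot_spots(tweets, baseline))
```

Even in this toy form, the problem critics describe is visible: the map gets hotter wherever people happen to talk a certain way, not wherever crime actually happens.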
"You have public functions being conducted by private companies without public oversight or transparency, with vast implications for constitutional rights."
Rachel Levinson-Waldman, senior counsel at the Brennan Center for Justice, a public policy institute, told Mic, "Even if it is all 'gang-related,' what does it mean to feed in information that says, 'We know that Person A is a member of a gang, and they exchange tweets with Person B?' Are we going to put Person B under surveillance or keep an eye on them?" To her, the question is, "To what extent is there oversight of how that circle of surveillance grows?"
Using social media to decide who is or isn't a criminal opens up the door for prejudicial police profiling.
David Robinson is a partner at Upturn, a firm that helps the civil rights community with policy issues around technology. "When we think about boys of color using social media, under such intensive surveillance and subject to such disparaging assumptions about them, it's a recipe for prejudicial results," Robinson told Mic. "On the Internet, insular cultures emerge. Someone could use slang or make a joke, and there's such an opportunity for signals to be misinterpreted."
And including social media gives predictive-policing systems a tinge of unconstitutionality. The Fourth Amendment prevents policing without reasonable suspicion, and the 14th Amendment prevents profiling of individuals based on group characteristics. But constitutional restrictions apply only to the state: if police themselves were developing these surveillance technologies, there would be a potential infringement on constitutional rights.
But the outsourcing of surveillance to private corporations diverts responsibility away from police departments and onto small companies that can't be held accountable under constitutional prohibitions.
"You have public functions being conducted by private companies without public oversight or transparency, with vast implications for constitutional rights," Shahid Buttar, a constitutional lawyer and director of grassroots efficacy for the Electronic Frontier Foundation, told Mic. "That's a loss for transparency, and it's a loss for people impacted by models that paint them with suspicion based on group characteristics."
That's if social media data is even valuable. Including tweets in crime mapping might prove more distraction than innovation.
Matthew Williams is a researcher of computational criminology at Cardiff University in Wales. He calls the methods used by new crime-mapping systems "unsupervised": The more connections they make between random phrases and criminal events, the more convoluted and useless the systems become, he says.
"When you add more and more data into these models, you'll have correlation in the data set where none actually exist," Williams told Mic. "Or you get correlations between sets of words that mean very little. So you may see strong correlation between certain key terms in tweets and criminal damage, but they'll seem completely random to an analyst."
Even with marginal increases in prediction, the result won't mean anything to police on the ground, other than pointing to a map and saying, "Go here to find some crime." Williams says that in order to make use of social media, you have to apply conventional policing wisdom — criminological theories that have been tried and tested — and see if social media can suss out patterns that cops already understand.
Take "broken windows" policing, a theory that simply says that the physical degradation of a neighborhood, whether it's vandalism or urban decay, is correlative with high crime. Williams claims that his team at Cardiff is working with a successful model that carefully analyses tweets for specific indicators of a beat-up neighborhood, using people as unwitting sensors of real-world degradation.
"Give a cop a bunch of keyboards that just correlate to an area, and he won't make sense of it," Williams told Mic. "But give him a theoretical construct like collective advocacy or broken windows, and he knows what those are."
In the course of his research, Williams found a series of factors that inevitably bias certain neighborhoods over others when mapping tweets. This is an excerpt from his forthcoming, not-yet-published research:
This bias can have four elements when it comes to detecting signs of neighborhood decline or crime via Twitter in particular: i) varying perceptions of local signs of degeneration; ii) knowledge of Twitter; iii) tendency to use Twitter; and iv) tendency to broadcast such issues on Twitter.
In other words: Not every neighborhood or population is evenly inclined to use Twitter. Furthermore, you won't see people reporting things they take for granted and wouldn't think to share on social media, like vandalism in areas where there has always been vandalism.
Beyond that, there's simply been little to no peer-reviewed research that reliably shows Twitter data enhances predictive policing methods. A study released last April by the University of Virginia collected public tweets and layered them into traditional geographic models that use a technique called kernel density estimation; it showed a marginal improvement in predicting crimes like stalking and theft, but worse performance on crimes like arson and kidnapping.
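Kernel density estimation is the conventional hot-spot technique those geographic models start from: past incident locations are smoothed into a continuous risk surface, and the Virginia study layered tweet-derived features on top of that baseline. Below is a minimal sketch of the KDE step alone, using SciPy and made-up coordinates; it is not the study's actual model.

```python
import numpy as np
from scipy.stats import gaussian_kde

# Baseline hot-spot mapping with kernel density estimation (KDE): historical
# incident locations are smoothed into a continuous risk surface. Studies like
# Virginia's layer tweet-derived features on top of a surface like this one.
# The coordinates below are made up for illustration.

incidents = np.array([
    [40.810, 40.812, 40.790, 40.805, 40.800],        # latitudes of past incidents
    [-73.940, -73.948, -73.950, -73.939, -73.945],   # longitudes of past incidents
])

kde = gaussian_kde(incidents)  # Gaussian kernels; bandwidth picked automatically (Scott's rule)

# Estimated incident density at two candidate patrol locations:
candidates = np.array([[40.808, 40.795],
                       [-73.944, -73.960]])
print(kde(candidates))  # higher value = denser history of nearby incidents
```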
Meanwhile, police are secretly combing through social media for their detective work. Old-fashioned policing deals with small data — insights gleaned from patrolling the streets, getting to know neighborhoods, using your senses. Modern police officers use dummy accounts (or not) to check in on local events and community relations on Facebook and Twitter the same way a beat cop of 20 years keeps an eye out for strange faces along his patrol or checks in with the local businesses.
"All of the relationships and aspects of street life are networked online as well," Lane told Mic. "You couldn't really be a cop just by being on the street, because you wouldn't be seeing street life. Whether you're a teenager or a pastor or a cop, you have to be connected to the digital side, because the street is online now."
It would be unreasonable to expect otherwise. Social media is now routinely entered into evidence, and chasing down social media accounts and mapping associations can lead to major arrests, raids and drug busts.
But Lane says that in the communities where he's done outreach, the police are outpacing the other parties and interests who could be using social media insights and predictive policing tools to make a positive impact on young people, like youth outreach workers, clergy and social services. Police become the first responders to trouble when it surfaces.
"We have policy that says police can follow kids, but public school teachers can't," Lane told Mic. "Now, there are good reasons for that, but I don't think we want police to be that far out ahead of concerned adults who can be involved in the lives of teenagers, but who don't lock people up."
A makeshift methodology for what officers are and aren't allowed to do is popping up as detectives hop on Facebook under aliases. Freedom of Information Act requests reveal that in New York City, there is a robust internal policy for setting up fake profiles for catfishing young people into providing actionable intelligence.
The NYPD denied Mic's requests for comment.
No standards, few precedents, little transparency: Social media policing is so young that most precincts are improvising their standards and practices. Among police officers who use social media as part of their regular investigations, a vast and growing majority are self-trained, according to a study last year from LexisNexis (which, ironically, acquired its own predictive policing startup in January).
And even if layering social media data into mapping systems is ineffective and open to bias, marrying sweeping algorithmic dragnets with facial databases, biometric records and crime-history data points toward the eventual ambition of digital surveillance: predictive policing that moves past neighborhoods and starts targeting individuals.
"We're less excited by geographic prediction," Deputy Chief Jonathan Lewin, chief technology officer of the Chicago Police Department, told Mic. The next big thing is "subject-based prediction. Merging those two things is one of the most exciting things in police technology."
The question of whether or not this data should be used, or even can be, brings us back to the same question social media has always posed, and that we in 21st-century society have barely resolved. How seriously are we supposed to take our public expressions — nuanced, riddled with memes and jargon, meant for a makeshift community, made in semi-public spaces and left alive long past their shelf date — and should we be held accountable for them to officers of the law?
"It's not human nature to cautiously cross-check what you make public with some sort of law enforcement frame of mind," Upturn's Robinson told Mic. "So when you apply that to a heavily policed community, where we're talking not just about adults, but adolescents who are still finding their way, it's particularly concerning."