Technology and Police Reform
by Anthony Accurso
Technological innovation seems to touch every aspect of modern life, but what role should technology play in policing? As the national conversation turns to police reform, technology’s role is being questioned anew.
Three technology trends are behind many of our most recent innovations: cheap data storage and databases, artificial intelligence, and near-ubiquitous video and audio recording devices. This is equally true of the tech recently adopted by law enforcement agencies. Cheap, high-definition cameras are mounted on Tasers, vehicle dashboards, drones, buildings, and officers’ bodies. That video is stored, seemingly indefinitely, in cloud databases. AI algorithms comb through it to create new data points, which other AIs then use to make, or aid in making, decisions in a policing context.
But, as in so many other areas of our lives affected by innovation, we never stopped to ask what purposes these tools serve, or whether those purposes are at odds with other closely held values, like privacy and free speech.
Nine years ago, the police department in Santa Cruz, California, was one of the first to adopt software implementing “predictive policing.” The idea was that, fed enough data about past crimes, an AI could tell the department the most efficient way to allocate officers to prevent crime.
But this past June, Santa Cruz became the first city to ban predictive policing. It turned out that “predictive policing” magnified aggressive policing in minority communities without contributing to public safety, likely due in part to the fact that the information fed into the database reflected our nation’s history of racially motivated policing and oppressive laws targeting minorities.
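To see how such a system can magnify the bias in its training data, consider a toy simulation of the feedback loop researchers have described. Everything here is hypothetical: the district names, patrol counts, and rates are invented, and real systems are far more complex. Two districts have identical true crime rates, but the historical record is skewed, so the software keeps flagging one district as the “hot spot,” concentrates patrols there, records more crime there as a result, and grows ever more confident in its own prediction.

```python
# Toy sketch of the predictive-policing feedback loop described above.
# All numbers are hypothetical; this is not any vendor's actual algorithm.
import random

random.seed(1)

POPULATION = 10_000
TRUE_RATE = 0.05                 # identical true crime rate in both districts
recorded = {"A": 120, "B": 80}   # historical records already skewed toward A
TOTAL_PATROLS = 100

for year in range(1, 11):
    # The "prediction": rank districts by recorded crime and concentrate
    # patrols on the predicted hot spot.
    hot_spot = max(recorded, key=recorded.get)
    patrols = {d: 80 if d == hot_spot else 20 for d in recorded}

    for district in recorded:
        # Crimes enter the database mostly where officers are present,
        # so the chance an incident is recorded scales with patrol presence.
        detection = patrols[district] / TOTAL_PATROLS
        incidents = sum(random.random() < TRUE_RATE for _ in range(POPULATION))
        recorded[district] += int(incidents * detection)

    share_a = recorded["A"] / sum(recorded.values())
    print(f"year {year}: district A holds {share_a:.0%} of recorded crime")
```

Run the sketch and district A’s share of recorded crime climbs from 60 percent toward 80 percent over a decade, even though the underlying crime rates never differ; the data the model “learns” from is largely a record of where officers were sent to look.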
Facial recognition AI algorithms have followed a similar trajectory. Being able to identify a person captured on video committing a crime sounds like a good idea. But what about citizens who are merely peacefully protesting? What can, or should, police be able to do with that video? And where do software makers get the photos for comparison? Mugshot records? The state’s driver’s license database? Social media websites? When does this activity cross the line into violations of privacy?
It turns out that these algorithms are also biased against minorities and women. Joint research by MIT and Stanford, concluded in 2018, found that the algorithms misidentified darker-skinned women 34.5% of the time, while misidentifying lighter-skinned men a mere 0.8% of the time. Ostensibly because of these racial disparities, Amazon, Microsoft, and IBM have suspended their facial recognition software services, though other players in the market continue to provide such services to police agencies.
Police departments have adopted body and dashboard cameras as communities demanded more accountability from their officers. Yet there is no accountability when officers can disable recording just before they misbehave, or when departments can withhold, sometimes indefinitely, video of incidents in which police misuse force. And when every interaction with police is filmed, does that intrude on the privacy of the citizens being policed, who are often disproportionately members of minority communities? What do police, or the corporations providing these services, do with all that video?
These are questions that must be asked more often and more loudly. Technology is morally neutral: the same tech behind cheap nuclear energy also fuels nuclear weapons. How we allow police to use technology must be considered when we push for police reform.
“It’s not about whether or not police use tech; it’s whether or not we can make the footprint of police smaller year after year,” said Hannah Sassaman, policy director at the Movement Alliance Project. “You don’t need an app for that.”
Source: abajournal.com