Problems With Predictive Policing
Without the luxury of pre-cognitive abilities, modern police agencies have come to rely on a suite of surveillance and data-crunching techniques called “predictive policing”: a process whereby algorithms attempt to predict where and when crimes will occur, as well as likely victims and offenders, based on historical police data.
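At its simplest, the “prediction” is a ranking of places by counts derived from past police records. The sketch below, using made-up incident data and hypothetical grid-cell labels, shows the place-based version; deployed systems reportedly use more elaborate statistical models, but the input is the same kind of historical record.

```python
# A minimal, hypothetical sketch of place-based prediction: rank map grid
# cells by historical incident counts and flag the top cells for patrols.
# The cell labels and counts here are invented for illustration only.
from collections import Counter

# Hypothetical historical incidents, each tagged with a map grid cell.
incidents = ["C3", "C3", "A1", "B2", "C3", "B2", "A1", "C3", "D4", "B2"]

counts = Counter(incidents)
hotspots = [cell for cell, _ in counts.most_common(2)]  # top-2 cells
print("predicted hotspots:", hotspots)  # -> ['C3', 'B2']
```

Everything downstream, including patrol assignments, depends entirely on what those historical records contain.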
While in theory this process could enhance public safety, in practice it creates or worsens far more problems than it solves. Critics of predictive policing argue that bad data, institutional biases in law enforcement, and a lack of transparency and public input undermine whatever effectiveness the technique might bring to the table.
This criticism is not based on an instinctive distrust of law enforcement. The AI Now Institute at NYU studied predictive policing in 13 U.S. police jurisdictions that had recently been cited for illegal, corrupt, or biased practices. The study found that in nine of these jurisdictions, the predictive policing systems produced outcomes that reflected the very problems for which those jurisdictions had been cited.
The most remarkable example was the Chicago Police Department (“CPD”), which has a long and well-documented history of corruption and bias that disproportionately affects people of color. These practices generated what AI Now called “dirty data”: biased information fed into the system that in turn generated biased outcomes. In other words, instead of providing tools for more balanced and effective policing, the CPD system merely amplified existing prejudices and disguised them as neutral, data-driven intelligence.
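The feedback loop behind “dirty data” is easy to demonstrate. In the hypothetical simulation below (all numbers invented; this is an illustration, not any department’s or vendor’s actual model), two neighborhoods have identical true offense rates, but one starts with more recorded incidents because it was historically over-policed. A naive hotspot model then concentrates patrols wherever records are highest, and since patrols are what generate records, the skew compounds.

```python
# Minimal sketch of the dirty-data feedback loop. All figures hypothetical.
# Neighborhoods A and B have identical true offense rates, but A starts
# with more *recorded* incidents because it was historically over-policed.
# Each cycle, a naive hotspot model sends most patrols where the records
# are highest, and only offenses that patrols observe become new records.

TRUE_RATE = 0.10          # identical underlying offense rate per resident
RESIDENTS = 10_000        # residents per neighborhood
HOTSPOT_SHARE = 0.9       # fraction of patrols sent to the predicted hotspot

recorded = {"A": 300, "B": 100}   # biased historical records (A over-policed)

for cycle in range(1, 6):
    hotspot = max(recorded, key=recorded.get)   # the model's "prediction"
    for n in recorded:
        patrol_share = HOTSPOT_SHARE if n == hotspot else 1 - HOTSPOT_SHARE
        # Detection scales with patrol presence, so the predicted hotspot
        # accumulates records faster regardless of the true offense rate.
        recorded[n] += int(RESIDENTS * TRUE_RATE * patrol_share)
    share_a = recorded["A"] / sum(recorded.values())
    print(f"cycle {cycle}: hotspot={hotspot}, "
          f"records={recorded}, share of records in A={share_a:.1%}")
```

Running this, neighborhood A’s share of recorded incidents climbs from 75 percent toward 90 percent even though the true rates are identical. The data appears to “confirm” the prediction precisely because the data collection itself follows the prediction.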
Bad data and institutionalized biases were only two of the problems identified by AI Now. The third is the secrecy that customarily surrounds both law enforcement practices and the development of proprietary, algorithm-based software. These twin barriers prevent public scrutiny of the data or how it is used. The public sees only outcomes, and those become public at the discretion of police agencies, if at all. Without oversight or public input, there is no incentive for critical assessment or reform.
Though institutional scrutiny of predictive policing in the U.S. is conspicuously absent from public discourse, the European Parliament held hearings on the issue. Andrea Nill Sánchez, executive director of AI Now, delivered unambiguously critical testimony about current practices in the U.S. She warned that “left unchecked, the proliferation of predictive policing risks replicating and amplifying patterns of corrupt, illegal, and unethical conduct linked to the legacies of discrimination that plague law enforcement agencies around the globe.”
Sánchez recommended a minimal set of safeguards for any implementation of predictive policing. First, agencies must self-assess the system’s potential impact on fairness, justice, and bias. Second, there must be a meaningful external review process. Third, public notice and comment should be part of the ongoing process. Finally, enhanced due-process protections should be put in place to allow effective challenges to unfair, biased, or otherwise harmful effects, especially regarding racial inequality.
In Minority Report, the Pre-Crime Unit’s spectacular failure and corruption led to its dismantling. Law enforcement is unlikely to abandon predictive policing, but public pressure to ensure it is fair and transparent can help mitigate the damage.