AI-Generated Police Reports Must Have Guardrails for Inaccuracy, Bias, Transparency, and Review
by Jo Ellen Nott
On April 23, 2024, Axon announced the launch of Draft One, a technology the company calls its newest public safety product. The AI system generates police reports from body-worn camera audio and has raised concerns that it could worsen existing problems in police-community interactions, especially for marginalized groups long known to bear the brunt of police misconduct.
A major concern about Draft One is its potential for inaccuracy and bias. Experts worry that the software will not be able to accurately interpret complex situations. Body camera footage can be chaotic, and dialects or slang are easy to misinterpret. When an officer shouts “the suspect has a gun!” how will the AI render that statement in its report? Will the report assert that the officer saw a weapon, or simply note that the officer made the statement? Nuances like these can have serious legal consequences for a suspect.
Relying on AI to generate police reports also raises issues of human accountability. If an officer exaggerates details at the scene and the exaggeration is captured on camera, Draft One might incorporate and amplify those inaccuracies. Officers could then claim the AI misrepresented the scene, creating a “plausible deniability” shield.
Other concerns about Draft One revolve around review and transparency. It is not known whether police officers will review the AI-generated reports for accuracy or simply sign off on them to save time and effort. Negative precedent with facial recognition technology already exists: officers have wrongly arrested people based on a mistaken facial match without any follow-up investigation. This sloppy use of the technology persists even though companies that sell facial recognition software insist matches should be used only as a lead, never as a positive identification.
Another concern experts have is external oversight. Police departments nationwide struggle with transparency and public records requests because of a culture of intense secrecy. Critics of AI-generated police reports ask how the public, or any external agency, can independently verify audits of these AI-assisted reports. Even more important, will external reviewers be able to distinguish the AI-written parts of a report from the parts written by a human?
In the press kit accompanying the launch of Draft One, Axon outlines the technology’s built-in safeguards. Axon claims its software “1) requires officer review and approval of all drafts, 2) generates reports solely on body-worn camera audio, 3) includes prompts for officers to add additional details and does not allow them to move on unless providing the details, and 4) limits functionality to minor (or misdemeanor) incidents in the beginning phase of using Draft One with options for expansion.” However, as departments and their officers become more comfortable with the software, leadership can choose to change the default settings to allow officers to generate reports about felonies.
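That last point is easiest to see in miniature. What follows is an entirely hypothetical sketch in Python, not Axon’s actual software or API: it assumes a per-department policy object with a severity gate, purely to illustrate how a default that limits AI drafting to misdemeanors can be removed with a single administrative setting.

```python
from dataclasses import dataclass, field

# Hypothetical severity labels; Axon's actual internal categories are not public.
MISDEMEANOR = "misdemeanor"
FELONY = "felony"

@dataclass
class DepartmentPolicy:
    """Hypothetical per-department configuration for AI report drafting."""
    # Default mirrors Axon's stated initial limit: minor incidents only.
    allowed_severities: set = field(default_factory=lambda: {MISDEMEANOR})

    def enable_felony_reports(self):
        # A single administrative toggle removes the limit; in this sketch,
        # no external review or audit is required to flip it.
        self.allowed_severities.add(FELONY)

def may_draft_report(policy: DepartmentPolicy, incident_severity: str) -> bool:
    """Return True if the AI is permitted to draft a report for this incident."""
    return incident_severity in policy.allowed_severities

policy = DepartmentPolicy()
print(may_draft_report(policy, FELONY))   # False under the default setting
policy.enable_felony_reports()            # department leadership flips the switch
print(may_draft_report(policy, FELONY))   # True: the misdemeanor-only limit is gone
```

In a design like this, the safeguard is only as durable as a department’s willingness to keep the default, which is why critics want the setting, and any changes to it, subject to outside review.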
Although police reports have flaws, they are necessary records of police encounters with the public. Their importance lies in holding officers accountable and giving the public some access to an incident after the fact. Axon’s Draft One raises serious concerns about AI’s potential to create inaccurate police narratives. Human oversight and transparent review processes will be critical to the fair use of AI-written police reports.
Sources: Electronic Frontier Foundation, PR Newswire