Misuse of Facial Recognition Technology Threatens Everyone

by Michael Dean Thompson

Facial recognition technology (FRT) corporations and the policing agencies that use their products continue to jeopardize American civil liberties. Advocates point to a National Institute of Standards and Technology (NIST) study reporting that the best systems achieved a high degree of accuracy on high-quality images, but they ignore what happens at scale: when thousands of “probe” images are compared against databases of millions of photos, even a tiny per-comparison error rate yields an enormous absolute number of failures, both false matches and missed matches. They also fail to mention that the largest provider of FRT to cops, Clearview AI, remains an unproven technology; its failure rate has yet to be tested outside the company.
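A back-of-the-envelope calculation shows why scale matters. The Python sketch below uses purely hypothetical figures; the probe count, database size, and per-comparison false match rate are illustrative assumptions, not numbers from the NIST study.

```python
# Hypothetical illustration: a tiny per-comparison false match rate
# still produces a flood of false hits when the database is large.

probes = 5_000            # assumed number of probe images searched
gallery = 10_000_000      # assumed database (gallery) size
false_match_rate = 1e-6   # assumed one-in-a-million false match rate

comparisons = probes * gallery                        # 50 billion comparisons
expected_false_matches = comparisons * false_match_rate

print(f"Total comparisons: {comparisons:,}")
print(f"Expected false matches: {expected_false_matches:,.0f}")  # ~50,000
```

Even at a one-in-a-million false match rate, fifty billion comparisons can be expected to produce roughly 50,000 false matches.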

What we do see, however, is that among the first seven people known to have been wrongfully accused based on FRT, six were Black. They include Robert Williams, who was arrested after police matched his expired driver’s license photo against grainy surveillance video. Even the very best FRT algorithms fail significantly more often for Black and Asian people, and they perform worst of all on Black women. Add low-quality images, and the odds of a correct match drop even further. Yet cops continue to use these systems to make arrests, often without further investigation. And because many states do not require police to disclose that FRT was used to identify a suspect, the true number of false arrests is unknowable and likely far higher than the known cases.
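The demographic disparity compounds the base-rate problem. The sketch below is again purely illustrative; the gallery size and false match rates are assumptions chosen to show the mechanism, not measured values, though the 10x and 100x multipliers are in the range of the disparities NIST reported for some demographic groups in its 2019 testing.

```python
# Hypothetical illustration: how a higher false match rate collapses
# "search precision," the chance that a returned hit is the right person.

def search_precision(false_match_rate: float, gallery_size: int) -> float:
    """Approximate precision of a one-person gallery search.

    Optimistically assumes the true match (if enrolled) is always found;
    expected false hits = false_match_rate * gallery_size.
    """
    expected_false_hits = false_match_rate * gallery_size
    return 1.0 / (1.0 + expected_false_hits)

GALLERY = 10_000_000  # assumed database size

# Assumed rates; 10x and 100x mirror disparities NIST reported for some groups.
for label, fmr in [("baseline", 1e-7), ("10x higher", 1e-6), ("100x higher", 1e-5)]:
    print(f"{label:>12}: precision ~ {search_precision(fmr, GALLERY):.1%}")

# baseline:  1 / (1 + 1)   = 50.0%
# 10x:       1 / (1 + 10)  ≈  9.1%
# 100x:      1 / (1 + 100) ≈  1.0%
```

At a disparity like that, a “hit” against a member of the disadvantaged group is overwhelmingly likely to be a false match, which is the pattern the wrongful arrests above reflect.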

Chris Fabricant, the Innocence Project’s director of strategic litigation and author of Junk Science and the American Criminal Justice System, points out: “The technology that was just supposed to be for investigation is now being proffered at trial as direct evidence of guilt. Often without ever having been subject to any kind of scrutiny.” Among the NIST-tested algorithms, the failure rate for systems below the very best rose sharply even when both the probe and database photos were of high quality. Yet when a detective tells a jury that AI recognized the suspect in a poor surveillance image based on a Facebook photo match, as might happen with Clearview’s database of images scraped from the internet, the claim seems to carry the force of science. The detective may not mention that the analyst who ran the photo through an untested algorithm manipulated the image to get a match and skipped the first nine candidates, preferring instead the subject with a history of driving infractions. “Corporations are making claims about the abilities of these techniques that are only supported by self-funded literature,” Mr. Fabricant said, adding, “Politicians and law enforcement that spend [a lot of money] acquiring them, then are encouraged to tout their efficacy and the use of their technology.”

Facial recognition technology cannot identify a suspect with anything like the certainty of a single-source DNA sample, and it should never be used as the primary basis for an identification. Mitha Nandagopalan of the Innocence Project’s strategic litigation department agrees: “Many of these cases ... are just as susceptible to the same risks and factors that we’ve seen produce wrongful conviction in the past.”

Source: InnocenceProject.org