
Forensic Science: Reliable and Valid?

by Jayson Hawkins

The headlines have become all too familiar: DNA shows wrong person imprisoned for decades-old crime.

Over 300 people have been exonerated by DNA evidence, and that number will only continue to rise as more cases are scrutinized. That raises the question of what has led to so many miscarriages of justice. District attorneys, desperate for high conviction rates, constitute part of the problem, but in more than 40 percent of these cases, findings of guilt were based on faulty forensic evidence.

Forensic science has evolved steadily since law enforcement began using fingerprint analysis a century ago, yet, much like the evolution of our species, there have been missteps along the way. Take, for example, the questionable practice of forensic dentistry. In 1983, a Newport News, Virginia, man was murdered, and his wife was raped and bitten. The suspect was dressed as a sailor, and suspicions fell on the crew of an aircraft carrier docked nearby.

Dentists compared the teeth and dental records of those onboard to the victim’s bite marks. Keith Harward was among those excluded as the source of the marks, but months later, he became a top suspect when his girlfriend complained that he bit her during an argument. Members of the American Board of Forensic Odontology then decided that dental molds of Harward’s teeth actually did match bite marks on the rape victim.

One expert, Dr. Lowell Levine, testified at Harward’s subsequent trial that it was a “practical impossibility that someone else would have all these characteristics” in their bite.

Fast-forward to 2015 when the rape kit from the case was finally analyzed for DNA. It revealed the perpetrator had, indeed, been a sailor from the nearby naval vessel—but not Harward. The man it identified had committed several other crimes and died in prison. A year after this revelation, the Virginia Supreme Court exonerated Harward and released him from custody. He had served 33 years. 

Forensic evidence can be a valuable tool for prosecuting crimes when correctly applied. Federal Rule of Evidence 702 allows expert testimony in the courtroom if it “is based on sufficient facts or data” and “is the product of reliable principles and methods ... reliably applied ... to the facts of the case.” This sounds fairly straightforward and often leads to accurate conclusions, though issues surround the word “reliable.” The law intends it to encompass both reliability and validity, but these terms hold separate meanings in statistical science. The difference between them is more than mere semantics; some expert witnesses have manipulated the ambiguity to rob decades from innocent people’s lives. 

A process is referred to as reliable in statistical and scientific terminology if it provides consistent measurements or conclusions when correctly applied. Reliability can be gauged by standard deviation and other methods, which means it is not a yes-or-no proposition. A test that recorded low blood pressure for 19 out of 20 individuals with equivalent pressures would be rated 95 percent reliable. Consistency, however, is not the same as accuracy; the test is still considered 95 percent reliable even if all 20 individuals actually had high blood pressure.

A process rates as valid to the extent it measures what it is supposed to measure. In the example above, only one out of 20 individuals was accurately identified as having high blood pressure, which puts the validity of the test at a pitiful 5 percent. Common sense dictates that the results of such a test be discarded as useless, but because the language of Federal Rule of Evidence 702 requires only “reliable principles and methods,” an unscrupulous expert could lead a jury to an unfounded conclusion. The U.S. Supreme Court recognized this problem in Daubert v. Merrell Dow Pharmaceuticals, Inc., 509 U.S. 579 (1993). The Court ruled that testimony regarding “scientific knowledge” must be both consistent and valid to meet “a standard of evidentiary reliability,” but that has not eliminated wrongful convictions based on faulty forensics.
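
To make the distinction concrete, the short Python sketch below reruns the blood-pressure example from the preceding paragraphs in code. It is only an illustration under the article’s assumptions: reliability is approximated here as agreement with the test’s most common output, validity as agreement with the true condition, and the variable names are invented for the example.

```python
# Minimal sketch of the article's blood-pressure example: a test that consistently
# reports "low" for 19 of 20 people who in fact all have high blood pressure.

test_results = ["low"] * 19 + ["high"]   # what the test reported
true_condition = ["high"] * 20           # what was actually the case

# Reliability (consistency): share of results agreeing with the test's most common output.
modal_result = max(set(test_results), key=test_results.count)
reliability = test_results.count(modal_result) / len(test_results)

# Validity (accuracy): share of results agreeing with the true condition.
validity = sum(r == t for r, t in zip(test_results, true_condition)) / len(test_results)

print(f"Reliability: {reliability:.0%}")  # 95% -- highly consistent
print(f"Validity: {validity:.0%}")        # 5%  -- but almost always wrong
```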

The National Academy of Sciences (“NAS”) issued a committee report to Congress in 2009 concerning forensics. It highlighted a significant lack of “scientific studies to determine validity” of “the interpretation of forensic evidence.” In case the message wasn’t clear enough, the committee called it “a serious problem” and suggested that any testimony on identity evidence include measures of validity.

Research lacking

Seven years later, the President’s Council of Advisors on Science and Technology (“PCAST”) reported similar findings. It noted that while the validity of testing for DNA and latent fingerprints had been established, other forensic methods lacked sufficient research to be considered proven.

Both the NAS and PCAST reports concentrated on validity, and the latter distinguished between applying it to identification methods in general and to particular cases. “Foundational validity” refers to an examiner’s ability to determine whether two samples, one from a known source and one from an unknown source, share the same origin. A study of 169 latent fingerprint examiners, for example, found the field foundationally valid: the false-positive rate was only 0.15 percent, and the false-negative rate was 7.5 percent. Still, PCAST stressed that juries must be made aware that fingerprint analysis is not infallible; indeed, 85 percent of the examiners in the study erred at least once, though none admitted to having ever done so in a real case.
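
For readers curious how figures like these are defined, the sketch below shows the standard arithmetic behind false-positive and false-negative rates. The counts are hypothetical, chosen only so the resulting rates land near the quoted 0.15 percent and 7.5 percent; they are not the study’s actual data, and the function name is invented for the example.

```python
# Hypothetical illustration of how false-positive and false-negative rates are computed
# from comparison outcomes; the tallies below are invented, not the PCAST study's data.

def error_rates(false_pos, true_neg, false_neg, true_pos):
    """Return (false-positive rate, false-negative rate) as fractions."""
    fpr = false_pos / (false_pos + true_neg)   # erroneous "match" calls among non-matching pairs
    fnr = false_neg / (false_neg + true_pos)   # missed matches among truly matching pairs
    return fpr, fnr

fpr, fnr = error_rates(false_pos=6, true_neg=3994, false_neg=300, true_pos=3700)
print(f"False-positive rate: {fpr:.2%}")   # 0.15%
print(f"False-negative rate: {fnr:.2%}")   # 7.50%
```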

“Validity as applied,” on the other hand, means that in a specific case the examiner used the method correctly. The North Carolina Court of Appeals recently dismissed a case where a latent fingerprint examiner “failed to demonstrate that she ‘applied the principles and methods to the facts of the case’” when she could not say which parts of the prints she looked at or how long she spent on the comparison. 

The bottom line is that, while forensic evidence is crucial to the investigation of many crimes, mistakes do happen. The Organization of Scientific Area Committees for Forensic Science was established in 2014 to set standards aimed at reducing such errors, but, as the PCAST report warned, much research remains to be done before disciplines like shoe-print, firearm, and bloodstain analysis and other pattern-matching methods reach acceptable levels of reliability and validity. In the meantime, innocent people will likely continue to go to prison for crimes they did not commit, and those already falsely convicted will remain there.

---

Source: significancemagazine.com, The Royal Statistical Society
