The Myth of Certainty: A Critique of Modern Forensic Science

Written By Leo Piazza: L’27

After enduring nearly 15 years behind bars, much of it on death row at the Mississippi State Penitentiary, condemned for a crime he did not commit, Kennedy Brewer finally walked free, an innocent man exonerated at last.[1] The cheers and warm embrace of his family were bittersweet: while Mr. Brewer was finally free, his plight exposed something more insidious, the deep flaws and unreliability of the bitemark analysis that had helped put him on death row.[2]

To understand how Mr. Brewer’s wrongful conviction came to be, it is important to look at the case that changed his life. On May 3, 1992, Christine Jackson, the three-year-old daughter of Brewer’s girlfriend, was kidnapped, raped, murdered, and dumped into a creek 500 yards from her home.[3] Brewer was promptly arrested, and his trial began three years later, in March 1995.[4] During the trial, Dr. Michael West, an “expert” in forensic odontology, testified for the prosecution, concluding that 19 marks found on the victim’s body were “without a doubt” made by Mr. Brewer’s teeth.[5] Although Dr. West had been suspended by the American Board of Forensic Odontology, and although he testified to a level of scientific certainty unsupported by the literature, he was admitted as an expert witness and his testimony was allowed.[6] Mr. Brewer was subsequently convicted of capital murder and sexual battery and sentenced to death.[7] Throughout his time in prison, he consistently maintained his innocence, and in 2001, DNA testing of semen collected from the victim revealed that his genetic profile did not match.[8] Prosecutors initially intended to retry the case, leaving him in prison for several more years, before abandoning the effort in 2007.[9] Finally, in 2008, after roughly 15 years behind bars, Mr. Brewer walked free.

His case represents just one among thousands of exonerations in which forensic evidence was misused, abused, or misinterpreted.[10] Moreover, it highlights a recurring problem in the legal system: jurors’ inordinate trust in forensic experts and courts’ frequent failure to properly gatekeep against unreliable scientific evidence.

The CSI Effect

One possible explanation for why jurors unduly trust forensic experts lies in what researchers have termed the “CSI effect”: the idea that popular media portrayals of forensic science have shaped jurors’ expectations about the reliability and importance of scientific evidence in criminal trials.[11] As a result, jurors may assume expert witness testimony is automatically credible and reliable. For example, a study by Koehler et al. concluded that jurors focus on an expert’s reputation rather than the scientific rigor of the field or the reliability of the method employed.[12] However, research by Thompson et al. suggests that jurors can, in some circumstances, properly weigh forensic evidence based on the degree of scientific certainty presented; for instance, jurors were more likely to convict when DNA evidence indicated a high probability of a match and less likely when the probability was lower.[13] Yet Thompson also found that when the forensic method was uncertain, jurors overestimated its probative value, giving it more weight than they should have.[14]

Judges therefore operate in an environment where jurors’ perceptions may be subtly shaped by popular media and where science itself continues to evolve and self-correct.

The Origins of Modern Forensic Science

The earliest form of modern forensic science can be traced back to 1835 and Henry Goddard, a member of the Bow Street Runners, one of London’s earliest organized police units.[15] He was called to investigate the murder of Mrs. Maxwell, who was shot dead in her home.[16] Her butler, Joseph Randall, told police that a gunfight with burglars had occurred.[17] After analyzing the scene, Goddard observed that each of the recovered bullets, including the bullet that killed Mrs. Maxwell, bore a distinctive raised blemish.[18] Upon investigating further, Goddard discovered a pinhead-sized hole in Randall’s bullet mould that matched the blemish, pointing to Randall as the killer.[19]

In the decades that followed, other methods for identifying criminals began to emerge. In the 1880s, Francis Galton began conducting studies on fingerprints. Initially, Galton, a prominent eugenicist, studied fingerprints in hopes of linking them to an individual’s racial background, intelligence, or genetic history, though this inquiry quickly reached a dead end.[20] However, he did successfully confirm an earlier theory that fingerprints do not change over a person’s lifetime and that an individual’s fingerprints are completely unique.[21] After reviewing thousands of different fingerprints, Galton created a method for classifying them into distinct pattern types–loops, whorls, and arches–forming the basis of today’s fingerprint classification systems.[22]

The key takeaway is that forensic science originated from an idea best captured by Locard’s exchange principle: every contact leaves a trace.[23] If this principle holds, then nearly every aspect of a crime could, in theory, be measured and analyzed. This suggests that certainty is attainable and that science can provide definitive proof. However, as this paper will demonstrate, the reality is more complicated than this antiquated assumption.

The Judge’s Role – Frye & Daubert

Not long after these forensic methods were established, courts began allowing expert testimony on the subjects, with ballistic evidence first accepted in Virginia in 1879[24] and fingerprint analysis in 1911.[25] This highlighted the need for judges to establish clear standards for determining when expert testimony could be admitted into court, a challenge that was addressed in 1923 in the Frye decision. 

James Frye was charged with murder and, during his trial, attempted to introduce expert testimony based on a systolic blood pressure deception test, a precursor to the modern lie detector.[26] The trial court refused to admit the expert’s testimony, and the Court of Appeals affirmed, holding that the practice “must be sufficiently established to have gained general acceptance in the particular field in which it belongs.”[27] For nearly 70 years, this standard served as the prevailing test for the admissibility of expert testimony in federal courts. The Supreme Court replaced it in 1993 with the Daubert decision, but many states continue to apply Frye today.[28]

The Daubert case arose when Jason Daubert and Eric Schuller (petitioners) alleged that their serious birth defects were caused by their mothers’ ingestion of Bendectin, an anti-nausea drug manufactured and marketed by Merrell Dow Pharmaceuticals (respondent).[29] The respondent moved for summary judgment, arguing that Bendectin does not cause birth defects. Applying the “general acceptance” standard, the District Court found the petitioners’ evidence inadmissible and granted summary judgment for the respondent.[30] The Ninth Circuit affirmed, explicitly citing Frye.[31] The U.S. Supreme Court granted certiorari to determine whether the Frye test had been superseded by the Federal Rules of Evidence (“FRE”), enacted in 1975.[32]

The Supreme Court held that Rule 702 of the FRE displaced the “rigid” Frye standard, noting that Rule 702 contains no explicit reference to “general acceptance” and that adhering to such a test would be at odds with the rule’s flexible and liberal intent.[33] Nevertheless, the Court emphasized that judges still have a responsibility to ensure expert testimony is both relevant and reliable.[34] To that end, the Court set out a non-dispositive list of factors that can indicate reliability: (1) whether the method has been tested,[35] (2) whether it has been subject to peer review and publication,[36] (3) whether it has a known or potential error rate,[37] and (4) whether it has gained widespread acceptance within the relevant scientific community.[38]

Under this framework, judges serve as “gatekeepers,” tasked with determining the admissibility of expert testimony based on the reliability of its methods rather than scientific consensus alone. In 2000, Rule 702 was amended to incorporate the Daubert principles and provide further safeguards.[39] However, research since then has shown that courts consistently misapply both Daubert and Rule 702,[40] allowing demonstrably unreliable science to enter the courtroom.

Questionable Forensic Techniques – Bitemarks, Hair Microscopy, & Bloodstain Analysis

Of the nearly 4,000 exonerations in the National Registry of Exonerations, more than a quarter, roughly 1,120 cases, were tainted by false or misleading forensic evidence.[41] Recognizing this problem, the National Institute of Justice (“NIJ”) enlisted Dr. John Morgan, an independent research consultant, to determine the impact faulty forensics had on exonerations.[42] In all, Dr. Morgan analyzed 732 cases, encompassing 1,391 forensic examinations across 34 distinct forensic disciplines.[43] His research revealed errors in 635 of the 732 cases, nearly 87%, and in 891 of the 1,391 forensic examinations, roughly 64%.[44] In bitemark analysis, errors were identified in 77% of cases, compared to 58% for bloodstain analysis and 59% for hair comparison.[45]
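The headline percentages above follow directly from the case and examination counts Dr. Morgan reported; a minimal sketch of the arithmetic, using only the figures cited in this section:

```python
# Counts reported in Dr. Morgan's NIJ-commissioned review (cited above):
# 732 cases analyzed, encompassing 1,391 forensic examinations.
cases_with_errors, total_cases = 635, 732
exams_with_errors, total_exams = 891, 1391

# Fraction of cases containing at least one forensic error,
# and fraction of individual examinations containing errors.
case_error_rate = cases_with_errors / total_cases
exam_error_rate = exams_with_errors / total_exams

print(f"case-level error rate: {case_error_rate:.1%}")  # about 87%
print(f"exam-level error rate: {exam_error_rate:.1%}")  # about 64%
```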

In 2009, the National Academy of Sciences (“NAS”) issued a report examining multiple areas of forensic science and identifying ways they could be improved. Regarding forensic odontology, the report stated that there is “no evidence of an existing scientific basis for identifying an individual to the exclusion of all others,” demonstrating one of the major flaws of bitemark analysis.[46] Later, in 2016, the President’s Council of Advisors on Science and Technology concluded that examiners not only cannot identify the source of bitemarks within a reasonable degree of certainty, but they cannot agree on whether an abrasion is even a human bitemark.[47] Finally, in 2023, a report by the National Institute of Standards and Technology reaffirmed these concerns, noting that bitemark analysis “lacks sufficient scientific foundation,”[48] and that “bitemark examiners may not agree on the interpretation of a specific bitemark,” among other notable findings.[49] Together, these reports make it clear that bitemark analysis is an unreliable forensic discipline with serious methodological and interpretive limitations. 

Hair microscopy has also faced recent scrutiny for its lack of scientific reliability. Most notably, the 2009 National Academy of Sciences report stated that, without accompanying DNA analysis, there was “no scientific support” for using hair comparison to identify individuals.[50] Then, in 2015, the FBI concluded that at least 90% of its analysts’ microscopic hair comparison testimony over the preceding two decades contained errors.[51] These errors stem from the nature of the analysis itself: hair microscopy examines features such as pigment, shaft form, color, and chemical treatment, all of which are highly subjective and dependent on the examiner’s interpretation.[52] As a result, hair microscopy remains another unreliable forensic discipline.

Finally, bloodstain analysis has likewise been examined for its reliability issues and interpretive challenges. Yet again, the 2009 NAS report highlighted the uncertainties in this field, calling them “enormous”[53] and describing expert opinions on the matter as “more subjective than scientific.”[54] Later, in 2022, the NIJ concluded that the error rate, while not overwhelming, was still significant, averaging 11.2% across practicing analysts.[55] Moreover, when errors occurred, they were confirmed by a second analyst between 18% and 34% of the time, indicating that mistakes can persist even under review.

Conclusion

Kennedy Brewer’s case, like those of thousands of other exonerees, highlights the very real consequences of relying on forensic methods that lack scientific backing. Bitemark analysis, hair microscopy, and bloodstain analysis each demonstrate the weaknesses of forensic science, from unreliable methods to high error rates to susceptibility to subjective expert interpretation.

Despite this seemingly grim picture, a potential solution lies in the 2023 amendments to FRE 702. These amendments reinforce the judge’s pivotal gatekeeping role under Daubert in the admission of expert testimony.[56] Specifically, the amendment clarifies that the proponent of expert testimony bears the burden of establishing, by a preponderance of the evidence, that the testimony meets the four prongs of 702(a)–(d).[57] The second change, to 702(d), now requires the expert’s opinion to reflect a reliable application of the principles and methods to the facts of the case, clarifying that even a valid method must be properly connected to the specific facts to be admissible.[58]

Hopefully, these changes will make judges less likely to admit bad science, now that proponents of expert testimony must meet a threshold preponderance standard and, when testimony is allowed, must show that the underlying method was properly applied to the facts of the case.

 

[Image: Gavel and evidence form. Source: How Does the Law Impact Regulation of Forensic Evidence?, Center for Statistics and Applications in Forensic Evidence (Jan. 26, 2018), https://forensicstats.org/blog/2018/01/26/law-impact-regulation-forensic-evidence/.]

[1] Innocence Project, Kennedy Brewer, https://innocenceproject.org/cases/kennedy-brewer/.

[2] The University of Mississippi, Mississippi Innocence, Vimeo (2011), https://egrove.olemiss.edu/southdocs/8/.

[3] Supra note 1.

[4] Id.

[5] Id.

[6] Id.

[7] Id.

[8] Id.

[9] Id.

[10] See generally National Registry of Exonerations, Explore Exonerations, https://exonerationregistry.org/cases?f%5B0%5D=n_pre_1989%3A0.

[11] Toni Besselaar, The CSI Effect: Media Influence on Juror Perceptions of Forensic Evidence (Dec. 6, 2024) (B.A. thesis, Scripps College) (on file with Claremont Colleges).

[12] Jonathan J. Koehler et al., Science, Technology, or the Expert Witness: What Influences Jurors’ Judgments About Forensic Science Testimony?, 22 Psych., Pub. Pol’y & L. 401, 410 (2016).

[13] William C. Thompson, Suzanna O. Kaasa & Tiamoyo Peterson, Do Jurors Give Appropriate Weight to Forensic Identification Evidence?, 10 J. Empirical Legal Stud. 359, 386 (2013).

[14] Id. at 387.

[15] Henry Goddard, Forensic’s Blog, https://forensicfield.blog/henry-goddard/.

[16] Id.

[17] Id.

[18] Id.

[19] Id.

[20] Division of Criminal Justice: The Fingerprint System, N.Y. State, https://www.criminaljustice.ny.gov/ojis/history/fp_sys.htm.

[21] Id.

[22] Id.

[23] John Fuller, How Locard’s Exchange Principle Works, HowStuffWorks, https://science.howstuffworks.com/locards-exchange-principle2.htm.

[24] Center for Statistics and Applications in Forensic Evidence, Dean v. Commonwealth, https://forensicstats.org/cases/dean-v-commonwealth-32-gratt-912-va-1879/.

[25] Francine Uenuma, The First Criminal Trial That Used Fingerprints as Evidence, Smithsonian Mag. (Dec. 5, 2018), https://www.smithsonianmag.com/history/first-case-where-fingerprints-were-used-evidence-180970883/#:~:text=the%20whole%20country.%E2%80%9D-,People%20v.,was%20presented%20to%20a%20jury.

[26] Frye v. United States, 293 F. 1013, 1013 (D.C. Cir. 1923).

[27] Id.

[28] Christine Funk, Daubert vs. Frye: A State-by-State Comparison, Expert Inst. (July 10, 2024), https://www.expertinstitute.com/resources/insights/daubert-versus-frye-a-national-look-at-expert-evidentiary-standards/.

[29] Daubert v. Merrell Dow Pharms., Inc., 509 U.S. 579, 582 (1993).

[30] Id. at 584.

[31] Id.

[32] Id. at 587.

[33] Id. at 588-89.

[34] Id. at 588.

[35] Id. at 593.

[36] Id.

[37] Id. at 594.

[38] Id.

[39] Mark A. Behrens & Andrew J. Trask, Federal Rule of Evidence 702: A History and Guide to the 2023 Amendments Governing Expert Evidence, 12 Tex. A&M L. Rev. 43, 44 (2024).

[40] David E. Bernstein and Eric G. Lasker, Defending Daubert: It’s Time to Amend Federal Rule of Evidence 702, 57 Wm. & Mary L. Rev. 1, 7 (2015).

[41] Supra note 10.

[42] The National Institute of Justice, The Impact of False or Misleading Forensic Evidence on Wrongful Convictions (Nov. 28, 2023), https://nij.ojp.gov/topics/articles/impact-false-or-misleading-forensic-evidence-wrongful-convictions#note4.

[43]  Id.

[44]  Id.

[45]  Id.

[46] National Research Council, Strengthening Forensic Science in the United States: A Path Forward 176 (2009), https://www.ojp.gov/pdffiles1/nij/grants/228091.pdf.

[47] The President’s Council of Advisors on Science and Technology, Forensic Science in Criminal Courts: Ensuring Scientific Validity of Feature-Comparison Methods 9 (Exec. Office of the President, Sept. 2016), https://obamawhitehouse.archives.gov/sites/default/files/microsites/ostp/PCAST/pcast_forensic_science_report_final.pdf.

[48] The National Institute of Standards and Technology, Bitemark Analysis: A NIST Scientific Foundation Review, NIST Interagency Rep. No. 8352, at 4 (Mar. 2023), https://nvlpubs.nist.gov/nistpubs/ir/2023/NIST.IR.8352.pdf.

[49] Id. at 3.

[50] Supra note 46, at 161.

[51] The Federal Bureau of Investigation, FBI Testimony on Microscopic Hair Analysis Contained Errors in at Least 90 Percent of Cases in Ongoing Review (Apr. 20, 2015), https://www.fbi.gov/news/press-releases/fbi-testimony-on-microscopic-hair-analysis-contained-errors-in-at-least-90-percent-of-cases-in-ongoing-review.

[52] Marian Farah, Hair Microscopy Analysis: A Junk Science That Dominated Forensics for Decades, Great North Innocence Project (Nov. 9, 2022), https://www.greatnorthinnocenceproject.org/new-blog/hairanalysis.

[53] Supra note 46, at 179.

[54] Id. at 178.

[55] The National Institute of Justice, Study Assesses the Accuracy and Reproducibility of Bloodstain Pattern Analysis (Dec. 14, 2022), https://nij.ojp.gov/topics/articles/study-assesses-accuracy-and-reproducibility-bloodstain-pattern-analysis.

[56] Supra note 39, at 47-48.

[57] Id. at 46-47.

[58] Id. at 47-48.
