CSAFE Researchers Contribute to PNAS Special Feature on Science, Evidence, Law, and Justice

The PNAS Science, Evidence, Law, and Justice Special Feature included articles from CSAFE researchers. Learn more about the cover: www.pnas.org/toc/pnas/120/41

A special feature in a recent Proceedings of the National Academy of Sciences (PNAS) issue showcased articles from Center for Statistics and Applications in Forensic Evidence (CSAFE) researchers.

The special feature, titled “Science, Evidence, Law, and Justice,” includes eight articles that discuss advancements in forensic practices and address scientific, technological, legal and ethical issues related to the use of forensic evidence in law enforcement and courtroom testimony.

According to the special feature’s introduction, the articles focus on four overlapping themes:

  1. The Intake: Operation of a crime lab, where forensic evidence enters the justice system and is subjected to a variety of analyses;
  2. The Revolution: Scientific advances that are now rewriting the script for forensic investigation;
  3. The New Threats: Risks to justice posed by new technologies that render decisions by invasive and inscrutable processes; and
  4. The Courts: Rules for use of scientific evidence by the courts and their varying interpretations by the judiciary.

Four of the articles in the special feature were written by CSAFE researchers and a CSAFE advisory board member. The abstracts of these articles, along with links to the full text, are provided below. You can access the Science, Evidence, Law, and Justice Special Feature at https://www.pnas.org/toc/pnas/120/41.

The Secret Life of Crime Labs

Peter Stout, CSAFE Strategic Advisory Board member and CEO and president of the Houston Forensic Science Center

View the article at https://www.pnas.org/doi/10.1073/pnas.2303592120.

Abstract: Houston, Texas, experienced a widely known failure of its police forensic laboratory. This gave rise to the Houston Forensic Science Center (HFSC) as a separate entity to provide forensic services to the City of Houston. HFSC is a very large forensic laboratory and has made significant progress in remediating the past failures and improving public trust in forensic testing. HFSC has a large and robust blind testing program, which has provided many insights into the challenges forensic laboratories face. HFSC’s journey from a notoriously failed lab to a model also gives perspective on the resource challenges faced by all labs in the country. Challenges for labs include the pervasive reality of poor-quality evidence. Forensic laboratories are also necessarily part of a much wider system of interdependent functions in criminal justice, making blind testing something in which all parts have a role. This interconnectedness also highlights the need for an array of oversight and regulatory frameworks to function properly. The major essential databases in forensics need to be part of blind testing programs, and work is needed to ensure that these databases are indeed producing correct results and that those results are being correctly used. Last, laboratory reports of “inconclusive” results pose a significant challenge: laboratories and the wider system need to better understand when these results are appropriate and necessary and, most importantly, to ensure they are correctly used by the rest of the system.

Interpretable Algorithmic Forensics

Brandon Garrett, CSAFE co-director and director of the Wilson Center for Science and Justice at Duke University
Cynthia Rudin, professor of computer science at Duke University

View the article at https://www.pnas.org/doi/10.1073/pnas.2301842120.

Abstract: One of the most troubling trends in criminal investigations is the growing use of “black box” technology, in which law enforcement rely on artificial intelligence (AI) models or algorithms that are either too complex for people to understand or that simply conceal how they function. In criminal cases, black box systems have proliferated in forensic areas such as DNA mixture interpretation, facial recognition, and recidivism risk assessments. The champions and critics of AI argue, mistakenly, that we face a catch-22: While black box AI is not understandable by people, they assume that it produces more accurate forensic evidence. In this article, we question this assertion, which has so powerfully affected judges, policymakers, and academics. We describe a mature body of computer science research showing how “glass box” AI—designed to be interpretable—can be more accurate than black box alternatives. Indeed, black box AI performs predictably worse in settings like the criminal system. Debunking the black box performance myth has implications for forensic evidence, constitutional criminal procedure rights, and legislative policy. Absent some compelling—or even credible—government interest in keeping AI as a black box, and given the constitutional rights and public safety interests at stake, we argue that a substantial burden rests on the government to justify black box AI in criminal cases. We conclude by calling for judicial rulings and legislation to safeguard a right to interpretable forensic AI.
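To make the glass box/black box distinction concrete, here is a minimal sketch, not drawn from the article itself, of the kind of head-to-head comparison the authors describe: an interpretable model evaluated against an opaque ensemble on a standard tabular dataset. The dataset and model choices below are illustrative assumptions only.

```python
# Illustrative sketch (not the authors' experiments): comparing a "glass box"
# model, whose learned weights can be read directly, against a "black box"
# ensemble. Dataset and model choices are assumptions for demonstration.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)

# Glass box: scaled logistic regression; each feature's coefficient is inspectable.
glass_box = make_pipeline(StandardScaler(), LogisticRegression(max_iter=5000))
# Black box: a 300-tree ensemble whose individual predictions are hard to trace.
black_box = RandomForestClassifier(n_estimators=300, random_state=0)

for name, model in [("glass box", glass_box), ("black box", black_box)]:
    acc = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: mean 5-fold CV accuracy = {acc:.3f}")
```

On tabular data of this kind, the interpretable model typically performs on par with the opaque ensemble, which is the empirical pattern the authors cite against the assumption that opacity buys accuracy.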

Scientific Guidelines for Evaluating the Validity of Forensic Feature-comparison Methods

Nicholas Scurich, CSAFE researcher and professor of criminology, law and society at the University of California, Irvine
David L. Faigman, chancellor and dean at the University of California College of the Law, San Francisco
Thomas D. Albright, professor and director at the Salk Institute for Biological Studies

View the article at https://www.pnas.org/doi/10.1073/pnas.2301843120.

Abstract: When it comes to questions of fact in a legal context—particularly questions about measurement, association, and causality—courts should employ ordinary standards of applied science. Applied sciences generally develop along a path that proceeds from a basic scientific discovery about some natural process to the formation of a theory of how the process works and what causes it to fail, to the development of an invention intended to assess, repair, or improve the process, to the specification of predictions of the instrument’s actions and, finally, to empirical validation to determine that the instrument achieves the intended effect. These elements are salient and deeply embedded in the cultures of the applied sciences of medicine and engineering, both of which grew primarily from basic sciences. However, the inventions that underlie most forensic science disciplines have few roots in basic science, and they have neither sound theories to justify their predicted actions nor results of empirical tests to prove that they work as advertised. Inspired by the “Bradford Hill Guidelines”—the dominant framework for causal inference in epidemiology—we set forth four guidelines that can be used to establish the validity of forensic comparison methods generally. This framework is not intended as a checklist establishing a threshold of minimum validity, as no magic formula determines when particular disciplines or hypotheses have passed a necessary threshold. We illustrate how these guidelines can be applied by considering the discipline of firearm and tool mark examination.

Shifting Decision Thresholds Can Undermine the Probative Value and Legal Utility of Forensic Pattern-matching Evidence

William C. Thompson, CSAFE researcher and professor emeritus of criminology, law and society at the University of California, Irvine

View the article at https://www.pnas.org/doi/10.1073/pnas.2301844120.

Abstract: Forensic pattern analysis requires examiners to compare the patterns of items such as fingerprints or tool marks to assess whether they have a common source. This article uses signal detection theory to model examiners’ reported conclusions (e.g., identification, inconclusive, or exclusion), focusing on the connection between the examiner’s decision threshold and the probative value of the forensic evidence. It uses a Bayesian network model to explore how shifts in decision thresholds may affect rates and ratios of true and false convictions in a hypothetical legal system. It demonstrates that small shifts in decision thresholds, which may arise from contextual bias, can dramatically affect the value of forensic pattern-matching evidence and its utility in the legal system.
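The threshold effect the abstract describes can be sketched in a few lines. Below is a minimal illustration, not the article's actual model, that treats an examiner's conclusion as a threshold decision on a similarity score, assuming unit-variance Gaussian score distributions for same-source and different-source comparisons. The separation parameter and threshold values are assumptions chosen for demonstration.

```python
# Minimal signal detection sketch (not the article's model): an examiner
# reports an "identification" when a similarity score exceeds a threshold.
# The probative value of that report is its likelihood ratio (LR).
from scipy.stats import norm

D_PRIME = 2.0  # assumed separation between same- and different-source score means

def identification_lr(threshold: float) -> float:
    """LR = P(report ID | same source) / P(report ID | different source)."""
    hit_rate = norm.sf(threshold, loc=D_PRIME)   # same-source scores ~ N(d', 1)
    false_alarm = norm.sf(threshold, loc=0.0)    # different-source scores ~ N(0, 1)
    return hit_rate / false_alarm

# Modest shifts in the decision threshold change the LR by an order of magnitude.
for t in (1.0, 2.0, 3.0):
    print(f"threshold = {t:.1f}  ->  LR of an identification = {identification_lr(t):.1f}")
```

With these assumed distributions, moving the threshold from 1.0 to 3.0 raises the likelihood ratio of an “identification” from roughly 5 to over 100, which is the sense in which small threshold shifts, for instance from contextual bias, can dramatically change the probative value of the same reported conclusion.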
