Every scientific technique involves some error, and legal standards for the admissibility of scientific evidence (e.g., Daubert v. Merrell Dow Pharmaceuticals, Inc., 1993; Kumho Tire Co. v. Carmichael, 1999) direct trial courts to consider known error rates. However, recent reviews of forensic science conclude that error rates for some common techniques are not well documented or even established (e.g., NAS, 2009; PCAST, 2016). Furthermore, many forensic analysts have historically denied the presence of error in their fields. It is therefore important to establish what forensic scientists actually know or believe about error rates in their disciplines. We surveyed 183 practicing forensic analysts to examine their beliefs and estimates regarding error rates in their various disciplines. Results revealed that analysts perceive all types of errors to be rare, with false positive errors even rarer than false negatives. Likewise, analysts typically reported preferring to minimize the risk of false positives over false negatives. Most analysts could not specify where error rates for their discipline were documented or published, and their estimates of error in their fields were widely divergent, with some estimates unrealistically low.
Perceptions and estimates of error rates in forensic science: A survey of forensic analysts
Journal: Forensic Science International
Published: 2019
Primary Author: Daniel C. Murrie
Secondary Authors: Brett O. Gardner, Sharon Kelley, Itiel E. Dror
Type: Publication
Research Area: Implementation and Practice
Related Resources
Demonstrative Evidence and the Use of Algorithms in Jury Trials
We investigate how the use of bullet comparison algorithms and demonstrative evidence may affect juror perceptions of reliability, credibility, and understanding of expert witnesses and presented evidence. The use of…
Interpretable algorithmic forensics
One of the most troubling trends in criminal investigations is the growing use of “black box” technology, in which law enforcement rely on artificial intelligence (AI) models or algorithms that…
What’s in a Name? Consistency in Latent Print Examiners’ Naming Conventions and Perceptions of Minutiae Frequency
Fingerprint minutia types influence latent print examiners' (LPEs') decision-making processes during analysis and evaluation, with features perceived to be rarer generally given more weight. However, no large-scale studies comparing examiner perceptions of minutiae…
Shifting decision thresholds can undermine the probative value and legal utility of forensic pattern-matching evidence
Forensic pattern analysis requires examiners to compare the patterns of items such as fingerprints or tool marks to assess whether they have a common source. This article uses signal detection…