Consensus on validation of forensic voice comparison

Journal: Science & Justice
Published: 2021
Primary Author: Geoffrey Stewart Morrison
Secondary Authors: Ewald Enzinger, Vincent Hughes, Michael Jessen, Didier Meuwly, Cedric Neumann, S. Planting, William C. Thompson, David van der Vloed, Rolf J.F. Ypma, Cuiling Zhang
Research Area: Forensic Statistics

Since the 1960s, there have been calls for forensic voice comparison to be empirically validated under casework conditions. Since around 2000, there have been an increasing number of researchers and practitioners who conduct forensic-voice-comparison research and casework within the likelihood-ratio framework. In recent years, this community of researchers and practitioners has made substantial progress toward validation under casework conditions becoming a standard part of practice: Procedures for conducting validation have been developed, along with graphics and metrics for representing the results, and an increasing number of papers are being published that include empirical validation of forensic-voice-comparison systems under conditions reflecting casework conditions. An outstanding question, however, is: In the context of a case, given the results of an empirical validation of a forensic-voice-comparison system, how can one decide whether the system is good enough for its output to be used in court? This paper provides a statement of consensus developed in response to this question. Contributors included individuals who had knowledge and experience of validating forensic-voice-comparison systems in research and/or casework contexts, and individuals who had actually presented validation results to courts. They also included individuals who could bring a legal perspective on these matters, and individuals with knowledge and experience of validation in forensic science more broadly. We provide recommendations on what practitioners should do when conducting evaluations and validations, and what they should present to the court. Although our focus is explicitly on forensic voice comparison, we hope that this contribution will be of interest to an audience concerned with validation in forensic science more broadly. Although not written specifically for a legal audience, we hope that this contribution will still be of interest to lawyers.
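The abstract mentions metrics for representing validation results. In the forensic-voice-comparison literature, a widely used metric of this kind is the log-likelihood-ratio cost (Cllr), which scores a set of likelihood-ratio outputs from known same-speaker and known different-speaker trials. As an illustration only (the function name and data are hypothetical, not from the paper), a minimal sketch of Cllr is:

```python
import math

def cllr(same_speaker_lrs, diff_speaker_lrs):
    """Log-likelihood-ratio cost (Cllr). Lower is better: 0 indicates a
    perfect system, and a value of 1 corresponds to an uninformative
    system that always outputs LR = 1."""
    # Penalty for same-speaker trials where the LR should be large
    p_ss = sum(math.log2(1 + 1 / lr) for lr in same_speaker_lrs) / len(same_speaker_lrs)
    # Penalty for different-speaker trials where the LR should be small
    p_ds = sum(math.log2(1 + lr) for lr in diff_speaker_lrs) / len(diff_speaker_lrs)
    return 0.5 * (p_ss + p_ds)

# An uninformative system that always outputs LR = 1 scores Cllr = 1
print(cllr([1.0, 1.0], [1.0, 1.0]))  # → 1.0
```

A well-calibrated, discriminating system yields large LRs on same-speaker trials and small LRs on different-speaker trials, driving Cllr toward 0; how small a Cllr is "good enough" for court is precisely the kind of question the consensus addresses.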

Related Resources

Forensic Footwear: A Retrospective of the Development of the MANTIS Shoe Scanning System
There currently are no shoe-scanning devices developed in the United States that can operate in a real-world, variable-weather environment in real time. Forensics-focused groups, including the NIJ, expressed the need for…

A Quantitative Approach for Forensic Footwear Quality Assessment using Machine and Deep Learning
Forensic footwear impressions play a crucial role in criminal investigations, assisting in possible suspect identification. The quality of an impression collected from a crime scene directly impacts the forensic information…

Enhancing forensic shoeprint analysis: Application of the Shoe-MS algorithm to challenging evidence
Quantitative assessment of pattern evidence is a challenging task, particularly in the context of forensic investigations where the accurate identification of sources and classification of items in evidence are critical.…

Computational Shoeprint Analysis for Forensic Science
Shoeprints are a common type of evidence found at crime scenes and are regularly used in forensic investigations. However, their utility is limited by the lack of reference footwear databases…