Psychometrics for Forensic Fingerprint Comparisons

Journal: Quantitative Psychology. Springer Proceedings in Mathematics & Statistics, vol 353
Published: 2021
Primary Author: Amanda Luby
Secondary Authors: Anjali Mazumder, Brian Junker

Forensic science often involves the evaluation of crime-scene evidence to determine whether it matches a known-source sample, such as whether a fingerprint or DNA sample was left by a suspect or whether a bullet was fired from a specific firearm. Even as forensic measurement and analysis tools become increasingly automated and objective, final source decisions are often left to individual examiners’ interpretation of the evidence. Furthermore, forensic analyses often consist of a series of steps. While some of these steps may be straightforward and relatively objective, substantial variation may exist in the more subjective decisions. The current approach to characterizing uncertainty in forensic decision-making has largely centered on error rate studies, in which examiners evaluate a set of known-source comparisons and error rates are calculated by aggregating across examiners and identification tasks. We propose a new approach using Item Response Theory (IRT) and IRT-like models to account for differences in examiner behavior and for varying difficulty among identification tasks. There are, however, substantial differences between forensic decision-making and traditional IRT applications such as educational testing. For example, the structure of the response process must be considered, “answer keys” for comparison tasks do not exist, and information about participants and items is not available due to privacy constraints. In this paper, we provide an overview of forensic decision-making, outline challenges in applying IRT in practice, and survey some recent advances in the application of Bayesian psychometric models to fingerprint examiner behavior.
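To make the IRT idea concrete, the sketch below simulates examiner-by-task correctness data and fits a basic Rasch model, in which the probability of a correct source decision depends on an examiner proficiency parameter and a task difficulty parameter. This is a generic illustration, not the authors' actual (Bayesian) models; the sample sizes, parameter values, and joint maximum-likelihood fitting procedure are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical study: 40 examiners each evaluate 80 comparison tasks.
n_examiners, n_items = 40, 80
theta_true = rng.normal(0, 1, n_examiners)  # examiner proficiency
b_true = rng.normal(0, 1, n_items)          # task difficulty


def logistic(x):
    return 1.0 / (1.0 + np.exp(-x))


# Rasch model: P(correct decision) = logistic(proficiency - difficulty).
P = logistic(theta_true[:, None] - b_true[None, :])
Y = rng.binomial(1, P)  # observed correct (1) / erroneous (0) decisions

# Joint maximum-likelihood estimation by gradient ascent on the
# Bernoulli log-likelihood; residual = observed - predicted probability.
theta = np.zeros(n_examiners)
b = np.zeros(n_items)
lr = 0.5
for _ in range(500):
    resid = Y - logistic(theta[:, None] - b[None, :])
    theta += lr * resid.mean(axis=1)   # d log-lik / d theta_i
    b -= lr * resid.mean(axis=0)       # d log-lik / d b_j
    theta -= theta.mean()              # center for identifiability

# Estimated proficiencies should track the true ones closely.
corr = np.corrcoef(theta, theta_true)[0, 1]
```

Unlike an aggregate error rate, this formulation separates examiner skill from task difficulty: two examiners with the same raw error rate can receive different proficiency estimates if one faced systematically harder comparisons.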

Related Resources

What’s in a Name? Consistency in Latent Print Examiners’ Naming Conventions and Perceptions of Minutiae Frequency

Fingerprint minutia types influence LPEs’ decision-making processes during analysis and evaluation, with features perceived to be rarer generally given more weight. However, no large-scale studies comparing examiner perceptions of minutiae…
An alternative statistical framework for measuring proficiency

Item Response Theory, a class of statistical methods used prominently in educational testing, can be used to measure LPE proficiency in annual tests or research studies, while simultaneously accounting for…
Examiner variability in pattern evidence: proficiency, inconclusive tendency, and reporting styles

The current approach to characterizing uncertainty in pattern evidence disciplines has focused on error rate studies, which provide aggregated error rates over many examiners and pieces of evidence. However, decisions…
Statistical Interpretation and Reporting of Fingerprint Evidence: FRStat Introduction and Overview

The FRStat is a tool designed to help quantify the strength of fingerprint evidence. Following lengthy development and validation with assistance from CSAFE and NIST, in 2017 the FRStat was…