Psychometrics for Forensic Fingerprint Comparisons

Journal: Quantitative Psychology. Springer Proceedings in Mathematics & Statistics, vol 353
Published: 2021
Primary Author: Amanda Luby
Secondary Authors: Anjali Mazumder, Brian Junker
Type: Publication

Forensic science often involves the evaluation of crime-scene evidence to determine whether it matches a known-source sample, such as whether a fingerprint or DNA sample was left by a suspect or whether a bullet was fired from a specific firearm. Even as forensic measurement and analysis tools become increasingly automated and objective, final source decisions are often left to individual examiners’ interpretation of the evidence. Furthermore, forensic analyses often consist of a series of steps: while some of these steps may be straightforward and relatively objective, substantial variation may exist in the more subjective decisions. The current approach to characterizing uncertainty in forensic decision-making has largely centered on error rate studies, in which examiners evaluate a set of known-source comparisons and error rates are calculated by aggregating across examiners and identification tasks. We propose a new approach using Item Response Theory (IRT) and IRT-like models to account for differences in examiner behavior and for varying difficulty among identification tasks. There are, however, substantial differences between forensic decision-making and traditional IRT applications such as educational testing. For example, the structure of the response process must be considered, “answer keys” for comparison tasks do not exist, and information about participants and items is not available due to privacy constraints. In this paper, we provide an overview of forensic decision-making, outline challenges in applying IRT in practice, and survey recent advances in applying Bayesian psychometric models to fingerprint examiner behavior.
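As a rough illustration of the kind of IRT model the abstract refers to (a sketch only, using generic IRT notation; the paper's own model specifications may differ), a Rasch-type formulation writes the probability that examiner $j$ answers comparison task $i$ correctly as

\[
P(Y_{ij} = 1 \mid \theta_j, b_i) = \frac{\exp(\theta_j - b_i)}{1 + \exp(\theta_j - b_i)},
\]

where $\theta_j$ is the examiner's proficiency and $b_i$ is the task's difficulty. In a Bayesian treatment, both sets of parameters receive priors (for example, $\theta_j \sim N(0, \sigma_\theta^2)$ and $b_i \sim N(0, \sigma_b^2)$) and are estimated jointly, so that an aggregate error rate can be decomposed into examiner-level and task-level effects rather than averaged over them.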
Related Resources
Does image editing improve the quality of latent prints? An analysis of image-editing techniques in one crime laboratory
Field research within latent print comparison has remained sparse despite an otherwise growing body of literature examining the discipline. Studies examining how ACE-V procedures are implemented within…
Reply to Response to Vacuous standards – Subversion of the OSAC standards-development process
This Letter to the Editor is a reply to Mohammed et al. (2021) https://doi.org/10.1016/j.fsisyn.2021.100145, which in turn is a response to Morrison et al. (2020) “Vacuous standards – subversion of…
Jury Perception of Bullet Matching Algorithms and Demonstrative Evidence
Presented at the Joint Statistical Meetings
Modeling Covarying Responses in Complex Tasks
In testing situations, participants are often asked for supplementary responses in addition to the primary response of interest, which may include quantities like confidence or reported difficulty. These…