Accounting for individual differences among decision-makers with applications to the evaluation of forensic evidence

Published: 2019
Primary Author: Amanda Luby
Research Area: Latent Print

Forensic science often involves the comparison of crime-scene evidence to a known-source sample to determine if the evidence arose from the same source as the reference sample. Common examples include determining if a fingerprint or DNA was left by a suspect, or if a bullet was fired from a specific gun. Even as forensic measurement and analysis tools become increasingly accurate and objective, final source decisions are often left to individual examiners’ interpretation of the evidence (President’s Council of Advisors on Science and Technology, 2016). The current approach to characterizing uncertainty in forensic decision-making has largely centered around the calculation of error rates, which is problematic when different examiners respond to different sets of items, as their error rates are not directly comparable. Furthermore, forensic analyses often consist of a series of steps. While some steps may be straightforward and relatively objective, substantial variation may exist in more subjective decisions.

The goal of this dissertation is to adapt and implement statistical models for human decision-making in the forensic science domain. Item Response Theory (IRT), a class of statistical methods used prominently in psychometrics and educational testing, is one approach that accounts for differences among decision-makers and additionally accounts for varying difficulty among decision-making tasks. By casting forensic decision-making tasks in the IRT framework, well-developed statistical methods, theory, and tools become available. However, substantial differences exist between forensic decision-making tasks and standard IRT applications such as educational testing. I focus on three developments in IRT for forensic settings: (1) modeling sequential responses explicitly, (2) determining expected answers from responses when an answer key does not exist, and (3) incorporating self-reported assessments of performance into the model.
While this dissertation focuses on fingerprint analysis, specifically the FBI Black Box study (Ulery et al., 2011), the methods are broadly applicable to other forensic domains in which subjective decision-making plays a role, such as bullet comparisons, DNA mixture interpretation, and handwriting analysis.
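The core IRT idea the abstract builds on can be illustrated with the simplest such model, the Rasch (1PL) model, where the probability of a correct decision depends on the gap between an examiner's proficiency and an item's difficulty. The sketch below is not from the dissertation; it is a minimal illustration, and all parameter values are hypothetical.

```python
import math

def rasch_probability(theta: float, b: float) -> float:
    """Probability that an examiner with proficiency `theta` makes the
    correct source decision on an item with difficulty `b`, under the
    Rasch (1PL) model: P = logistic(theta - b)."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# Hypothetical parameter values: a more proficient examiner (higher theta)
# has a higher chance of a correct decision on the same item, while a
# harder item (higher b) lowers that chance for every examiner.
easy, hard = -1.0, 1.5
novice, expert = -0.5, 2.0
print(round(rasch_probability(expert, easy), 3))   # expert, easy item
print(round(rasch_probability(novice, hard), 3))   # novice, hard item
```

Because proficiency and difficulty sit on a common scale, examiners who responded to different item sets remain comparable, which is exactly the shortcoming of raw error rates noted above.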

Related Resources

Commentary on Curley et al. Assessing cognitive bias in forensic decisions: a review and outlook

In their recent critical review titled “Assessing Cognitive Bias in Forensic Decisions: A Review and Outlook,” Curley et al. (1) offer a confused and incomplete discussion of “task relevance” in…
A Survey of Fingerprint Examiners' Attitudes towards Probabilistic Reporting

This CSAFE webinar was held on September 22, 2021. Presenter: Simon Cole University of California, Irvine Presentation Description: Over the past decade, with increasing scientific scrutiny on forensic reporting practices,…
Latent print quality in blind proficiency testing: Using quality metrics to examine laboratory performance

Calls for blind proficiency testing in forensic science disciplines intensified following the 2009 National Academy of Sciences report and were echoed in the 2016 report by the President’s Council of…
CSAFE 2021 Field Update

The 2021 Field Update was held June 14, 2021, and served as the closing to the first year of CSAFE 2.0. CSAFE brought together researchers, forensic science partners and interested…