Accounting for individual differences among decision-makers with applications to the evaluation of forensic evidence

Published: 2019
Primary Author: Amanda Luby
Research Area: Latent Print

Forensic science often involves the comparison of crime-scene evidence to a known-source sample to determine if the evidence arose from the same source as the reference sample. Common examples include determining if a fingerprint or DNA was left by a suspect, or if a bullet was fired from a specific gun. Even as forensic measurement and analysis tools become increasingly accurate and objective, final source decisions are often left to individual examiners’ interpretation of the evidence (President’s Council of Advisors on Science and Technology, 2016). The current approach to characterizing uncertainty in forensic decision-making has largely centered around the calculation of error rates, which is problematic when different examiners respond to different sets of items, as their error rates are not directly comparable. Furthermore, forensic analyses often consist of a series of steps. While some steps may be straightforward and relatively objective, substantial variation may exist in more subjective decisions. The goal of this dissertation is to adapt and implement statistical models of human decision-making for the forensic science domain. Item Response Theory (IRT), a class of statistical methods used prominently in psychometrics and educational testing, is one approach that accounts for differences among decision-makers and additionally accounts for varying difficulty among decision-making tasks. By casting forensic decision-making tasks in the IRT framework, well-developed statistical methods, theory, and tools become available. However, substantial differences exist between forensic decision-making tasks and standard IRT applications such as educational testing. I focus on three developments in IRT for forensic settings: (1) modeling sequential responses explicitly, (2) determining expected answers from responses when an answer key does not exist, and (3) incorporating self-reported assessments of performance into the model. While this dissertation focuses on fingerprint analysis, specifically the FBI Black Box study (Ulery et al., 2011), the methods are broadly applicable to other forensic domains in which subjective decision-making plays a role, such as bullet comparisons, DNA mixture interpretation, and handwriting analysis.
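To make the IRT framework concrete, the sketch below writes out the canonical Rasch (one-parameter logistic) model that basic IRT builds on. It is offered only as a baseline reference, not as the dissertation's extended models; the symbols (proficiency θ_i, difficulty b_j) are notation chosen here for illustration.

```latex
% Canonical Rasch (1PL) IRT model -- a baseline sketch only;
% the dissertation's models extend this basic form.
%   \theta_i : latent proficiency of examiner i
%   b_j      : latent difficulty of comparison task (item) j
%   Y_{ij}   : 1 if examiner i decides item j correctly, 0 otherwise
\[
  P(Y_{ij} = 1 \mid \theta_i, b_j)
    = \frac{\exp(\theta_i - b_j)}{1 + \exp(\theta_i - b_j)}
\]
```

Because examiner proficiencies θ_i and item difficulties b_j are estimated jointly on a common scale, examiners can be compared even when they responded to different sets of items, which is precisely the comparability problem that raw error rates cannot address.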

Related Resources

Does Image Editing Improve the Quality of Latent Prints? An Analysis of Image‐Enhancement Techniques in One Crime Laboratory

The following was presented at the 74th Annual Scientific Conference of the American Academy of Forensic Sciences (AAFS), Seattle, Washington, February 21-25, 2022.
What types of information can and do latent print examiners review? A survey of practicing examiners

The following was presented at the 74th Annual Scientific Conference of the American Academy of Forensic Sciences (AAFS), Seattle, Washington, February 21-25, 2022.
Investigative Leads in Latent Prints: A Comparison of Laboratory Procedures

The following was presented at the 74th Annual Scientific Conference of the American Academy of Forensic Sciences (AAFS), Seattle, Washington, February 21-25, 2022.
Characterizing verification and blind proficiency testing at forensic laboratories

The 2014 Bureau of Justice survey of publicly funded forensic crime laboratories found that while 97% of the country’s 409 public forensic labs reported using some kind of proficiency testing,…