IRT for Forensics

Type: Webinar
Research Area: Latent Print

This CSAFE webinar was held on April 8, 2021.

Presenter:

Amanda Luby
Assistant Professor of Statistics, Swarthmore College

Presentation Description:

In this webinar, Amanda Luby explored how Item Response Theory (IRT), a class of statistical methods prominent in educational testing, can be used to measure participant proficiency in error rate studies. Using the FBI “Black Box” data, Luby illustrated the strengths of an IRT-based analysis over traditional “percent correct” scoring.

The FBI “Black Box” study (Ulery et al., 2011) was designed to estimate casework error rates for latent print comparisons in the United States, and similar “Black Box” studies have been called for in other pattern evidence disciplines. While such studies provide error rate estimates aggregated over all examiners, individual examiners’ error rates cannot be compared directly: each participant is typically asked to evaluate a random subset of comparison tasks (items), and some items are more difficult than others.

IRT estimates participant proficiency while simultaneously accounting for varying item difficulty. Using an IRT-based analysis, we find that the largest variability in examiner decisions occurs in print quality assessments and inconclusive decisions. We also find that some participants tend to over- or under-report difficulty even after accounting for their proficiency, item difficulty, and other participants’ reported difficulty, and that examiners who report items as more difficult perform similarly to those who report them as easier. These results underscore the importance of better understanding the cognitive factors involved in latent print examination decisions.
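The core idea can be made concrete with the simplest IRT model, the Rasch model, in which the probability that examiner j answers item i correctly is modeled as logistic(θ_j − b_i), where θ_j is the examiner’s proficiency and b_i is the item’s difficulty. The sketch below is a minimal, hypothetical illustration on simulated data (not Luby’s analysis and not the Black Box data); all sizes and values are assumptions for demonstration.

```python
# Minimal Rasch-model sketch on simulated data (hypothetical example):
# P(examiner j answers item i correctly) = logistic(theta_j - b_i).
import numpy as np

rng = np.random.default_rng(0)
n_examiners, n_items = 30, 100
theta_true = rng.normal(0.0, 1.0, n_examiners)  # latent proficiencies
b_true = rng.normal(0.0, 1.0, n_items)          # latent item difficulties

# Each examiner sees only a random subset of items, as in Black Box designs.
seen = rng.random((n_examiners, n_items)) < 0.3
p_true = 1.0 / (1.0 + np.exp(-(theta_true[:, None] - b_true[None, :])))
correct = (rng.random((n_examiners, n_items)) < p_true) & seen

# Joint maximum-likelihood fit by gradient ascent on the Rasch log-likelihood.
theta = np.zeros(n_examiners)
b = np.zeros(n_items)
lr = 0.05
for _ in range(2000):
    p_hat = 1.0 / (1.0 + np.exp(-(theta[:, None] - b[None, :])))
    resid = (correct - p_hat) * seen   # unseen items contribute nothing
    theta += lr * resid.sum(axis=1)    # d logL / d theta_j
    b -= lr * resid.sum(axis=0)        # d logL / d b_i = -sum_j resid_ji
    b -= b.mean()                      # anchor the scale: mean difficulty 0

print("proficiency recovery r =", np.corrcoef(theta, theta_true)[0, 1])
```

Because item difficulty enters the model directly, two examiners who evaluated different subsets of items are placed on a common proficiency scale, something a raw percent-correct score cannot provide.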

Related Resources

What’s in a Name? Consistency in Latent Print Examiners’ Naming Conventions and Perceptions of Minutiae Frequency

Fingerprint minutia types influence LPEs’ decision-making processes during analysis and evaluation, with features perceived to be rarer generally given more weight. However, no large-scale studies comparing examiner perceptions of minutiae…

An alternative statistical framework for measuring proficiency

Item Response Theory, a class of statistical methods used prominently in educational testing, can be used to measure LPE proficiency in annual tests or research studies, while simultaneously accounting for…

Examiner variability in pattern evidence: proficiency, inconclusive tendency, and reporting styles

The current approach to characterizing uncertainty in pattern evidence disciplines has focused on error rate studies, which provide aggregated error rates over many examiners and pieces of evidence. However, decisions…

Statistical Interpretation and Reporting of Fingerprint Evidence: FRStat Introduction and Overview

The FRStat is a tool designed to help quantify the strength of fingerprint evidence. Following lengthy development and validation with assistance from CSAFE and NIST, in 2017 the FRStat was…