IRT for Forensics

Type: Webinar
Research Area: Latent Print

This CSAFE webinar was held on April 8, 2021.

Presenter:

Amanda Luby
Assistant Professor of Statistics, Swarthmore College

Presentation Description:

In this webinar, Amanda Luby explored how Item Response Theory (IRT), a class of statistical methods used prominently in educational testing, can be used to measure participant proficiency in error rate studies. Using the FBI “Black Box” data, Luby illustrated the strengths of an IRT-based analysis over traditional “percent correct” scoring.

The FBI “Black Box” Study (Ulery et al., 2011) was designed to estimate casework error rates of latent print comparisons in the United States, and additional “Black Box” studies have been called for to estimate error rates in other pattern evidence disciplines. While such studies provide error rate estimates aggregated over all examiners, individual examiners’ error rates cannot be compared directly: each participant is typically asked to evaluate a different random subset of comparison tasks (items), and some items are more difficult than others.

IRT estimates participant proficiency while simultaneously accounting for varying item difficulty. Using an IRT-based analysis, we find that the largest variability in examiner decisions occurs in print quality assessments and inconclusive decisions. We also find that some participants were likely to over- or under-report difficulty even after accounting for their proficiency, item difficulty, and other participants’ reported difficulty, and that examiners who report items to be more difficult perform similarly to those who report items to be easier. These results underscore the importance of better understanding the cognitive factors involved in latent print examination decisions.

Related Resources

Commentary on Curley et al. Assessing cognitive bias in forensic decisions: a review and outlook

In their recent critical review titled “Assessing Cognitive Bias in Forensic Decisions: A Review and Outlook,” Curley et al. (1) offer a confused and incomplete discussion of “task relevance” in…
A Survey of Fingerprint Examiners' Attitudes towards Probabilistic Reporting

This CSAFE webinar was held on September 22, 2021. Presenter: Simon Cole University of California, Irvine Presentation Description: Over the past decade, with increasing scientific scrutiny on forensic reporting practices,…
Latent print quality in blind proficiency testing: Using quality metrics to examine laboratory performance

Calls for blind proficiency testing in forensic science disciplines intensified following the 2009 National Academy of Sciences report and were echoed in the 2016 report by the President’s Council of…
CSAFE 2021 Field Update

The 2021 Field Update was held June 14, 2021, and served as the closing to the first year of CSAFE 2.0. CSAFE brought together researchers, forensic science partners and interested…