This CSAFE webinar was held on April 8, 2021.
Presenter:
Amanda Luby
Assistant Professor of Statistics, Swarthmore College
Presentation Description:
In this webinar, Amanda Luby explored how Item Response Theory (IRT), a class of statistical methods used widely in educational testing, can be applied to measure participant proficiency in error rate studies. Using the FBI “Black Box” data, Luby illustrated the strengths of an IRT-based analysis over traditional “percent correct” scoring.
The FBI “Black Box” Study (Ulery et al., 2011) was designed to estimate casework error rates of latent print comparisons in the United States, and additional “Black Box” studies have been called for to estimate error rates in other pattern evidence disciplines. While such studies provide error rate estimates aggregated over all examiners, individual examiners’ error rates cannot be compared directly, since each participant is typically asked to evaluate a different random subset of comparison tasks (items), and some items are more difficult than others.
IRT estimates participant proficiency while simultaneously accounting for varying difficulty among items. Using an IRT-based analysis, we find that the largest variability in examiner decisions occurs in print quality assessments and inconclusive decisions. We also find that some participants were likely to over- or under-report difficulty even after accounting for their proficiency, item difficulty, and other participants’ reported difficulty, and that examiners who report items as more difficult perform similarly to those who report them as easier. These results underscore the importance of better understanding the cognitive factors involved in latent print examination decisions.
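As a rough illustration (the notation below is not taken from the presentation, and the study’s actual model may be more elaborate), the simplest IRT model, the one-parameter or Rasch model, expresses the probability that examiner j responds correctly to item i in terms of the examiner’s proficiency \(\theta_j\) and the item’s difficulty \(b_i\):

\[
\Pr(\text{correct}_{ij}) \;=\; \frac{\exp(\theta_j - b_i)}{1 + \exp(\theta_j - b_i)}
\]

Under this kind of model, a higher \(\theta_j\) raises the probability of a correct response on every item, while a higher \(b_i\) lowers it for every examiner, so proficiency estimates remain comparable even when examiners are assigned different subsets of items.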