
Modeling Covarying Responses in Complex Tasks

Conference/Workshop:
Quantitative Psychology: The 86th Annual International Meeting of the Psychometric Society
Published: 2022
Primary Author: Amanda Luby
Research Area: Latent Print

In testing situations, participants are often asked for supplementary responses in addition to the primary response of interest, which may include quantities like confidence or reported difficulty. These additional responses can be incorporated into a psychometric model either as a predictor of the main response or as a secondary response. In this paper we explore both of these approaches for incorporating participants' reported difficulty into a psychometric model using an error rate study of fingerprint examiners. Participants were asked to analyze print pairs and make determinations about the source, which can be scored as correct or incorrect decisions. Additionally, participants were asked to report the difficulty of the print pair on a five-point scale. In this paper, we model (a) the responses of individual examiners without incorporating reported difficulty using a Rasch model, (b) the responses using their reported difficulty as a predictor, and (c) the responses and their reported difficulty as a multivariate response variable. We find that approach (c) results in more balanced classification errors, but incorporating reported difficulty using either approach does not lead to substantive changes in proficiency or difficulty estimates. These results suggest that, while there are individual differences in reported difficulty, these differences appear to be unrelated to examiners' proficiency in correctly distinguishing matched from non-matched fingerprints.
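To make the modeling approaches concrete, the following is a minimal sketch of the response probabilities involved. Under a standard Rasch model (approach (a)), the probability that examiner i answers item j correctly is logistic(theta_i − b_j), where theta is examiner proficiency and b is item difficulty. Approach (b) can be sketched by adding the examiner's centered five-point difficulty rating as a covariate with a hypothetical coefficient gamma. The function names and the gamma parameter here are illustrative, not the paper's actual specification.

```python
import math

def rasch_prob(theta: float, b: float) -> float:
    """Rasch model: P(correct) = logistic(proficiency - item difficulty)."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def rasch_with_difficulty(theta: float, b: float,
                          gamma: float, reported: int) -> float:
    """Sketch of approach (b): reported difficulty (1-5 scale, centered
    at the midpoint 3) enters as a predictor with coefficient gamma."""
    covariate = gamma * (reported - 3)
    return 1.0 / (1.0 + math.exp(-(theta - b + covariate)))

# A proficient examiner (theta = 1) on an item of average difficulty (b = 0)
p = rasch_prob(1.0, 0.0)

# Same examiner and item, but the pair was reported as hard (5 of 5),
# with an assumed negative gamma lowering the success probability.
p_hard = rasch_with_difficulty(1.0, 0.0, gamma=-0.5, reported=5)
```

Approach (c), treating correctness and reported difficulty as a joint multivariate response, would instead require a second measurement model for the ratings sharing examiner-level parameters, which is beyond this sketch.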

Related Resources

What’s in a Name? Consistency in Latent Print Examiners’ Naming Conventions and Perceptions of Minutiae Frequency

Fingerprint minutia types influence LPEs’ decision-making processes during analysis and evaluation, with features perceived to be rarer generally given more weight. However, no large-scale studies comparing examiner perceptions of minutiae…
An alternative statistical framework for measuring proficiency

Item Response Theory, a class of statistical methods used prominently in educational testing, can be used to measure LPE proficiency in annual tests or research studies, while simultaneously accounting for…
Examiner variability in pattern evidence: proficiency, inconclusive tendency, and reporting styles

The current approach to characterizing uncertainty in pattern evidence disciplines has focused on error rate studies, which provide aggregated error rates over many examiners and pieces of evidence. However, decisions…
Statistical Interpretation and Reporting of Fingerprint Evidence: FRStat Introduction and Overview

The FRStat is a tool designed to help quantify the strength of fingerprint evidence. Following lengthy development and validation with assistance from CSAFE and NIST, in 2017 the FRStat was…