Latent print quality in blind proficiency testing: Using quality metrics to examine laboratory performance

Journal: Forensic Science International
Published: 2021
Primary Author: Brett O. Gardner
Secondary Authors: Maddisen Neuman, Sharon Kelley
Research Area: Latent Print

Calls for blind proficiency testing in forensic science disciplines intensified following the 2009 National Academy of Sciences report and were echoed in the 2016 report by the President’s Council of Advisors on Science and Technology. Both practitioners and scholars have noted that “open” proficiency tests, in which analysts know they are being tested, allow for test-taking behavior that is not representative of behavior in routine casework. This study reports the outcomes of one laboratory’s blind quality control (BQC) program. Specifically, we describe results from approximately 2.5 years of blind cases in the latent print section (N = 376 latent prints submitted as part of 144 cases). We also used widely available quality metrics software (LQMetrics) to explore relationships between objective print quality and case outcomes. Results revealed that nearly all BQC prints (92.0%) were of sufficient quality to enter into AFIS. When prints had a source present in AFIS, 41.7% of print searches resulted in a candidate list containing the true source. Examiners committed no false positive errors, but other types of errors were more common. Average print quality was near the midpoint of the scale (53.4 on a 0-to-100 scale), though prints were evenly distributed across the Good, Bad, and Ugly categories. Quality metrics were significantly associated with sufficiency determinations, examiner conclusions, and examiner accuracy. Implications for blind testing and the use of quality metrics in both routine casework and proficiency testing are discussed.

Related Resources

What’s in a Name? Consistency in Latent Print Examiners’ Naming Conventions and Perceptions of Minutiae Frequency

Fingerprint minutia types influence latent print examiners’ (LPEs’) decision-making processes during analysis and evaluation, with features perceived to be rarer generally given more weight. However, no large-scale studies comparing examiner perceptions of minutiae…
An alternative statistical framework for measuring proficiency

Item Response Theory, a class of statistical methods used prominently in educational testing, can be used to measure LPE proficiency in annual tests or research studies, while simultaneously accounting for…
Examiner variability in pattern evidence: proficiency, inconclusive tendency, and reporting styles

The current approach to characterizing uncertainty in pattern evidence disciplines has focused on error rate studies, which provide aggregated error rates over many examiners and pieces of evidence. However, decisions…
Statistical Interpretation and Reporting of Fingerprint Evidence: FRStat Introduction and Overview

The FRStat is a tool designed to help quantify the strength of fingerprint evidence. Following lengthy development and validation with assistance from CSAFE and NIST, in 2017 the FRStat was…