Commentary on Curley et al., "Assessing Cognitive Bias in Forensic Decisions: A Review and Outlook"
Journal: Journal of Forensic Sciences
Published: 2020
Primary Author: William C. Thompson
Type: Publication
Research Area: Latent Print

In their recent critical review, "Assessing Cognitive Bias in Forensic Decisions: A Review and Outlook," Curley et al. (1) offer a confused and incomplete discussion of "task relevance" in forensic science. Their failure to adopt a clear and appropriate definition of "task relevance" undermines the central conclusion of their article: the assertion that it is not necessarily an error for forensic scientists to rely on task-irrelevant information and that "task-irrelevant contextual information may sometimes aid forensic decision makers." This conceptual flaw becomes clear when "task relevance" is defined appropriately, in the manner adopted by the U.S. National Commission on Forensic Science (2). The Commission's definition provides a bright-line standard for distinguishing contextual information that is helpful and should be considered from contextual information that is unhelpful and should not be considered. Once that matter is clarified, it becomes possible to discuss intelligently whether steps should be taken to minimize examiners' exposure to task-irrelevant information in order to reduce the potential for contextual bias.