Latent Print Analysis

Overarching Goals

Fingerprints have been used as evidence for decades, and their probative value has been reaffirmed in countless legal decisions. They remain the most common form of pattern evidence analysis. CSAFE research focuses on improving methods of latent print analysis and examination through investigation of quality metrics and the impact of image quality on examiner conclusions, the role of proficiency testing, and the influence of forensic processing in crime labs.

Karen Kafadar

Commonwealth Professor and Chair, Co-Director of CSAFE

University of Virginia

Robin Mejia

Director, Program in Statistics and Human Rights, Co-Director of CSAFE

Carnegie Mellon University

Sharon Kelley

Assistant Professor

University of Virginia

Brett Gardner

Postdoctoral Researcher

University of Virginia

Simon Cole

Professor

University of California, Irvine

Amanda Luby

Assistant Professor

Swarthmore College

Adele Quigley-McBride

Assistant Professor

Simon Fraser University

Additional Team Members

Keith Inman keith.inman@csueastbay.edu

Daniel Murrie
murrie@virginia.edu

Brandon Garrett
bgarrett@law.duke.edu

Robert Ramotowski (NIST) robert.ramotowski@nist.gov

Focus Areas

Latent print examiners (LPEs) recognize the connection between “accuracy of call” and the number and quality of features used to assess the evidence. The first step of ACE-V requires a subjective assessment of “quality.” SWGFAST developed a “sufficiency chart” that showed contours for “poor,” “adequate,” and “very good” accuracy as a function of the number of minutiae in a latent fingerprint image and their “quality.” This chart was not based on data: “quality” was not defined, and the contours for the regions were based on “expert opinion.”

Phase 1 of this project developed a new quality metric and implemented it, along with two others, in computer code. The metrics are objective (the same input always yields the same quality score) and have been calculated on prints from two sources of data (NIST and HFSC).
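To illustrate what “objective” means here, consider a toy score. This is a hypothetical illustration only, not one of the CSAFE metrics: the point is simply that a metric defined as a deterministic function of the image pixels returns the same score for the same input, unlike a subjective examiner rating.

```python
def contrast_quality(pixels):
    """Toy quality score: RMS (root-mean-square) contrast of a grayscale
    image, given as a list of rows of intensities in [0, 255].

    Hypothetical illustration, not a CSAFE algorithm. The score is a
    deterministic function of the input, so repeated scoring of the
    same image always gives an identical result."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return (sum((p - mean) ** 2 for p in flat) / len(flat)) ** 0.5

# A crisp ridge-like patch scores higher than a washed-out one.
ridges = [[0, 255, 0, 255]] * 4
washed_out = [[120, 130, 120, 130]] * 4
```

Real quality metrics operate on ridge structure rather than raw contrast, but they share this property of reproducibility.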

Phase 2 aims to correlate these metrics with “accuracy of call” based on actual practice. Past studies of LPE accuracy have involved one step of the ACE-V process, with LPEs who knew they were participating in a study: compare two prints and decide if the sources are the same, different, or inconclusive. In this study, we will run blind samples through actual lab processes, where the study participant is unaware that the print is a test case. The print may have a “hit” in the crime lab’s database, or it may not (where the correct conclusion should be “no hit”). To ground the study in real-world practice, we start with the relatively large quality assurance program at HFSC, using their test blinds as well as prints from subjects that have no match in any database.

Quality Metric Algorithms for Fingerprint Images: CSAFE created a webpage to assist lab managers in the assessment part of the latent print analysis process. It is a collection of several open-source quality metric algorithms with links to additional information and relevant papers. New algorithms will be added to this webpage as they are developed. 

Fingerprints remain the most common form of pattern evidence, and proficiency tests play a key role in the qualification of latent print examiners. Although proficiency tests are widely used in forensic science for training and procedural purposes (Koehler, 2013; AAAS, 2017), they are not being utilized to their full potential, as differences in difficulty across proficiency tests are not taken into account (Luby and Kadane, 2018). Furthermore, while error rate studies (such as the FBI “Black Box” [Ulery et al., 2011] and “White Box” [Ulery et al., 2014] studies) provide valuable estimates of an overall error rate aggregated across examiners, individual examiner error rates do not adjust for the fact that participants are randomly assigned different subsets of prints, some of which may contain more difficult comparisons than others. It is thus impossible to understand the variability of individual examiner performance using the raw error rates alone.

Item Response Theory (IRT), which is used extensively in educational testing, is one approach that measures both participant proficiency and item difficulty. CSAFE researchers have been successful in initial applications of straightforward IRT to proficiency tests for latent prints (Luby and Kadane, 2018) and the FBI “Black Box” Study (Luby, 2019b), including extensions that do not require conclusions to be scored (Luby, Mazumder & Junker, 2020; Luby, 2019a). This project will expand upon previous work through the analysis of existing data (including the FBI “Black Box” and “White Box” studies and additional proficiency test results) and the development of more complex IRT models suited to forensic tasks (e.g., incorporating the “verification” step of the ACE-V process for latent print examination). Results from these analyses will be published in the statistics and forensic science literature and may lead to additional insights about the cognitive processes involved in pattern recognition by latent print examiners, as well as the variability of these processes within and across examiners. Open-source software for performing IRT analyses will be developed for use by proficiency test providers, forensic laboratories performing “in-house” proficiency tests, and other researchers. The project will culminate with the development and analysis of a pilot “performance” proficiency exam for latent print examiners that could be used to estimate error rates across different classes of comparisons as well as provide feedback to latent print examiners.
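To sketch the underlying idea, consider the Rasch model, the simplest member of the IRT family (this is a minimal illustration, not CSAFE’s actual models): examiner proficiency θ and comparison difficulty b sit on a common scale, and the probability of a correct conclusion is modeled as logistic(θ − b). A joint maximum-likelihood fit on synthetic data might look like:

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def fit_rasch(responses, iters=300, lr=0.1):
    """Joint maximum-likelihood fit of a Rasch model by gradient ascent.

    responses[i][j] is 1 if examiner i reached the correct conclusion on
    comparison j, else 0. Returns (proficiencies, difficulties), with the
    scale fixed by centering the difficulties at zero."""
    n, m = len(responses), len(responses[0])
    theta, b = [0.0] * n, [0.0] * m
    for _ in range(iters):
        grad_t, grad_b = [0.0] * n, [0.0] * m
        for i in range(n):
            for j in range(m):
                resid = responses[i][j] - sigmoid(theta[i] - b[j])
                grad_t[i] += resid   # d logL / d theta_i
                grad_b[j] -= resid   # d logL / d b_j
        theta = [t + lr * g / m for t, g in zip(theta, grad_t)]
        b = [d + lr * g / n for d, g in zip(b, grad_b)]
        shift = sum(b) / m           # identifiability: mean difficulty = 0
        theta = [t - shift for t in theta]
        b = [d - shift for d in b]
    return theta, b

# Synthetic study: 20 examiners of known proficiency, 40 comparisons of
# known difficulty; responses are drawn from the model itself.
random.seed(0)
true_theta = [random.uniform(-2, 2) for _ in range(20)]
true_b = [random.uniform(-2, 2) for _ in range(40)]
responses = [[int(random.random() < sigmoid(t - d)) for d in true_b]
             for t in true_theta]
est_theta, est_b = fit_rasch(responses)
```

Raw percent-correct would penalize an examiner who happened to draw harder comparisons; the fitted θ adjusts for the difficulty of the items each examiner actually saw.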

Proficiency testing of analysts is widely accepted as a core component of quality assurance at testing laboratories and is a requirement for accreditation of forensic laboratories. In forensic disciplines, there are two primary types of proficiency tests: declared tests, in which analysts know they are being tested, generally with tests from a commercial vendor that do not replicate a full case; and blind tests, in which test cases are submitted as part of a laboratory’s regular case flow and analysts do not know which case is a test. 

This project will assess types of proficiency tests used in forensic laboratories; assess analyst perceptions of declared proficiency tests; assess barriers to implementation of blind proficiency tests and develop recommendations to address them; analyze the results of blind proficiency tests; and compare declared forensic proficiency testing programs to blind proficiency testing. Blind testing is widely used in other fields, but adoption has been slow at forensic laboratories, in part because cultural norms have not promoted blind testing and in part because it is far more logistically challenging to create blind tests for a wide range of forensic disciplines than, for example, to create blind drug tests. This project will address challenges to increasing the use of blind testing in forensics, support the implementation of blind testing at state, regional, and local laboratories, and assess the results of these programs.

In this project, we will work with Collaborative Testing Services (CTS) to expand on studies initiated in the first phase of CSAFE, and we will continue established collaborations with two laboratories, the Houston Forensic Science Center and the Allegheny County Office of the Medical Examiner. Additionally, we will build on new relationships that were initiated during the first phase of CSAFE to increase the use of blind proficiency testing at two additional laboratories. We will document these experiences in CSAFE white papers covering lessons learned and best practices for laboratories of a range of sizes. We will evaluate the results of these programs to provide data of use to the laboratories themselves and to better assess both the value of blind proficiency testing and the results regarding LPA as performed in practice. We will also work with laboratories to develop a collaborative infrastructure to support the implementation of blinding in mid-sized and possibly smaller laboratories that do not have the infrastructure of HFSC. Our goal is to work with labs to develop their own internal processes for blind proficiency testing, not to have CSAFE simply perform a one-time external blind test.

Although there has been an increasing amount of research on the validity and reliability of latent print comparisons (e.g., Ulery et al., 2011; 2012; Langenburg et al., 2012), the field has much less data regarding “field reliability,” or examiner agreement in routine, real-world practice. This research is particularly important given the growing body of studies indicating that latent print comparison can be influenced by contextual information (e.g., Langenburg, Champod, & Wertheim, 2009; Stevenage & Bennett, 2017).

Thus, the overarching goal of this project is to better understand, and ultimately improve, forensic processing in labs and evaluate the field reliability of latent print comparison procedures. We will expand our current work with the Houston Forensic Science Center to include at least two other laboratories that will allow for both intra- and inter-laboratory comparisons and a more informed evaluation of typical practice and case flow in crime laboratories. We plan to continue this line of research by (1) providing descriptive examinations of laboratory procedures in at least three laboratories with contrasting policies, (2) examining the influence of contextual factors and case processing variables across different laboratories, (3) exploring the implementation of procedural changes where possible, such as the use of triage systems based on existing quality metrics (e.g., LQMetrics) or other indicators, and (4) exploring the financial and operational costs associated with procedural changes. We will also work to design larger experimental studies in which multiple laboratories process identical prints and their conclusions can be compared to further the goal of better understanding the field reliability of latent print comparison and how different workflows affect outcomes. We hope this work will create an exemplar for using data to improve case processing and reliability, implementing the use of quantitative metrics (e.g., objective fingerprint quality metrics) to improve accuracy and reliability, and evaluating procedural changes — including the financial impact on labs.

One of the primary motivators for the reform of forensic science is the desire to avoid wrongful convictions. Forensic science has long been identified as one of several key contributors to wrongful convictions, but much less is known about how it contributes.

The National Registry of Exonerations is the largest and most authoritative public data set about exonerations in the United States. It is headquartered at the University of California, Irvine, and it is a joint project of UCI, the University of Michigan and Michigan State University. The PI is director and associate editor. 

Exonerations are not coterminous with wrongful convictions, but they are the best-known proxy for studying them. False or misleading forensic evidence is the fourth leading cause of wrongful conviction in the Registry’s data set. Over 500 cases are currently coded for this cause.

The research will better inform the forensic community about how and when forensic science can do the opposite of what it is intended to: corroborate, rather than refute, investigators’ suspicion of an innocent person.

 

Fingerprint analysts decide whether a fingerprint found at a crime scene is similar enough to a fingerprint from a known source to conclude that they came from the same individual. To do this, a fingerprint analyst looks at the overall friction ridge pattern and at smaller features within the fingerprint pattern (called minutiae). Analysts identify all visible minutiae in the fingerprints and then assess the extent to which the type and arrangement of observed minutiae correspond in both prints.                       

Observing and evaluating the number and arrangement of minutiae is essential to the fingerprint analyst’s task, but there are no standardized rules about how many corresponding minutiae are required to determine whether two fingerprints are similar enough to have come from the same source or how many dissimilarities preclude such a conclusion. In fact, studies show wide variation in how many observable minutiae analysts require before completing a full analysis and in how many corresponding minutiae analysts expect to see before concluding two fingerprints are from the same source (Ulery et al., 2013; 2014).  

Still, the count and arrangement of features within fingerprints are not the only pieces of information available to analysts. Some minutiae are extremely common (e.g., ridge endings), some are extremely rare (e.g., trifurcations), and other minutiae fall somewhere in between (e.g., hooks). But little research has documented the actual frequency of fingerprint minutiae (see Langenburg, 2011) or explored analysts’ perceptions of minutiae frequency (e.g., Osterberg, 1964). Analysts are not explicitly taught to incorporate the relative frequency of minutiae into their analytic conclusion, but as they gain work experience, analysts will inevitably form beliefs about the prevalence of different minutiae types. Yet, because people struggle to accurately estimate base rates, analysts’ perceptions are unlikely to match true base rates of minutiae. 

Given the broader literature on human cognition, fingerprint analysts are likely incorporating their beliefs about minutiae frequency into their analytic decisions, but we do not know how they do this or what effect this information may have on the nature and quality of their decisions. The current project aims to gather information about perceived and actual minutiae frequency. We hope to examine analyst beliefs about minutiae frequency and establish objective base rates for minutiae, which will ultimately allow for an evaluation of the accuracy of analyst beliefs. Then, we will investigate the influence of analyst perceptions on their decision-making as they perform fingerprint comparisons.
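Establishing objective base rates is, at bottom, a frequency-estimation problem. The sketch below uses hypothetical pooled counts (illustrative only, not real annotation data) with Laplace smoothing, so that rare minutia types absent from a given sample still receive a small nonzero rate:

```python
from collections import Counter

def minutiae_base_rates(observations, types=None, alpha=1.0):
    """Estimate minutia-type base rates from pooled annotations.

    observations: list of minutia-type labels pooled across prints.
    types: optional full list of types, so types never observed in the
           sample still get a smoothed nonzero rate.
    alpha: Laplace pseudocount controlling the smoothing strength."""
    counts = Counter(observations)
    if types is None:
        types = sorted(counts)
    total = len(observations) + alpha * len(types)
    return {t: (counts[t] + alpha) / total for t in types}

# Hypothetical pooled counts (not real annotation data): ridge endings
# common, trifurcations rare, consistent with the examples in the text.
sample = (["ridge ending"] * 60 + ["bifurcation"] * 35
          + ["hook"] * 4 + ["trifurcation"] * 1)
rates = minutiae_base_rates(sample)
```

Analysts’ stated frequency estimates could then be compared directly against such empirical rates to quantify how far perceptions drift from true base rates.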

Knowledge Transfer


What’s in a Name? Consistency in Latent Print Examiners’ Naming Conventions and Perceptions of Minutiae Frequency

Published: 2023 | By: Heidi Eldridge

Fingerprint minutia types influence LPEs’ decision-making processes during analysis and evaluation, with features perceived to be rarer generally given more weight. However, no large-scale studies comparing examiner perceptions of minutiae frequency to empirical counts exist. Additionally, examiner naming conventions for…


An alternative statistical framework for measuring proficiency

Published: 2023 | By: Amanda Luby

Item Response Theory, a class of statistical methods used prominently in educational testing, can be used to measure LPE proficiency in annual tests or research studies, while simultaneously accounting for varying difficulty among comparisons. Using black box studies in latent…


Examiner variability in pattern evidence: proficiency, inconclusive tendency, and reporting styles

Published: 2023 | By: Amanda Luby

The current approach to characterizing uncertainty in pattern evidence disciplines has focused on error rate studies, which provide aggregated error rates over many examiners and pieces of evidence. However, decisions are often not unanimous and error frequency is likely to…


Statistical Interpretation and Reporting of Fingerprint Evidence: FRStat Introduction and Overview

Published: 2023 | By: Jeff Salyards

The FRStat is a tool designed to help quantify the strength of fingerprint evidence. Following lengthy development and validation with assistance from CSAFE and NIST, in 2017 the FRStat was implemented at the USACIL. FRStat is now freely available and…


A method for quantifying individual decision thresholds of latent print examiners

Published: 2023 | By: Amanda Luby

In recent years, ‘black box’ studies in forensic science have emerged as the preferred way to provide information about the overall validity of forensic disciplines in practice. These studies provide aggregated error rates over many examiners and comparisons, but errors…


How Minutiae Frequency is Perceived and Used by Fingerprint Analysts in the Evaluation of Fingerprint Evidence

Published: 2023 | By: Adele Quigley-McBride

Analysts consider the appearance, placement, and number of features within a fingerprint pattern (called minutiae) that correspond when deciding whether two fingerprints originated from the same person. Little is known about the actual base rates for different minutiae. That said,…


Analyzing spatial responses: A comparison of IRT-based approaches

Published: 2023 | By: Amanda Luby

We investigate two approaches for analyzing spatial coordinate responses using models inspired by Item Response Theory (IRT). In the first, we use a two-stage approach to first construct a pseudoresponse matrix using the spatial information and then apply standard IRT…


Perceptions of blind proficiency testing among latent print examiners

Published: 2023 | By: Brett O. Gardner

In recent years, scholars have levied multiple criticisms against traditional proficiency testing procedures in forensic laboratories. Consequently, on several occasions, authorities have formally recommended that laboratories implement blind proficiency testing procedures. Implementation has been slow, but laboratory management has increasingly…


What types of information can and do latent print examiners review? A survey of practicing examiners

Published: 2023 | By: Brett Gardner

Understanding typical work practices is important to understanding the decision-making process underlying latent print comparison and improving the reliability of the discipline. Despite efforts to standardize work practices, a growing literature has demonstrated that contextual effects can influence every aspect…


Does image editing improve the quality of latent prints? An analysis of image-editing techniques in one crime laboratory

Published: 2023 | By: Brett Gardner

Field research within latent print comparison has remained sparse in the context of an otherwise growing body of literature examining the discipline. Studies examining how ACE-V procedures are implemented within active crime laboratories are especially lacking in light of research…


Modeling Covarying Responses in Complex Tasks

Published: 2022 | By: Amanda Luby

In testing situations, participants are often asked for supplementary responses in addition to the primary response of interest, which may include quantities like confidence or reported difficulty. These additional responses can be incorporated into a psychometric model either…


Analyzing spatial responses: A comparison of IRT-based approaches, Conference Presentation

Published: 2022 | By: Amanda Luby

We investigate two approaches for analyzing spatial coordinate responses using models inspired by Item Response Theory (IRT). In the first, we use a two-stage approach to first construct a pseudoresponse matrix using the spatial information and then apply standard IRT…


Characterizing Variability in Forensic Decision-Making with Item Response Theory

Published: 2022 | By: Amanda Luby

This presentation is from the 2022 Joint Statistical Meetings


Measuring Proficiency among Latent Print Examiners: A Statistical Approach from Standardized Testing

Published: 2022 | By: Amanda Luby

This presentation is from the 74th Annual Scientific Conference of the American Academy of Forensic Sciences


Does Image Editing Improve the Quality of Latent Prints? An Analysis of Image‐Enhancement Techniques in One Crime Laboratory

Published: 2022 | By: Brett Gardner

The following was presented at the 74th Annual Scientific Conference of the American Academy of Forensic Sciences (AAFS), Seattle, Washington, February 21-25, 2022


What types of information can and do latent print examiners review? A survey of practicing examiners

Published: 2022 | By: Brett Gardner

The following was presented at the 74th Annual Scientific Conference of the American Academy of Forensic Sciences (AAFS), Seattle, Washington, February 21-25, 2022.


Investigative Leads in Latent Prints: A Comparison of Laboratory Procedures

Published: 2022 | By: Amanda Wilson

The following was presented at the 74th Annual Scientific Conference of the American Academy of Forensic Sciences (AAFS), Seattle, Washington, February 21-25, 2022.


Characterizing verification and blind proficiency testing at forensic laboratories

Published: 2022 | By: Maddisen Neuman

The 2014 Bureau of Justice survey of publicly funded forensic crime laboratories found that while 97% of the country’s 409 public forensic labs reported using some kind of proficiency testing, only 10% reported using blind tests (Burch et al., 2016).…


How do latent print examiners perceive blind proficiency testing? A survey of practicing examiners

Published: 2022 | By: Brett Gardner

The current study sought to explore perceptions of BPT among practicing latent print examiners and determine whether such beliefs varied between examiners who work for laboratories with and without BPT. Overall, opinions regarding the value of BPT to accurately assess…


Latent Print Quality in Blind Proficiency Testing: Using Quality Metrics to Examine Laboratory Performance

Published: 2021 | By: Brett Gardner

Presented at American Association of Forensic Sciences (AAFS) 2021


COMMUNITY CALL-TO-ACTION

Want to collaborate with CSAFE on a project? Contact us to share your idea.