
Latent Print Analysis

Overarching Goals

Fingerprints have been used as evidence for decades, and their probative value has been reaffirmed in countless legal decisions. They remain the most common form of pattern evidence analysis. CSAFE research focuses on improving methods of latent print analysis and examination through investigation of quality metrics, the impact of image quality on examiner conclusions, the role of proficiency testing, and the influence of forensic processing in crime labs.


Karen Kafadar

Commonwealth Professor and Chair, Co-Director of CSAFE

University of Virginia


Robin Mejia

Director, Program in Statistics and Human Rights, Co-Director of CSAFE

Carnegie Mellon University


Sharon Kelley

Assistant Professor

University of Virginia


Brett Gardner

Postdoctoral Researcher

University of Virginia


Simon Cole

Professor

University of California, Irvine


Amanda Luby

Assistant Professor

Swarthmore College

Additional Team Members

Keith Inman, keith.inman@csueastbay.edu

Daniel Murrie, murrie@virginia.edu

Brandon Garrett, bgarrett@law.duke.edu

Robert Ramotowski (NIST), robert.ramotowski@nist.gov

Adele Quigley-McBride, adele.quigleymcbride@duke.edu

Focus Areas


Latent print examiners (LPEs) recognize the connection between “accuracy of call” and the number and quality of features used to assess the evidence. The first step of ACE-V requires a subjective assessment of print “quality.” SWGFAST developed a “sufficiency chart” showing contours for “poor,” “adequate,” and “very good” accuracy as a function of the number of minutiae in a latent fingerprint image and their “quality.” This chart was not based on data: “quality” was not defined, and the contours for the regions were based on “expert opinion.”

Phase 1 of this project developed a new quality metric and implemented it, along with two others, in computer code. The metrics are objective (the same input always yields the same quality score) and have been calculated on prints from two sources of data (NIST and the Houston Forensic Science Center, HFSC).
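To make “objective” concrete: a reproducible quality score can be computed from measurable features of the image, such as the number of minutiae and their local ridge clarity. The sketch below is purely hypothetical; the feature names, weighting, and saturation point are invented for illustration and are not the Phase 1 metric itself.

```python
from dataclasses import dataclass

@dataclass
class Minutia:
    x: float        # image coordinates
    y: float
    clarity: float  # local ridge clarity in [0, 1] (hypothetical measure)

def quality_score(minutiae: list[Minutia]) -> float:
    """Deterministic toy quality score: the same input always yields
    the same score, unlike a subjective examiner judgment."""
    if not minutiae:
        return 0.0
    mean_clarity = sum(m.clarity for m in minutiae) / len(minutiae)
    # Assume the count's contribution saturates at 12 minutiae
    # (an arbitrary choice for this sketch).
    count_term = min(len(minutiae), 12) / 12
    return count_term * mean_clarity

print(quality_score([Minutia(10, 20, 0.8), Minutia(30, 40, 0.6)]))  # 0.116...
```

Any metric of this deterministic form can then be correlated with examiner “accuracy of call” in Phase 2.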

Phase 2 aims to correlate these metrics with “accuracy of call” based on actual practice. Past studies of LPE accuracy have involved one step of the ACE-V process, with LPEs who knew they were participating in a study: compare two prints and decide whether the sources are the same, different, or inconclusive. In this study, we will run blind samples through actual lab processes, where the study participant is unaware that the print is a test case. The print may have a “hit” in the crime lab’s database, or it may not (in which case the correct conclusion is “no hit”). To ground the study in real-world practice, we start with the relatively large quality assurance program at HFSC, using their blind test cases as well as prints from subjects that have no match in any database.


Fingerprints remain the most common form of pattern evidence, and proficiency tests play a key role in the qualification of latent print examiners. Although proficiency tests are widely used in forensic science for training and procedural purposes (Koehler, 2013; AAAS, 2017), they are not being utilized to their full potential, as differences in difficulty across proficiency tests are not taken into account (Luby and Kadane, 2018). Furthermore, while error rate studies (such as the FBI “Black Box” [Ulery et al., 2011] and “White Box” [Ulery et al., 2014] studies) provide valuable estimates of an overall error rate aggregated across examiners, individual examiner error rates do not adjust for the fact that participants are randomly assigned different subsets of prints, some of which may contain more difficult comparisons than others. It is thus impossible to understand the variability of individual examiner performance from the raw error rates alone.

Item Response Theory (IRT), which is used extensively in educational testing, is one approach that measures both participant proficiency and item difficulty. CSAFE researchers have successfully applied straightforward IRT to proficiency tests for latent prints (Luby and Kadane, 2018) and the FBI “Black Box” study (Luby, 2019b), including extensions that do not require conclusions to be scored (Luby, Mazumder & Junker, 2020; Luby, 2019a). This project will expand upon previous work through the analysis of existing data (including the FBI “Black Box” and “White Box” studies and additional proficiency test results) and the development of more complex IRT models suited to forensic tasks (e.g., incorporating the “verification” step of the ACE-V process for latent print examination). Results from these analyses will be published in the statistics and forensic science literature and may lead to additional insights about the cognitive processes involved in pattern recognition by latent print examiners, as well as the variability of these processes within and across examiners. Open-source software for performing IRT analyses will be developed for use by proficiency test providers, forensic laboratories performing “in-house” proficiency tests, and other researchers. The project will culminate with the development and analysis of a pilot “performance” proficiency exam for latent print examiners that could be used to estimate error rates across different classes of comparisons and provide feedback to latent print examiners.
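The simplest IRT model, the Rasch model, captures the core idea: the probability that an examiner answers an item correctly depends on the difference between examiner proficiency (theta) and item difficulty (b). The sketch below is a minimal illustration on simulated data; it is not CSAFE’s software, and the models cited above are more elaborate.

```python
import numpy as np

rng = np.random.default_rng(7)
n_examiners, n_items = 50, 20

theta = rng.normal(0.0, 1.0, n_examiners)  # examiner proficiency
b = rng.normal(0.0, 1.0, n_items)          # item (comparison) difficulty

def p_correct(theta, b):
    """Rasch model: P(correct) = 1 / (1 + exp(-(theta - b)))."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

# Each examiner sees a random subset of items, mirroring studies in
# which participants are assigned different subsets of prints.
seen = rng.random((n_examiners, n_items)) < 0.5
probs = p_correct(theta[:, None], b[None, :])
correct = (rng.random((n_examiners, n_items)) < probs) & seen

# Raw error rates confound proficiency with the difficulty of the
# items each examiner happened to receive; fitting theta and b jointly
# (e.g., with Bayesian estimation) separates the two.
raw_error = 1.0 - correct.sum(axis=1) / seen.sum(axis=1)
print(raw_error[:5].round(2))
```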


Proficiency testing of analysts is widely accepted as a core component of quality assurance at testing laboratories and is a requirement for accreditation of forensic laboratories. In forensic disciplines, there are two primary types of proficiency tests: declared tests, in which analysts know they are being tested, generally with tests from a commercial vendor that do not replicate a full case; and blind tests, in which test cases are submitted as part of a laboratory’s regular case flow and analysts do not know which case is a test. 

This project will assess the types of proficiency tests used in forensic laboratories; assess analyst perceptions of declared proficiency tests; assess barriers to the implementation of blind proficiency tests and develop recommendations to address them; analyze the results of blind proficiency tests; and compare declared forensic proficiency testing programs to blind proficiency testing. Blind testing is widely used in other fields, but adoption has been slow at forensic laboratories, in part because cultural norms have not promoted blind testing and in part because it is far more logistically challenging to create blind tests for a wide range of forensic disciplines than, for example, to create blind drug tests. This project will address challenges to increasing the use of blind testing in forensics, support the implementation of blind testing at state, regional, and local laboratories, and assess the results of these programs.

In this project, we will work with Collaborative Testing Services (CTS) to expand on studies initiated in the first phase of CSAFE, and we will continue established collaborations with two laboratories, the Houston Forensic Science Center and the Allegheny County Office of the Medical Examiner. Additionally, we will build on new relationships initiated during the first phase of CSAFE to increase the use of blind proficiency testing at two additional laboratories. We will document these experiences in CSAFE white papers covering lessons learned and best practices for laboratories of a range of sizes. We will evaluate the results of these programs to provide data of use to the laboratories themselves and to better assess both the value of blind proficiency testing and the performance of latent print analysis (LPA) as practiced. We will also work with laboratories to develop a collaborative infrastructure to support the implementation of blinding in mid-sized, and possibly smaller, laboratories that lack the infrastructure of HFSC. Our goal is to help labs develop their own internal processes for blind proficiency testing rather than have CSAFE simply perform a one-time external blind test.


Although there has been an increasing amount of research on the validity and reliability of latent print comparisons (e.g., Ulery et al., 2011; 2012; Langenburg et al., 2012), the field has much less data regarding “field reliability,” or examiner agreement in routine, real-world practice. This research is particularly important given the growing body of studies indicating that latent print comparison can be influenced by contextual information (e.g., Langenburg, Champod, & Wertheim, 2009; Stevenage & Bennett, 2017).

Thus, the overarching goal of this project is to better understand, and ultimately improve, forensic processing in labs and to evaluate the field reliability of latent print comparison procedures. We will expand our current work with the Houston Forensic Science Center to include at least two other laboratories, allowing for both intra- and inter-laboratory comparisons and a more informed evaluation of typical practice and case flow in crime laboratories. We plan to continue this line of research by (1) providing descriptive examinations of laboratory procedures in at least three laboratories with contrasting policies, (2) examining the influence of contextual factors and case processing variables across different laboratories, (3) exploring the implementation of procedural changes where possible, such as the use of triage systems based on existing quality metrics (e.g., LQMetrics) or other indicators, and (4) exploring the financial and operational costs associated with procedural changes. We will also work to design larger experimental studies in which multiple laboratories process identical prints and their conclusions can be compared, furthering the goal of understanding the field reliability of latent print comparison and how different workflows affect outcomes. We hope this work will create an exemplar for using data to improve case processing and reliability, implementing quantitative metrics (e.g., objective fingerprint quality metrics) to improve accuracy and reliability, and evaluating procedural changes, including their financial impact on labs.
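As a concrete example of the triage idea in point (3), a lab could route incoming latent prints by an objective quality score before comparison. The sketch below is purely illustrative; the thresholds and routing categories are hypothetical and are not LQMetrics output or the policy of any laboratory.

```python
def triage(quality_score: float) -> str:
    """Route a latent print by a quality score in [0, 1].
    Thresholds are invented for illustration."""
    if quality_score < 0.3:
        return "low quality: flag as no-value / skip comparison"
    if quality_score < 0.6:
        return "medium quality: comparison plus mandatory verification"
    return "high quality: standard comparison workflow"

for q in (0.15, 0.45, 0.85):
    print(f"{q:.2f} -> {triage(q)}")
```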


One of the primary motivators for the reform of forensic science is the desire to avoid wrongful convictions. Forensic science has long been identified as one of several key contributors to wrongful convictions, but less is known about how, precisely, it contributes to them.

The National Registry of Exonerations is the largest and most authoritative public data set on exonerations in the United States. It is headquartered at the University of California, Irvine, and is a joint project of UCI, the University of Michigan, and Michigan State University. The project PI serves as its director and an associate editor.

Exonerations are not coterminous with wrongful convictions, but they are the best-known proxy for studying them. False or misleading forensic evidence is the fourth leading cause of wrongful conviction in the Registry’s data set. Over 500 cases are currently coded for this cause.

The research will better inform the forensic community about how and when forensic science can do the opposite of what it is intended to: corroborate, rather than refute, investigators’ suspicion of an innocent person.

 

Knowledge Transfer


Implementation of a Blind Quality Control Program in a Forensic Laboratory


Published: 2019 | By: Callan Hund

A blind quality control (QC) program was successfully developed and implemented in the Toxicology, Seized Drugs, Firearms, Latent Prints (Processing and Comparison), Forensic Biology, and Multimedia (Digital and Audio/Video) sections at the Houston Forensic Science Center (HFSC). The program was…

View on Digital Repository


CSAFE 2020 All Hands Meeting


The 2020 All Hands Meeting was held May 12 and 13, 2020. It served as the closing of the previous five years of CSAFE research and focused on kicking off new initiatives for the next phase of the center, CSAFE…


How do latent print examiners perceive proficiency testing? An analysis of examiner perceptions, performance, and print quality


Published: 2020 | By: Sharon Kelley

Proficiency testing has the potential to serve several important purposes for crime laboratories and forensic science disciplines. Scholars and other stakeholders, however, have criticized standard proficiency testing procedures since their implementation in laboratories across the United States. Specifically, many experts…

View on Digital Repository


Crime Lab Proficiency Testing and Quality Management


In the wake of recent reports documenting the vulnerability of forensic science methodologies to human error (e.g., NAS, 2009; PCAST, 2016), the field has sometimes pointed to proficiency testing as evidence of disciplines’ validity and/or reliability.  However, current proficiency procedures…


Processing Stamp Bags for Latent Prints: Impacts of Rubric Selection and Gray-Scaling on Experimental Results


Published: 2019 | By: B. Barnes

We report data on two open issues in our previous experimentation seeking an effective method for development of latent prints on glassine drug bags: (1) the choice of rubric to assess the quality of fingerprints and (2) the choice of…

View on Digital Repository


Latent Print Proficiency Testing: An Examination of Test Respondents, Test-Taking Procedures, and Test Characteristics.


Published: 2019 | By: Brett O. Gardner

Proficiency testing is a key component of quality assurance programs within crime laboratories and can help improve laboratory practices. However, current proficiency testing procedures contain significant limitations and can be misinterpreted by examiners and court personnel (Garrett & Mitchell, 2018).…

View on Digital Repository


Accounting for individual differences among decision-makers with applications to the evaluation of forensic evidence


Published: 2019 | By: Amanda Luby

Forensic science often involves the comparison of crime-scene evidence to a known-source sample to determine if the evidence arose from the same source as the reference sample. Common examples include determining if a fingerprint or DNA was left by a…

View on Digital Repository


Pattern Evidence Research in CSAFE – An Update


CSAFE is a NIST Center of Excellence in Forensic Science. A large portion of CSAFE’s research portfolio is on what is known as pattern evidence, which encompasses any evidence that can be represented as an image. Examples of pattern evidence…


Implementing Blind Proficiency Testing


This CSAFE Center Wide webinar was presented on July 18, 2019 by Dr. Robin Mejia, CSAFE researcher at Carnegie Mellon University. Dr. Mejia provided presentation slides. Presentation Description: Blind proficiency testing is a norm or requirement in many scientific fields.…


The Role of Statistics in Forensic Science


Published: 2019 | By: Karen Kafadar

This presentation examines statistical methods in science, statistical success in forensic science as seen in interpreting DNA evidence and statistics in forensic science post-facto. It also discusses where statistics can be used in forensic science such as trace & pattern…

View on Digital Repository


How do latent print examiners perceive proficiency testing? An analysis of examiner perceptions, performance, and print quality


Published: 2019 | By: Sharon Kelley

The goal of this presentation is to educate attendees on how latent print examiners view current proficiency testing items and how such views relate to more objective measures of proficiency tests, such as print quality metrics and examiner test performance.

View on Digital Repository


Certainty and Uncertainty in Reporting Fingerprint Evidence


Published: 2018 | By: Joseph Kadane

Everyone knows that fingerprint evidence can be extremely incriminating. What is less clear is whether the way that a fingerprint examiner describes that evidence influences the weight lay jurors assign to it. This essay describes an experiment testing how lay…

View on Digital Repository


A Discouraging Omen: A Critical Evaluation of the Approved Uniform Language for Testimony and Reports for the Forensic Latent Print Discipline


Published: 2018 | By: Simon Cole

The theme of the 2018 Georgia State University Law Review symposium is the Future of Forensic Science Reform. In this Article, I will assess the prospects for reform through a critical evaluation of a document published in February 2018 by…

View on Digital Repository


Resolving latent conflict: What happens when latent print examiners enter the cage?


Published: 2018 | By: Alicia Rairden

Latent print examination traditionally follows the ACE-V process, in which latent prints are first analyzed to determine whether they are suitable for comparison, and then compared to an exemplar and evaluated for similarities and differences. Despite standard operating procedures and…

View on Digital Repository


Proficiency Testing of Forensic Fingerprint Examiners with Bayesian Item Response Theory


Published: 2018 | By: Amanda S. Luby

In recent years, the forensic community has pushed to increase the scientific basis of forensic evidence, which has included proficiency testing for fingerprint analysts. We used proficiency testing data collected by Collaborative Testing Services in which 431 fingerprint analysts were…

View on Digital Repository


Fingerprint Science


Published: 2018 | By: Joseph B. Kadane

This paper examines the extent to which data support the source attributions made by fingerprint examiners. It challenges the assumption that each person’s fingerprints are unique, but finds that evidence of persistence of an individual’s fingerprints is better founded. The…

View on Digital Repository


Comparing Categorical and Probabilistic Fingerprint Evidence


Published: 2018 | By: Brandon Garrett

Fingerprint examiners traditionally express conclusions in categorical terms, opining that impressions do or do not originate from the same source. Recently, probabilistic conclusions have been proposed, with examiners estimating the probability of a match between recovered and known prints. This…

View on Digital Repository


Statistical modeling and analysis of trace element concentrations in forensic glass evidence


Published: 2018 | By: Karen D.H. Pan

The question of the validity of procedures used to analyze forensic evidence was raised many years ago by Stephen Fienberg, most notably when he chaired the National Academy of Sciences’ Committee that issued the report The Polygraph and Lie Detection…

View on Digital Repository


The Critical Role of Statistics in Demonstrating the Reliability of Expert Evidence


Published: 2018 | By: Karen Kafadar

Federal Rule of Evidence 702, which covers testimony by expert witnesses, allows a witness to testify “in the form of an opinion or otherwise” if “the testimony is based on sufficient facts or data” and “is the product of reliable…

View on Digital Repository


Latent Print Processing of Glassine Stamp Bags Containing Suspected Heroin: The Search for an Efficient and Safe Method


Published: 2018 | By: Brittany Barnes

A three-part study was designed to find the safest and most efficient method of processing glassine stamp bags containing suspected heroin while preserving the qualitative properties of the substance. Gravimetric analysis was also conducted to determine whether selected processing methods…

View on Digital Repository



Community Call to Action

Want to collaborate with CSAFE on a project? Contact us to share your idea.
