Statistics

Overarching Goals

CSAFE is committed to leveraging statistical methods developed in one field of application for use in forensic science, as appropriate. Through their research, CSAFE professionals are assessing the reliability of categorical conclusions, investigating the properties of machine learning algorithms, and studying score-based likelihood ratios to inform multiple domains.

Hal S. Stern

Provost and Executive Vice Chancellor, Chancellor's Professor, and Co-Director of CSAFE

University of California, Irvine

Danica Ommen

Associate Professor

Iowa State University

Additional Team Members

Naomi Kaplan-Damry nkapland@uci.edu

Alicia Carriquiry alicia@iastate.edu

Heike Hofmann hofmann@iastate.edu

Steve Lund (NIST) steven.lund@nist.gov

Focus Areas

CSAFE researchers are using traditional logistic models to study the performance characteristics of individual examiners and individual examples, as well as aggregate performance characteristics for the population. The aim is to learn about the proficiency of individual examiners and about the population of examiners as a whole.

In many forensic science disciplines, especially those involving pattern comparisons, the most common approach to analyzing the evidence involves a series of binary or categorical decisions. For example, in latent print analysis an examiner initially decides whether the latent print contains enough information to support a formal comparison (a determination of value) or does not (no value; i.e., there is not enough information to perform the comparison). Assuming the print is of value, the examiner then reaches a final decision that is again expressed in categorical terms (e.g., identification, inconclusive, exclusion). There is currently considerable discussion about the role of likelihood ratios in the analysis of forensic evidence, and the ENFSI guidelines endorse this approach. Ongoing discussion about the next steps in forensic pattern evidence analysis in the United States, however, suggests maintaining the focus on categorical outcomes, perhaps with more possible outcomes allowed (a 5-point or larger scale). To date, evaluations of forensic examiners have focused primarily on binary decisions (did they correctly identify a pair of known matching items?). There is a need to develop statistical approaches to reliability and validity studies that use categorical scales.
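
To make "reliability on a categorical scale" concrete, here is a minimal sketch assuming two hypothetical examiners who rate the same set of comparisons on a three-category conclusion scale; it computes percent agreement and Cohen's kappa. The counts are invented for illustration, and kappa is only one of many possible reliability summaries, not a method prescribed by this project.

```python
# Minimal sketch: agreement between two hypothetical examiners on a
# three-category conclusion scale (identification / inconclusive / exclusion).
# The counts below are invented purely for illustration.
import numpy as np

# Hypothetical contingency table: rows = examiner A, columns = examiner B.
counts = np.array([
    [40,  6,  1],   # A said "identification"
    [ 5, 20,  4],   # A said "inconclusive"
    [ 2,  3, 19],   # A said "exclusion"
], dtype=float)

n = counts.sum()
observed_agreement = np.trace(counts) / n

# Agreement expected by chance, from the marginal category proportions.
row_marg = counts.sum(axis=1) / n
col_marg = counts.sum(axis=0) / n
expected_agreement = float(np.sum(row_marg * col_marg))

# Cohen's kappa: agreement beyond what the marginals alone would predict.
kappa = (observed_agreement - expected_agreement) / (1.0 - expected_agreement)
print(f"observed agreement = {observed_agreement:.3f}, kappa = {kappa:.3f}")
```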

The presumed setup for this research project is that data have been collected from a number of forensic science examiners on a number of cases or examples. For each examiner–example pair we have the outcome of the analysis (e.g., determination of value, conclusion with respect to source) on a categorical scale. Data may also be available on characteristics of the examiners and characteristics of the examples. As a starting point, the research will consider analyses treating each category as a binary response. In the latent print case, for example, this corresponds to studying the probability of a VID (value for identification) decision (yes/no) and assessing variation in the decision-making process across examiners and examples. This can be done with traditional logistic models or with the closely related item response theory models used in educational testing. Such models provide information about the performance characteristics of individual examiners (and individual examples) as well as aggregate performance characteristics for the population. The next stage of the analysis will consider generalizations of these models to handle multiple-category variables, focusing on multinomial models, including those developed by positing underlying latent continuous variables. The aim of these models, like those described above, is to learn about the proficiency of individual examiners and about the population of examiners.
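
As a rough sketch of this binary-response starting point, the example below simulates VID decisions from a Rasch-style item response model (examiner proficiency minus example difficulty on the logit scale) and then recovers examiner and example effects with an ordinary logistic regression. The sample sizes, parameter values, and use of statsmodels are assumptions made for illustration, not details of the CSAFE study design.

```python
# Sketch: simulate binary "value for identification" (VID) decisions from a
# Rasch-style model and fit a fixed-effects logistic regression.
# All quantities are simulated; nothing here is real examiner data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_examiners, n_examples = 30, 40

proficiency = rng.normal(0.0, 1.0, size=n_examiners)  # examiner effect a_i
difficulty = rng.normal(0.0, 1.0, size=n_examples)    # example effect b_j

rows = []
for i in range(n_examiners):
    for j in range(n_examples):
        p_vid = 1.0 / (1.0 + np.exp(-(proficiency[i] - difficulty[j])))
        rows.append({"examiner": f"E{i:02d}", "example": f"X{j:02d}",
                     "vid": rng.binomial(1, p_vid)})
data = pd.DataFrame(rows)

# Fixed-effects logistic regression with examiner and example indicators;
# a mixed-effects or Bayesian item response formulation is the natural
# extension for learning about the population of examiners.
fit = smf.logit("vid ~ C(examiner) + C(example)", data=data).fit(disp=False)
print(fit.params.filter(like="examiner").head())  # estimated examiner effects
```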

Score-based likelihood ratios (SLRs) are becoming increasingly popular for analyzing impression and pattern evidence due to the inherent difficulties in computing Bayes Factors. Some researchers have argued against the use of SLRs within a Bayesian decision paradigm for philosophical reasons, often citing a lack of coherence. Additionally, these researchers might argue that SLRs don’t actually approximate a Bayes Factor, and worse still, there is no indication of how far an SLR may be from the corresponding Bayes Factor. Other researchers have argued that there is no issue with using score-based likelihood ratios in a Bayesian decision paradigm as long as that SLR is accompanied by a measure of calibration of the SLR system. Regardless of which viewpoint one takes, the fact remains that very little research has been published on whether or not SLRs have any validity for quantifying the value of forensic evidence. The primary goals of the proposed project are to (1) explore the strengths and weaknesses of SLRs for quantifying the value of evidence from a statistical perspective, (2) explore the strengths and weaknesses of SLRs from the perspective of forensic evidence interpretation, and (3) determine whether it is possible to develop a framework of evidence interpretation which exploits the strengths of SLRs for impression and pattern evidence. Many forensic science researchers and practitioners have a strong desire for quantitative results for impression and pattern evidence to bolster their “subjective” opinions. This project would greatly benefit the forensic science community by providing those who wish to use SLRs with a list of recognized strengths and weaknesses, with supporting reasons, as well as a framework for expressing conclusions regarding the SLR results.
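
For readers unfamiliar with the quantity being debated, here is a minimal sketch of how an SLR is typically formed: estimate the distribution of comparison scores under the same-source and different-source propositions, then evaluate the ratio of the two densities at the observed score. The score distributions below are simulated stand-ins rather than casework data, the kernel density estimator is only one possible choice, and the resulting SLR inherits all of the calibration concerns discussed above.

```python
# Sketch of a score-based likelihood ratio (SLR): ratio of the estimated
# same-source and different-source score densities at the observed score.
# Training scores are simulated stand-ins for real comparison scores.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)

# Hypothetical training scores: same-source comparisons tend to score higher.
same_source_scores = rng.normal(loc=0.8, scale=0.10, size=500)
diff_source_scores = rng.normal(loc=0.4, scale=0.15, size=500)

# Kernel density estimates of the two score distributions.
f_same = gaussian_kde(same_source_scores)
f_diff = gaussian_kde(diff_source_scores)

observed_score = 0.72  # score for the questioned-vs-known comparison at hand
slr = f_same(observed_score)[0] / f_diff(observed_score)[0]
print(f"SLR at score {observed_score}: {slr:.2f}")
```

How well such a ratio tracks the (usually unobtainable) Bayes Factor is exactly the open question the project addresses.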

Pattern and impression evidence results in data that is inherently high-dimensional and difficult to model statistically. Therefore, many researchers have focused instead on measuring the similarity between two objects; the comparison produces a low-dimensional score that is much easier to model. CSAFE researchers have relied on statistical machine learning algorithms to compute these scores. One difficulty with these methods is that pairwise comparison of all the evidential objects results in a set of dependent scores: any two scores that involve a common object are dependent. The difficulty lies in the fact that while machine learning methods do not impose distributional assumptions, most assume independence between the observations in the data. The primary goals of this project are to (1) explore the extent to which violating the assumption of independence affects the performance of the scoring methods and (2) develop machine learning methods for evaluating comparison scores for forensic evidence that can accommodate and/or adjust for the dependency in the data. The proposed research will impact the community by providing more statistically rigorous methods of computing score-based likelihood ratios for impression and pattern evidence. This project builds on the work achieved during the first five years in Project CC, "Statistical and Algorithmic Approaches to Matching Bullets," and Project EE, "Statistical and Algorithmic Approaches to Shoeprint Analysis," by critically evaluating the current methods for violations of assumptions and identifying potential corrections and improvements before these methods are deployed in crime labs.
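
The dependence issue can be illustrated with a small simulation, sketched below: each object gets a latent feature, a similarity score is computed for every pair, and the correlation between scores that share an object is compared with the correlation between scores built from disjoint objects. The feature, score function, and sample sizes are invented for illustration; this is not the bullet or shoeprint scoring pipeline referenced above.

```python
# Sketch: pairwise comparison scores that share an object are dependent,
# violating the independence assumption behind many ML methods.
# Objects, features, and the similarity score are invented for illustration.
from itertools import combinations
import numpy as np

rng = np.random.default_rng(2)
n_objects = 60
features = rng.normal(size=n_objects)  # one latent feature per object

# Similarity score for every unordered pair of objects.
score = {(i, j): -abs(features[i] - features[j])
         for i, j in combinations(range(n_objects), 2)}

def corr(score_pairs):
    """Correlation between the first and second score in each pair."""
    a = np.array([score[p] for p, _ in score_pairs])
    b = np.array([score[q] for _, q in score_pairs])
    return np.corrcoef(a, b)[0, 1]

# Sample score pairs that share one object vs. pairs with no object in common.
shared, disjoint = [], []
for _ in range(5000):
    i, j, k, l = (int(v) for v in rng.choice(n_objects, size=4, replace=False))
    shared.append((tuple(sorted((i, j))), tuple(sorted((i, k)))))
    disjoint.append((tuple(sorted((i, j))), tuple(sorted((k, l)))))

print(f"correlation, scores sharing an object: {corr(shared):.3f}")
print(f"correlation, scores with no shared object: {corr(disjoint):.3f}")
```

Under this setup the shared-object correlation is noticeably positive while the disjoint-object correlation is near zero, which is precisely the violation of independence the project targets.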

Knowledge Transfer

Thinking About Likelihood Ratios for Pattern Evidence

This CSAFE Center Wide Meeting webinar was presented by Hal Stern from the University of California, Irvine, on January 19, 2017. Description: The likelihood ratio has been proposed as a logical way to summarize forensic evidence. In pattern evidence disciplines, however, the…


Introduction to Statistical Thinking for Forensic Practitioners Presentation

All forensic practitioners, laboratory staff and crime lab guests are invited to take a mini-course on statistical thinking for forensic practitioners from CSAFE experts. West Palm Beach | May 5, 2016


COMMUNITY CALL-TO-ACTION

Want to collaborate with CSAFE on a project? Contact us to share your idea.