A Study on Improving Forensic Decision Making will be the Topic of CSAFE’s February Webinar

Figure 2 from the study shows sources of cognitive bias in sampling, observations, testing strategies, analysis and conclusions that affect even experts. These sources of bias are organized into a taxonomy of three categories: case-specific sources (Category A), individual-specific sources (Category B) and sources relating to human nature (Category C).

A new study that proposes a broad and versatile approach to strengthening expert decision making will be the focus of an upcoming Center for Statistics and Applications in Forensic Evidence (CSAFE) webinar.

The webinar, Improving Forensic Decision Making: A Human-Cognitive Perspective, will be held Thursday, Feb. 17 from 12–1 p.m. CST. It is free and open to the public.

Itiel Dror

During the webinar, Itiel Dror, a cognitive neuroscience researcher at University College London, will discuss his journal article, Linear Sequential Unmasking–Expanded (LSU-E): A general approach for improving decision making as well as minimizing noise and bias. The article was published in Forensic Science International: Synergy and co-authored by Jeff Kukucka, associate professor of psychology at Towson University.

In the article, the authors introduce Linear Sequential Unmasking–Expanded (LSU-E), an approach that can be applied to all forensic decisions and that reduces noise and improves decisions “by cognitively optimizing the sequence of information in a way that maximizes information utility and thereby produces better and more reliable decisions.”

From the Abstract:

In this paper, we draw upon classic cognitive and psychological research on factors that influence and underpin expert decision making to propose a broad and versatile approach to strengthening expert decision making. Experts from all domains should first form an initial impression based solely on the raw data/evidence, devoid of any reference material or context, even if relevant. Only thereafter can they consider what other information they should receive and in what order based on its objectivity, relevance, and biasing power. It is furthermore essential to transparently document the impact and role of the various pieces of information on the decision making process. As a result of using LSU-E, decisions will not only be more transparent and less noisy, but it will also make sure that the contributions of different pieces of information are justified by, and proportional to, their strength.

To register for the February webinar, visit https://forensicstats.org/events/.

The CSAFE Spring 2022 Webinar Series is sponsored by the National Institute of Standards and Technology (NIST) through cooperative agreement 70NANB20H019.

Insights: Handwriting Identification Using Random Forests and Score-Based Likelihood Ratios

OVERVIEW

Handwriting analysis has long been a largely subjective field of study, relying on visual inspection by trained examiners to determine whether questioned documents come from the same source. In recent years, however, efforts have been made to develop methods and software that quantify the similarity between writing samples more objectively. Researchers funded by CSAFE developed and tested a new statistical method for handwriting identification, using a score-based likelihood ratio (SLR) system to determine the evidential value.

Lead Researchers

Madeline Quinn Johnson
Danica M. Ommen

Journal

Statistical Analysis and Data Mining

Publication Date

03 December 2021

Publication Number

IN 124 HW

The Goals

1. Apply the SLR system to various handwritten documents.

2. Evaluate the system’s performance with various approaches to the data.

The Study

CSAFE collected handwriting samples from 90 participants, using prompts of various lengths to get samples of different sizes. These writing samples were broken down into graphs, or writing segments with nodes and connecting edges, then grouped into clusters for comparison.

When comparing the gathered samples, Johnson and Ommen considered two possible scenarios:

Common Source Scenario: two questioned documents with unknown writers are compared to determine whether they come from the same source.

Specific Source Scenario: a questioned document is compared to a prepared sample from a known writer.

They then used Score-based Likelihood Ratios (SLRs) to approximate the weight of the evidence in both types of scenarios.
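The core idea of a score-based likelihood ratio can be sketched in a few lines: compare the density of an observed similarity score under a same-source model against its density under a different-source model. The following is a hypothetical illustration only, not the authors' implementation; the beta-distributed scores and the kernel density estimates are assumptions made for the example.

```python
# Hedged sketch of a score-based likelihood ratio (SLR). Assumes similarity
# scores in [0, 1]; the score distributions here are simulated, not real data.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)

# Simulated comparison scores: same-source pairs tend to score higher.
same_source_scores = rng.beta(8, 2, size=500)
diff_source_scores = rng.beta(2, 8, size=500)

# Kernel density estimates of each score distribution.
f_same = gaussian_kde(same_source_scores)
f_diff = gaussian_kde(diff_source_scores)

def slr(score: float) -> float:
    """SLR = density of the observed score under the same-source model
    divided by its density under the different-source model."""
    return float(f_same(score) / f_diff(score))

print(slr(0.9))  # much greater than 1: evidence favors a common source
print(slr(0.1))  # much less than 1: evidence favors different sources
```

An SLR well above 1 supports the same-source proposition; well below 1, the different-source proposition. The three anchoring approaches described next differ only in which comparisons supply the different-source scores.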

The researchers used three different approaches when generating the known non-matching comparisons for the specific source SLRs:

Trace-Anchored Approach: only uses comparisons between the questioned document (the trace) and a collection of writers different from the specific source (the background population).

Source-Anchored Approach: only uses comparisons between writing from the specific source and the background population.

General-Match Approach: only uses comparisons between samples from different writers in the background population.

To produce the comparison scores underlying the SLRs in each scenario, the researchers used random forest algorithms: a pre-trained random forest using all of the gathered data, and one trained according to the relevant SLR scenario.
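A minimal sketch of how a random forest can serve as the comparison-score function is shown below. The Dirichlet/multinomial simulation of cluster-frequency vectors and the absolute-difference pair features are assumptions made for illustration; they are not the researchers' actual pipeline.

```python
# Hedged sketch: a random forest turns a pair of cluster-frequency vectors
# (one per document) into a similarity score, taken here as the forest's
# predicted probability that the pair is same-source.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

n_clusters = 10  # hypothetical number of writing-segment clusters

def writer_docs(writer_seed, n_docs=2):
    """Simulate cluster-frequency vectors for documents by one writer."""
    r = np.random.default_rng(writer_seed)
    p = r.dirichlet(np.ones(n_clusters) * 2)  # the writer's "style"
    return [r.multinomial(100, p) / 100.0 for _ in range(n_docs)]

# Labeled training pairs: 1 = same writer, 0 = different writers.
X, y = [], []
for i in range(100):
    a, b = writer_docs(i)
    X.append(np.abs(a - b)); y.append(1)           # same-writer pair
    c, d = writer_docs(i)[0], writer_docs(i + 500)[0]
    X.append(np.abs(c - d)); y.append(0)           # different-writer pair

forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Comparison score for a new pair of documents.
q1, q2 = writer_docs(777)
score = forest.predict_proba(np.abs(q1 - q2).reshape(1, -1))[0, 1]
print(round(score, 3))  # a similarity score in [0, 1]
```

In an SLR system like the one described above, such scores would be computed for many known same-source and different-source pairs, and the resulting score distributions would feed the likelihood ratio.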

Results

1. In common source scenarios, the trained random forest performed well with longer writing samples but struggled with shorter ones.

2. The specific source SLRs performed better than the common source SLRs because they are tailored to the case at hand.

3. In all scenarios, it was more difficult for the SLR system to correctly identify same-source samples than different-source samples.

FOCUS ON THE FUTURE

The SLRs do not perform well with short documents, possibly due to a mismatch between the number of clusters used and the length of the document. Future work could determine the optimal number of clusters based on the document’s length.

Because the SLRs measure the strength of forensic handwriting evidence for an open set of sources, this approach improves on the previous clustering method developed by CSAFE, which assumed a closed set of known sources.