Insights: Surveying Practicing Firearms Examiners

INSIGHTS

Surveying Practicing Firearms Examiners

OVERVIEW

In recent years, there has been much discussion and debate regarding firearm examiner testimony, but little is known empirically about the actual practices of firearm examiners in conducting casework. Researchers funded by CSAFE conducted a survey of practicing firearms examiners to better understand the conditions of their casework, as well as their views of the field and its relationship with other parts of the justice system.

Lead Researchers

Nicholas Scurich
Brandon L. Garrett
Robert M. Thompson

Journal

Forensic Science International: Synergy

Publication Date

2022

Publication Number

IN 127 IMPL

Goals

1

Gain insights into the day-to-day casework and lab procedures of firearms examiners

2

Find what examiners believe impacts the quality of their work

3

Learn the examiners’ opinions on statistical models and new technology, and on misunderstandings of their profession by judges and jurors

The Study

Scurich et al. posted a survey on the Association of Firearm and Tool Mark Examiners (AFTE) member forum from July to November of 2020. Participation was voluntary, anonymous, and uncompensated. A total of 79 AFTE members provided usable responses.

The survey asked about several topics, including day-to-day casework and lab procedures, factors examiners believe affect the quality of their work, and their views of statistical models, new technology, and how their profession is understood by judges and jurors.

Results

Survey charts: response frequencies for “What percentage of your cases result in an identification conclusion?” and “What percentage of your cases result in an exclusion/elimination conclusion?”, comparing the percentage of identification results vs. exclusion/elimination results.

The charts above summarize, from the 79 usable responses, how often examiners’ cases end in identification versus exclusion/elimination conclusions.

Focus on the future


Further work should explore the impacts of lab policies and evidence submission practices on examiners.

New training and educational opportunities—for both firearm examiners and the consumers of firearm examiner testimony—could provide benefits and promote clearer understanding of the strengths and limitations of firearm examination.

Insights: The Effect of Image Descriptors on the Performance of Classifiers of Footwear Outsole Image Pairs

INSIGHTS

The Effect of Image Descriptors on the Performance of Classifiers of Footwear Outsole Image Pairs

OVERVIEW

Shoe prints left at a crime scene are often only partially visible, smudged, or subject to background effects such as dirt or snow, which can make comparing prints to a reference image challenging. Similarly, prints from the same shoe can vary depending on the wearer’s gait, weight, and activity at the time of impression. Reliable, qualitative methods have yet to be developed for visually assessing the similarity between impressions. To help develop such methods, researchers funded by CSAFE created an algorithm that extracts image descriptors (well-defined groups of pixels), then tested the algorithm by comparing simulated crime scene images to a study database.

Lead Researchers

Soyoung Park 
Alicia Carriquiry

Journal

Forensic Science International

Publication Date

February 2022

Publication Number

IN 128 FW

The Goals

1

Develop a quantitative method for comparing shoe print images.

2

Test this method’s performance against an existing “standard” method to quantify similarity between two images.

The Study

Park and Carriquiry created a study database of impression images using 48 pairs of shoes that had been worn by volunteers for six months. They then scanned the shoe prints, placing 0 to 10 sheets of paper between the shoe and the scanner to simulate increasing levels of degradation. In all, the researchers obtained 864 reference images and formed 1,728 image pairs to compare, half of which were mated (coming from the same shoe) and half non-mated.

Meanwhile, the researchers developed an algorithm to compare these pairs using image descriptors, which identify distinct groups of pixels in an image such as corners, lines and blobs. In particular, they used the SURF and KAZE descriptors to identify blobs, and the ORB descriptor to identify corners.
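To make the descriptor-matching idea concrete, here is a minimal sketch using OpenCV; it is not the authors’ pipeline, and the file names and the 0.75 ratio threshold are illustrative. KAZE and ORB ship with the opencv-python package, while SURF is patented and lives in opencv-contrib (cv2.xfeatures2d), so the sketch uses KAZE and ORB.

    import cv2

    def similarity_score(path_a, path_b, detector, norm):
        """Count distinctive descriptor matches between two outsole images."""
        img_a = cv2.imread(path_a, cv2.IMREAD_GRAYSCALE)
        img_b = cv2.imread(path_b, cv2.IMREAD_GRAYSCALE)

        # Detect keypoints (corners/blobs) and compute their descriptors.
        _, des_a = detector.detectAndCompute(img_a, None)
        _, des_b = detector.detectAndCompute(img_b, None)
        if des_a is None or des_b is None:
            return 0

        # Brute-force matching with Lowe's ratio test, keeping only matches
        # that are clearly better than the runner-up candidate.
        matches = cv2.BFMatcher(norm).knnMatch(des_a, des_b, k=2)
        good = [p for p in matches
                if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]
        return len(good)

    # KAZE finds blobs (float descriptors, L2 norm); ORB finds corners
    # (binary descriptors, Hamming norm).
    blob_score = similarity_score("q.png", "k.png", cv2.KAZE_create(), cv2.NORM_L2)
    corner_score = similarity_score("q.png", "k.png", cv2.ORB_create(), cv2.NORM_HAMMING)

Counting ratio-test survivors is one simple way to turn descriptor matches into a similarity score; the study’s models combined several descriptors at once.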

A mated pair of images, scanned at level 0 and level 10 degradation

Using six different combinations of descriptors, the researchers ran their comparisons to determine which model offered the best balance of accuracy and computational efficiency, both of which are required in real-world situations. As a control, they compared the descriptor-based methods against a previously proposed method called Phase-Only Correlation (POC).

SURF (Speeded-Up Robust Features): a descriptor which uses a box filter on integral images

KAZE: meaning “wind” in Japanese, the name refers to the descriptor’s use of nonlinear diffusion filtering

ORB (Oriented FAST and Rotated BRIEF): a combination of two extraction methods, FAST (Features from Accelerated Segment Test) and BRIEF (Binary Robust Independent Elementary Features)

Results

Chart: results at degradation level 10

1

All tested models showed promise, with good-quality images reaching accuracies of 95% or better and even blurry images achieving 85% to 88%.

2

The models that relied on the SURF and KAZE descriptors outperformed those that relied on ORB.

3

In comparison, the POC model failed to differentiate between mated and non-mated pairs.

Focus on the future


There is a lack of large databases with realistic footwear impressions. A larger database, with different brands and models of shoes, may help develop more robust algorithms for wider use.

Algorithms will likely never replace well-trained examiners, but the more accurate and efficient these algorithms become, the more useful they can be to examiners in their work.

Insights: Score-Based Likelihood Ratios for Camera Device Identification

INSIGHTS

Score-Based Likelihood Ratios for Camera Device Identification

OVERVIEW

In the developing field of digital image forensics, it is important to be able to identify cameras and other digital devices involved in crimes. However, current camera identification methods fail to quantify the strength of evidence, making it challenging for such evidence to withstand scrutiny in courts. Researchers funded by CSAFE propose using Score-Based Likelihood Ratios to quantify the weight of evidence in digital camera identification.

Lead Researchers

Stephanie Reinders, PhD
Yong Guan, PhD
Danica Ommen, PhD
Jennifer Newman, PhD

Journal

Journal of Forensic Sciences

Publication Date

6 February 2022

Publication Number

IN 126 STAT

Goals

1

Create Score-Based Likelihood Ratios (SLRs) to evaluate the strength of camera identification evidence

2

Compare different SLR models to determine which is the most accurate

The Study

All cameras have small manufacturing imperfections that cause slight variations among pixels in the camera sensor array. These imperfections are known as Photo-Response Non-Uniformities (PRNUs), which create a sort of “camera fingerprint” on images taken with that camera. These PRNUs can be used to identify the device used for a questioned image.
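The core computation behind such a similarity score can be sketched in a few lines, under loud simplifying assumptions: a Gaussian blur stands in for the wavelet denoising used in practice, and the fingerprint K is estimated as the average residual of images known to come from the candidate camera.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def noise_residual(img):
        """PRNU-style residual: the image minus a denoised version of itself."""
        img = img.astype(float)
        return img - gaussian_filter(img, sigma=2)

    def fingerprint(known_images):
        """Estimate the camera fingerprint K from same-camera images."""
        return np.mean([noise_residual(im) for im in known_images], axis=0)

    def similarity(questioned, K):
        """Normalized correlation between the questioned residual and K."""
        r = noise_residual(questioned).ravel()
        k = K.ravel()
        r = r - r.mean()
        k = k - k.mean()
        return float(r @ k / (np.linalg.norm(r) * np.linalg.norm(k)))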

Reinders et al. used a dataset of 4,800 images from a total of 48 known camera devices. They then calculated a similarity score (notated as Δ) between questioned images (Q) and the PRNUs (K) of each camera.

From this, they constructed three different SLRs, each meant to weigh the likelihood that a questioned image Q and the PRNU K of a Person of Interest’s camera come from the same camera (hypothesis Hp) against the likelihood that Q and K come from different cameras (hypothesis Hd). A sketch of one variant follows the definitions below.

The three constructed SLR equations

Trace-Anchored SLR: Considers similarity scores between a questioned sample of evidence and samples from the alternative population

Source-Anchored SLR: Considers similarity scores between samples from a specific known source and samples from the alternative population

General Match SLR: Considers similarity scores between samples from randomly selected sources
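Putting the pieces together, here is a minimal sketch of the trace-anchored variant, assuming precomputed similarity scores (for example, from the PRNU sketch above); the authors’ density estimation and anchoring details may differ.

    import numpy as np
    from scipy.stats import gaussian_kde

    def trace_anchored_slr(observed_score, hp_scores, hd_scores):
        """SLR = density of the observed score under Hp over its density under Hd.

        hp_scores: scores behaving like same-camera comparisons (Hp).
        hd_scores: scores between the trace Q and cameras from the
                   alternative population (the trace-anchored ingredient, Hd).
        """
        f_p = gaussian_kde(hp_scores)
        f_d = gaussian_kde(hd_scores)
        return float(f_p(observed_score)[0] / f_d(observed_score)[0])

An SLR well above 1 supports Hp (same camera); an SLR well below 1 supports Hd.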

RESULTS

Focus on the future


The data used in this study formed a closed set: all images came from the same 26 known devices and were RAW, center-cropped, auto-exposure, and in landscape orientation. Future studies may include an open set, with a larger variety of devices and image types, which may yield different results.

Several researchers have employed an “Inconclusive Zone” that does not result in a definitive match or non-match. This could be included in future studies and, if used in court, could place more of the burden of proof on the prosecution and give greater benefit of the doubt to the defense.

Insights: A Practical Tool for Information Management in Forensic Decisions

INSIGHTS

A Practical Tool for Information Management in Forensic Decisions:

Using Linear Sequential Unmasking-Expanded (LSU-E) in Casework

OVERVIEW

While forensic analysts strive to make their findings as accurate and objective as possible, they are often subject to external and internal factors that might bias their decision making. Researchers funded by CSAFE created a practical tool that laboratories can use to implement Linear Sequential Unmasking-Expanded (LSU-E; Dror & Kukucka, 2021)—an information management framework that analysts can use to guide their evaluation of the information available to them. LSU-E can improve decision quality and reduce bias but, until now, laboratories and analysts have received little concrete guidance to aid implementation efforts.

Lead Researchers

Adele Quigley-McBride
Itiel E. Dror
Tiffany Roy
Brandon L. Garrett
Jeff Kukucka

Journal

Forensic Science International: Synergy

Publication Date

17 January 2022

Publication Number

IN 125 IMPL

Goals

1

Identify factors that can bias decision-making.

2

Describe how LSU-E can improve forensic decision processes and conclusions.

3

Present a practical worksheet, as well as examples and training materials, to help laboratories incorporate LSU-E into their casework.

TYPES OF COGNITIVE BIAS

Cognitive biases can emerge from a variety of sources, including:

Figure 1. Eight sources of cognitive bias in forensic science (Dror, 2020)

COGNITIVE BIAS IN FORENSIC SCIENCE

As shown in Figure 1, there are many potential sources of information that can influence analysts’ decisions. Of particular concern is suggestive, task-irrelevant contextual information (such as a suspect’s race, sex, or prior criminal record) that can inappropriately bias analysts’ conclusions.

In one famous example, FBI latent print analysts concluded with “100 percent certainty” that a print linked to the 2004 Madrid train bombing belonged to a US lawyer, Brandon Mayfield. It transpired that these analysts were all wrong—that was not Mayfield’s print. Mayfield was Muslim, which might have biased the analysts given the strong, widespread attitudes towards Muslims post-9/11. Also, Mayfield was on the FBI’s “watch list” because he had provided legal representation to someone accused of terrorist activities. Combined, these facts led to confirmation bias effects in the analysts’ evaluations and conclusions about Mayfield’s fingerprints.

LSU-E AND INFORMATION MANAGEMENT

LSU-E is an approach to information management which prioritizes case information based on three main criteria:

Biasing power:

How strongly the information might dispose an analyst to a particular conclusion.

Objectivity:

The extent to which the information might be interpreted to have different “meanings” from one analyst to another.

Relevance:

The degree to which the information is essential to the analytic task itself.

IMPLEMENTING LSU-E IN FORENSICS

Quigley-McBride et al. have created a practical worksheet for laboratories to use when assessing new information.

1

First, the user specifies the information in question and its source.

2

Second, they consider the three LSU-E criteria and rate the information on a scale of 1–5 for each criterion.

3

Finally, they describe strategies to minimize any adverse effects the information may have on the decision-making process (see the sketch below).
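A hypothetical rendering of a worksheet entry as code, to make the three-step flow concrete; the field names and the flagging rule are illustrative, not part of the published worksheet.

    from dataclasses import dataclass

    @dataclass
    class WorksheetEntry:
        information: str      # step 1: the information in question
        source: str           # step 1: where it came from
        biasing_power: int    # step 2: each criterion rated 1-5
        objectivity: int
        relevance: int
        mitigation: str = ""  # step 3: strategy to minimize adverse effects

        def needs_attention(self) -> bool:
            # Illustrative rule: high biasing power combined with low
            # relevance is the classic profile of task-irrelevant,
            # potentially biasing information.
            return self.biasing_power >= 4 and self.relevance <= 2

    entry = WorksheetEntry(
        information="Suspect has a prior conviction",
        source="Submitting detective's notes",
        biasing_power=5, objectivity=2, relevance=1,
        mitigation="Withhold from analyst until after initial examination",
    )
    assert entry.needs_attention()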

Focus on the future


Ideally, LSU-E procedures would be applied before the information reaches the analyst. That said, it is still effective when used at any point in the analyst’s workflow and can help analysts become aware of information that can inappropriately influence their work.

In addition to benefits for analysts, implementing LSU-E could help jurors evaluate the reliability of forensic expert testimony. This would not only encourage healthy skepticism among jurors, but could bolster an expert’s credibility by providing documentation of methods used to evaluate and mitigate potential biases in their decisions.

Insights: Handwriting Identification Using Random Forests and Score-Based Likelihood Ratios

INSIGHTS

Handwriting Identification Using Random Forests and Score-Based Likelihood Ratios

OVERVIEW

Handwriting analysis has long been a largely subjective field of study, relying on visual inspections from trained examiners to determine if questioned documents come from the same source. In recent years, however, efforts have been made to develop methods and software which quantify the similarity between writing samples more objectively. Researchers funded by CSAFE developed and tested a new statistical method for handwriting identification, using a score-based likelihood ratio (SLR) system to determine the evidential value.

Lead Researchers

Madeline Quinn Johnson
Danica M. Ommen

Journal

Statistical Analysis and Data Mining

Publication Date

03 December 2021

Publication Number

IN 124 HW

The Goals

1

Apply the SLR system to various handwritten documents.

2

Evaluate the system’s performance with various approaches to the data.

The Study

CSAFE collected handwriting samples from 90 participants, using prompts of various lengths to get samples of different sizes. These writing samples were broken down into graphs, or writing segments with nodes and connecting edges, then grouped into clusters for comparison.

When comparing the gathered samples, Johnson and Ommen considered two possible scenarios:

Common Source Scenario:

two questioned documents with unknown writers are compared to determine whether they come from the same source.

Specific Source Scenario:

a questioned document is compared to a prepared sample from a known writer.

They then used Score-based Likelihood Ratios (SLRs) to approximate the weight of the evidence in both types of scenarios.

The researchers used three different approaches when generating the known non-matching comparisons for the specific source SLRs:

Trace-Anchored Approach:

only uses comparisons between the questioned document (the trace) and a collection of writers different from the specific source (the background population).

Source-Anchored Approach:

only uses comparisons between writing from the specific source and the background population.

General-Match Approach:

only uses comparisons between samples from different writers in the background population.

To generate the comparison scores underlying each SLR, Johnson and Ommen used random forest algorithms, including a random forest pre-trained on all of the gathered data and one trained according to the relevant SLR approach.
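As a rough illustration of the scoring step, the sketch below uses scikit-learn’s random forest in place of the authors’ implementation; the feature construction (absolute differences between per-document cluster profiles) and all data here are hypothetical.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)

    # Hypothetical training data: absolute differences between the cluster
    # profiles of document pairs, labeled 1 = same writer, 0 = different.
    X_train = rng.random((200, 8))
    y_train = rng.integers(0, 2, 200)

    rf = RandomForestClassifier(n_estimators=100, random_state=0)
    rf.fit(X_train, y_train)

    def comparison_score(profile_a, profile_b):
        """RF-estimated probability that two cluster profiles share a writer."""
        diff = np.abs(profile_a - profile_b).reshape(1, -1)
        return float(rf.predict_proba(diff)[0, 1])

Scores like these can then feed a kernel-density SLR of the kind described above, with the known-matching and known-non-matching score sets chosen per the trace-anchored, source-anchored, or general-match approach.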

Results

1

In common source scenarios, the trained random forest performed well with longer writing samples, but struggled with shorter ones.

2

The specific source SLRs performed better than the common source SLRs because they are tailored to the case at hand.

3

In all scenarios, it was more difficult for the SLR system to confirm samples with the same source than with different sources.

FOCUS ON THE FUTURE


The SLRs do not perform well with short documents, possibly due to a mismatch between the number of clusters used and the length of the document. Future work could determine the optimal number of clusters based on the document’s length.

Because the SLRs provide data on the strength of forensic handwriting evidence for an open set of sources, this approach is an improvement on the previous clustering method developed by CSAFE, which used a closed set of known sources.

Insights: Using the Likelihood Ratio in Bloodstain Pattern Analysis

INSIGHTS

Using the Likelihood Ratio in Bloodstain Pattern Analysis

OVERVIEW

Using likelihood ratios (LRs) when reporting forensic evidence in court has significant advantages, as it allows forensic practitioners to consider their findings from the perspective of both the defense and the prosecution. However, despite many organizations adopting or recommending this practice, most experts in the field of bloodstain pattern analysis (BPA) still use a more traditional, subjective approach, indicating whether their findings are “consistent with” stated allegations. Researchers funded by CSAFE explored the challenges that come with using LRs when reporting BPA evidence and proposed possible solutions, concluding that the LR framework is applicable to BPA but that applying it is a complex task.

Lead Researchers

Daniel Attinger
Kris De Brabanter
Christophe Champod

Journal

Journal of Forensic Sciences

Publication Date

29 October 2021

Publication Number

IN 123 BPA

Goals

1

Determine why many BPA experts do not use LRs in their reporting

2

Present directions the community could take to facilitate the use of LRs

3

Provide an example of how LRs are applied in a relevant field

CHALLENGES OF USING LIKELIHOOD RATIOS

Likelihood ratios (LRs) compare two competing hypotheses to see which better fits the evidence. While this practice has several advantages for use in court, as it provides a more objective and transparent view of an expert’s findings, there are challenges when it comes to applying LRs to bloodstain pattern analysis.

Graph displaying factors that can affect the complexity of BPA

Attinger et al. identified two key factors affecting a likelihood ratio’s complexity, shown in the graph above.

This is further complicated by the nature of bloodstain pattern analysis itself. BPA focuses on questions of activity (how far, how long ago, or in what direction the blood traveled) or of the type of activity (what caused the blood pattern), rather than the questions of source addressed by most forensic LR models. In addition, BPA as a field encompasses a wide range of methods and is a complex science that is still being built.

EXAMPLE OF LIKELIHOOD RATIOS IN ACTION

A recent study demonstrated how LRs could be used in BPA by applying them to the related field of fluid dynamics. In that test, researchers compared the drying time of a blood pool in a laboratory setting with that of a pool observed in photographs.

From this comparison, they created a physical model factoring in time, the scale and shape of the blood pool, and the surface on which the pool formed. The model’s output could then be fed into a likelihood ratio comparing propositions from the prosecution and the defense.

In this instance, the defense’s proposition would be 2330 times more likely than the prosecution’s.
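Read as a likelihood ratio (our notation, not the paper’s: E is the drying-time evidence, Hp the prosecution’s proposition, Hd the defense’s), that statement corresponds to a ratio below one:

    \[
      \mathrm{LR} = \frac{\Pr(E \mid H_p)}{\Pr(E \mid H_d)} \approx \frac{1}{2330}
    \]

That is, the observed drying evidence is about 2330 times more probable under the defense’s proposition than under the prosecution’s.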

Focus on the future

Attinger et al. propose three directions to facilitate the use of LRs in the field of BPA:


Promote education and research to better understand the physics of fluid dynamics and how they relate to BPA

Create public databases of BPA patterns, and promote a culture of data sharing and peer review

Develop BPA training material that discusses LRs and their foundations

Insights: Latent Print Quality in Blind Proficiency Testing

INSIGHTS

Latent Print Quality in Blind Proficiency Testing:

Using Quality Metrics to Examine Laboratory Performance

OVERVIEW

Organizations and scientific leaders have repeatedly called for blind proficiency testing in modern forensic labs because it more accurately simulates routine examiner casework. In response, researchers funded by CSAFE worked with the Houston Forensic Science Center to assess the results of its blind quality control program and then applied quality metrics to the test materials to see if the quality of the prints impacted examiners’ conclusions.

Lead Researchers

Brett O. Gardner
Maddisen Neuman
Sharon Kelley

Journal

Forensic Science International

Publication Date

May 7, 2021

Publication Number

IN 122 LP

THE GOALS

1

Examine the results of blind proficiency testing within a fingerprint examination unit of a crime laboratory.

2

Use available quality metrics software to measure the quality of the submitted prints.

3

See if there is an association between fingerprint quality and examiners’ conclusions.

The Studies

The Quality Division at the Houston Forensic Science Center inserted 376 prints into 144 blind test cases over a two-year period. In these cases, examiners determined if the prints were of sufficient quality to search in their Automated Fingerprint Identification System (AFIS). After searching AFIS for corresponding prints, they concluded whether or not the test prints were similar enough to make a Primary AFIS Association (PAA). Then, the Blind Quality Control (BQC) team judged the examiners’ accuracy.


Meanwhile, Gardner et al. entered the same test prints into LQMetrics, a commonly used software tool for fingerprint examiners that rates the quality of a print image on a scale of 0–100. The team labeled print images with a quality score above 65 as “Good,” 45–65 as “Bad,” and below 45 as “Ugly.”
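A restatement of that scoring rule as code; the thresholds come from the study, while the function name is ours.

    def quality_category(lqmetrics_score: float) -> str:
        """Map an LQMetrics quality score (0-100) to the study's categories."""
        if lqmetrics_score > 65:
            return "Good"
        return "Bad" if lqmetrics_score >= 45 else "Ugly"

    assert quality_category(70) == "Good"
    assert quality_category(50) == "Bad"
    assert quality_category(30) == "Ugly"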

Results

Examiner Conclusions Among Good, Bad and Ugly Latent Prints

Print quality (as categorized by Good, Bad and Ugly) was significantly associated with examiner conclusions and ultimate accuracy. Note: N = 373 prints: 133 Good, 114 Bad, and 126 Ugly.

FOCUS ON THE FUTURE


The correct source for prints submitted to AFIS appeared in the top ten results only 41.7% of the time, lower than an estimated 53.4% based on the quality of such prints. This highlights the potential for blind proficiency testing to gauge the accuracy of the entire system, including AFIS.

This study only included prints that had the potential to be submitted to AFIS, dismissing images not labeled as latent prints. Future studies should include a full range of images to more closely reflect real-world casework.

Insights: Recognition of Overlapping Elliptical Objects in a Binary Image

INSIGHTS

Recognition of Overlapping Elliptical Objects in a Binary Image

OVERVIEW

A common objective in bloodstain pattern analysis is identifying the mechanism that produced the pattern, such as gunshots or blunt force impact. Existing image-based methods often ignore overlapping objects, which can limit the number of usable stains. Researchers funded by CSAFE established a novel technique for image analysis to provide more accurate data.

Lead Researchers

Tong Zou
Tianyu Pan
Michael Taylor
Hal Stern

Journal

Pattern Analysis and Applications

Publication Date

4 May 2021

Publication Number

IN 121 BPA

Goals

1

Develop a method to classify shapes in complex images.

2

Apply this method to data of different types including bloodstain patterns.

3

Compare the new method’s accuracy to existing methods.

Approach and Methodology

When analyzing bloodstain patterns, the individual stains may appear as clumps composed of overlapping objects (e.g., droplets). Zou et al. developed a new computational method that identifies the individual objects making up each clump. The method proceeds as follows (a simplified sketch appears after the steps):

1

Generate a large number of elliptical shapes that match the overall contours of the clump.

2

Use an empirical measure of fit to reduce the set of candidate ellipses.

3

Identify concave points in the clump’s contour and set up an optimization to determine the best fitting ellipses.
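A minimal sketch in the spirit of steps 1–3, using OpenCV rather than the authors’ DTECMA algorithm; the concavity test and splitting rule are simplified assumptions (real contours would typically be smoothed first).

    import cv2
    import numpy as np

    def fit_clump_ellipses(binary):
        """binary: uint8 image with objects as 255 on a 0 background."""
        contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_NONE)
        ellipses = []
        for cnt in contours:
            pts = cnt.reshape(-1, 2).astype(np.float32)
            n = len(pts)
            if n < 5:                    # cv2.fitEllipse needs >= 5 points
                continue

            def turn(i):
                # Cross product of successive edge vectors; its sign flips
                # where the contour switches between convex and concave.
                a, b, c = pts[i - 1], pts[i], pts[(i + 1) % n]
                return (b[0] - a[0]) * (c[1] - b[1]) - (b[1] - a[1]) * (c[0] - b[0])

            concave = [i for i in range(n) if turn(i) > 0]  # sign convention: assumption
            cuts = concave or [0]

            # Split the contour at the concave points; each resulting arc
            # should belong to a single underlying ellipse.
            for j, start in enumerate(cuts):
                end = cuts[(j + 1) % len(cuts)]
                arc = pts[start:end] if start < end else np.vstack((pts[start:], pts[:end]))
                if len(arc) >= 5:
                    ellipses.append(cv2.fitEllipse(arc))
        return ellipses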

Image Processing

Examples of ellipse fitting results for synthetic data. (a) Original binary image; (b) Ground truth; (c) DEFA model; (d) BB model; (e) DTECMA. The number of true ellipses increases from 2 (leftmost column) to 9 (rightmost column). Rows (c) and (d) are results from existing methods; row (e) gives results for Zou et al.’s DTECMA algorithm.

The researchers tested the method on a set of over 1,600 test images with overlapping shapes, emulating bloodstains (row a).

Study Results

  • Across four different metrics, the new approach outperformed existing approaches.
  • Existing methods struggled to correctly recognize shapes as the number of ellipses per image grew; only the new method maintained consistent accuracy.


Focus on the future


The new approach to identifying elliptical-shaped objects in complex images shows marked improvement over current methods. This is demonstrated using simulated data and biological data for which the underlying truth is known.

While these results are promising, there is currently no way to quantify the performance of these models for bloodstain pattern analysis. The paper shows that the new method seems to do well based on visual inspection.

The next stage of the research is to use the identified ellipses as summaries of the images that can be used to develop statistical methods for analyzing bloodstain patterns.

Insights: Mt. Everest— We Are Going to Lose Many

INSIGHTS

Mt. Everest—We Are Going to Lose Many:

A Survey of Fingerprint Examiners’ Attitudes Towards Probabilistic Reporting

OVERVIEW

Traditionally, forensic examiners tend to use categorical language in their reports, presenting evidence in broad terms such as “identification” or “exclusion.” There have been efforts in recent years to promote the use of more probabilistic language, but many examiners have expressed concerns about the proposed change.

Researchers funded by CSAFE surveyed fingerprint examiners to better understand how examiners feel about probabilistic reporting and to identify obstacles impeding its adoption.

Lead Researchers

H. Swofford
S. Cole 
V. King

Journal

Law, Probability, and Risk

Publication Date

7 April 2021

Publication Number

IN 120 IMPL

Goals

1

Learn what kind of language forensic examiners currently use when reporting evidence.

2

Gauge attitudes toward probabilistic reporting and the reasoning behind those attitudes.

3

Explore examiners’ understanding of probabilistic reporting.

The Study

Results

Only 10% of participants reported using probabilistic language.

Only 2% actually used probabilistic language for the open-response question.

58% felt that probabilistic language was not an appropriate direction for the field.
  • The most common concern was that “weaker,” more uncertain terms could be misunderstood by jurors or used by defense attorneys to “undersell” the strength of their findings.
  • Another concern was that a viable probabilistic model was not ready for use in a field as subjective as friction ridge analysis, and may not even be possible.
  • While many felt that probabilistic language may be more accurate, they preferred categorical terms as “stronger” and more in line with over a century of institutional norms.

Focus on the future


The views of the participants were not a handful of outdated “myths” that need to be debunked, but a wide and varied array of strongly held beliefs. Many practitioners are concerned about “consumption” issues (how lawyers, judges, and juries will understand the evidence) that are arguably outside their role as forensic scientists.

While many participants expressed interest in probabilistic reporting, they also felt they were not properly trained to understand probabilities since it has never been a formal requirement. Additional education and resources could help examiners more confidently adopt the practice.

Insights: Judges and Forensic Science Education: A national survey

INSIGHTS

Judges & Forensic Science Education:

A national survey

OVERVIEW

Forensic evidence can play a crucial role in adjudicating a criminal trial. As scientific authorities scrutinize the reliability of many forensic methods, it is important for judges to be trained and educated to make more informed decisions. Since most judges lack a scientific background, additional training may play an important role. However, the effectiveness of such training, and how it affects judges’ perception of forensic evidence, is unknown.

Lead Researchers

Brandon L. Garrett
Brett O. Gardner
Evan Murphy
Patrick Grimes

Journal

Forensic Science International

Publication Date

April 2021

Publication Number

IN 119 IMPL

Goals

In collaboration with the National Judicial College (NJC), researchers conducted a survey of 164 judges from 39 states who had participated in NJC programs in order to:

Learn judges’ backgrounds and training in forensic science.

Discover their views on the reliability of modern forensic disciplines.

Understand what additional materials and training judges need to better evaluate forensic science.

The Study

1

In part one, the judges described their past experience with forensic science and estimated a percentage of past cases that dealt with forensic evidence.

2

In part two, the judges reported the amount of training they had involving forensic science, described the availability of training materials, and identified the resources they want in the future.

3

In part three, the judges described their familiarity with statistical methods and estimated the error rates in common forensic science disciplines.

Results

On average, the judges estimated that:

37.4% of past cases involved forensic evidence

14.7% of past cases included hearings on the admissibility of evidence

13.5% of past cases had forensic evidence ruled inadmissible
  • An overwhelming majority received training on forensic evidence through further education as a judge but suggested more of this training should occur in law school.
  • They believed that DNA evidence was the most reliable form of forensic evidence, and that bitemarks and shoeprints were the least reliable.
  • Judges who reported more extensive training were more likely to view themselves as gatekeepers of valid forensic science testimony and reported a higher percentage of evidence they ruled inadmissible.
  • On average, judges seem to underestimate the error rate of most forensic methods, though to a much lesser extent than lay people, lawyers, or even some forensic practitioners.
  • Many of the judges endorsed more than one week of training specific to forensic science evidence.

Focus on the future


The surveyed judges typically relied on journal articles, expert testimony, case law, and further education, but noted that these resources were not readily accessible. Additional education would help judges in their role as gatekeepers, preventing “junk science” from being presented at trial.

Judges expressed a desire for additional training and online resources, especially in fields they rated as more reliable, such as digital, DNA, and toxicology evidence; these resources would allow judges to make more informed rulings on technical subjects.