AAFS 2022 Recap: Understanding Juror Comprehension of Forensic Testimony: Assessing Jurors’ Decision Making and Evidence Evaluation


By Samantha Springer, a research assistant at the Center for Statistics and Applications in Forensic Evidence (CSAFE)


During the 74th annual American Academy of Forensic Sciences (AAFS) scientific conference, Cassidy Koolmees, a graduate student in legal psychology at Florida International University (FIU), and her colleagues presented a new study titled Understanding Juror Comprehension of Forensic Testimony: Assessing Jurors’ Decision Making and Evidence Evaluation. The study builds on findings from multiple studies by CSAFE co-director Brandon Garrett and colleagues, which found that jurors’ evaluation of forensic testimony did not depend on the strength of the language used in that testimony. Aside from a slight decrease in conviction rates when inconclusive language was used, guilty verdicts remained stable across all conditions in which the language indicated a match. This result suggests that the language used to express a match in forensic testimony has little impact on a jury, regardless of the strength of the language or the credibility claimed by the expert.

Building on these findings, Koolmees and her colleagues examined whether jurors could distinguish low-quality testimony from high-quality testimony of forensic experts, using the language guidelines released by the Department of Justice (DOJ) in 2018 as an indicator of quality.

Study participants were assigned to one of six conditions, in which the number of violations of the DOJ language guidelines ranged from zero to five. Participants listened to a full mock trial that included the presentation of forensic evidence. Afterward, they were asked for their verdict and for ratings of aspects of the testimony, including their confidence in the verdict, the clarity of the forensic testimony, the credibility of the forensic expert, and the strength, quality, and usefulness of the forensic evidence.

Most of the dependent variables showed no statistically significant differences between conditions: confidence in the verdict, credibility of the expert, and the strength, usefulness, and clarity of the testimony were all consistent across groups.

The only statistically significant difference between conditions was in judgments of quality, and even there the difference appeared only when the zero-violation condition was compared with the four- and five-violation conditions. This suggests that jurors may notice a change in the quality of forensic testimony only when the quality is severely low.

Overall, the study found that, consistent with previous studies, mock jurors are not sensitive to the quality of forensic evidence or to differences in the language used by the experts presenting that evidence. Further research by the FIU group, currently being finalized, includes versions of the study in which jurors are made aware of the DOJ language guidelines before they are presented with expert testimony.

The researchers of the study share CSAFE’s desire for continued education for those involved in criminal trials. Suggestions put forth include simplified jury instructions and a video presentation of instructions. These proposed reforms align with the CSAFE goal of increasing education in forensic evidence for jurors, attorneys, judges, and other relevant parties.

CSAFE supports and oversees substantial contributions to training and education for a wide range of forensic science stakeholders. Explore CSAFE’s available learning opportunities and current training and education research projects at https://forensicstats.org/training-and-education/.


Publications referenced by Koolmees in her study:

How Jurors Evaluate Fingerprint Evidence: The Relative Importance of Match Language, Method Information, and Error Acknowledgment
Brandon Garrett and Gregory Mitchell

Mock Jurors’ Evaluation of Firearm Examiner Testimony
Brandon Garrett, Nicholas Scurich and William Crozier

AAFS 2022 Recap: An Internal Validation Study of the TopMatch 3D Scanner for Cartridge Cases

A CSAFE lab technician loads a tray of cartridge cases into the TopMatch 3D scanner.

By Samantha Springer, a research assistant at the Center for Statistics and Applications in Forensic Evidence (CSAFE)


At the 74th annual American Academy of Forensic Sciences (AAFS) scientific conference, Kayli Carrillo, a doctoral candidate at Sam Houston State University, presented a study that showed promising results for the future use of virtual comparison microscopy (VCM) in assisting forensic examiners with analyzing ballistic evidence. The study, performed by Carrillo and her colleagues at the Harris County Institute of Forensic Sciences in Houston, Texas, utilized a TopMatch VCM system identical to the microscopes used in CSAFE’s ballistics lab. CSAFE’s lab is part of the Roy J. Carver High Resolution Microscopy Facility at Iowa State University.

The internal validation study involved three stages of examination, in which cartridge cases from known and unknown sources were analyzed by multiple examiners. The three stages produced very few inconclusive determinations, and no identifications were false positives or false negatives. These results indicate high internal validity and show that virtual comparison microscopy, specifically the TopMatch software, can aid in forensic analysis.

Continued research will add a fourth stage to further evaluate the inconclusive determinations made by examiners in the study. This stage will compare conclusions reached using VCM with those reached using light comparison microscopy, also known as 2D microscopy. Based on the promising findings of this study, the Harris County Institute of Forensic Sciences plans to utilize TopMatch microscopy in the analysis of its cartridge cases.

CSAFE researchers have made great strides in developing statistical and scientific foundations for assessing and matching firearms and toolmarks. Learn more at https://forensicstats.org/firearms-and-toolmark-analysis/.

NIST Releases Results from a Black Box Study for Digital Forensic Examiners


The National Institute of Standards and Technology (NIST) has published the results of a black box study for digital forensic examiners. The report, released in February 2022, describes the study’s methodology and summarizes its results.

The study was conducted online and was open to anyone in the public or private sectors working in the digital forensics field. Participants examined and reported on simulated digital evidence from casework-like scenarios. NIST said the study’s goal was to assess the performance of the digital forensic community as a whole.

Results from a Black-Box Study for Digital Forensic Examiners (NISTIR 8412) can be viewed at https://nvlpubs.nist.gov/nistpubs/ir/2022/NIST.IR.8412.pdf.

From Results from a Black-Box Study for Digital Forensic Examiners, page 33:

Summary Key Takeaways

Despite the limitations of the study, two key takeaways about the state of the digital evidence discipline emerged:

  • Digital forensics examiners showed that they can answer difficult questions related to the analysis of mobile phones and personal computers. Questions ranged from basic, such as identifying who the user of the phone had contacted, to advanced questions that related to the use of the TOR browser.
  • The response to the study underscored the size, variety, and complexity of the field. The study received responses from examiners working in international, federal, state, local government, and private labs whose major work included law enforcement, defense, intelligence, and incident response/computer security. There were also responses from people outside of these areas.


Results Available from OSAC Registry Implementation Survey


The Organization of Scientific Area Committees for Forensic Science (OSAC) released the results from its first annual Registry Implementation Survey. The report, published in February 2022, provides a detailed look at the respondents and the implementation status of the 46 standards represented in the survey.

In the summer of 2021, OSAC released the survey, which targeted forensic science service providers from across the country. It was designed to help OSAC better understand how the standards on the OSAC Registry are being used, the challenges around standards implementation, and what support is needed to improve it.

The OSAC Registry Implementation Survey: 2021 Report is available at https://www.nist.gov/osac/osac-registry-implementation-survey.

From page 10 of the OSAC Registry Implementation Survey: 2021 Report:

Priority for Implementing Standards
When asked what priority survey participants considered standards implementation for their organization, half of the respondents (50%) said it was a medium priority, or important. This was followed by 34% of respondents indicating that implementation was a high priority, or very important. Twenty-three respondents (14.8%) indicated that implementation was a low priority or not a priority at this time (Figure 4).

Figure 4. Priorities for Standards Implementation

A Study on Improving Forensic Decision Making will be the Topic of CSAFE’s February Webinar

Figure 2 from the study shows sources of cognitive bias in sampling, observations, testing strategies, analysis and/or conclusions, that impact even experts. These sources of bias are organized in a taxonomy of three categories: case-specific sources (Category A), individual-specific sources (Category B) and sources that relate to human nature (Category C).

A new study that proposes a broad and versatile approach to strengthening expert decision making will be the focus of an upcoming Center for Statistics and Applications in Forensic Evidence (CSAFE) webinar.

The webinar, Improving Forensic Decision Making: A Human-Cognitive Perspective, will be held Thursday, Feb. 17 from 12–1 p.m. CST. It is free and open to the public.

Itiel Dror

During the webinar, Itiel Dror, a cognitive neuroscience researcher at University College London, will discuss his journal article, Linear Sequential Unmasking–Expanded (LSU-E): A general approach for improving decision making as well as minimizing noise and bias. The article was published in Forensic Science International: Synergy and was co-authored by Jeff Kukucka, associate professor of psychology at Towson University.

In the article, the authors introduce Linear Sequential Unmasking–Expanded (LSU-E), an approach that can be applied to all forensic decisions and that reduces noise and improves decisions “by cognitively optimizing the sequence of information in a way that maximizes information utility and thereby produces better and more reliable decisions.”

From the Abstract:

In this paper, we draw upon classic cognitive and psychological research on factors that influence and underpin expert decision making to propose a broad and versatile approach to strengthening expert decision making. Experts from all domains should first form an initial impression based solely on the raw data/evidence, devoid of any reference material or context, even if relevant. Only thereafter can they consider what other information they should receive and in what order based on its objectivity, relevance, and biasing power. It is furthermore essential to transparently document the impact and role of the various pieces of information on the decision making process. As a result of using LSU-E, decisions will not only be more transparent and less noisy, but it will also make sure that the contributions of different pieces of information are justified by, and proportional to, their strength.

To register for the February webinar, visit https://forensicstats.org/events/.

The CSAFE Spring 2022 Webinar Series is sponsored by the National Institute of Standards and Technology (NIST) through cooperative agreement 70NANB20H019.

ASCLD Forensic Research Committee Provides Useful Resources for Researchers and Practitioners


The American Society of Crime Laboratory Directors (ASCLD) Forensic Research Committee (FRC) works to identify the research, development, technology and evaluation needs and priorities for the forensic science community. The FRC has several initiatives and resources available on its website to aid both researchers and practitioners. Below, we highlight a few of those resources.

For more information, visit the FRC website: https://www.ascld.org/forensic-research-committee/.

Collaboration Hub

The FRC collaboration hub hosts the Researcher-Practitioner Collaboration Directory. The directory helps connect researchers with ongoing projects to practitioners who are willing to participate in the studies. The searchable directory includes descriptions of each project, including an abstract and the estimated participant time involved. Researchers can easily submit their projects for inclusion in the directory by completing an online form.

ASCLD Research Priorities

The FRC has created a list of research priorities that identifies key areas where impactful research would support the forensic science community and enhance lab operations. The research priorities list for 2022-2024 can be downloaded at https://www.ascld.org/wp-content/uploads/2021/12/ASCLD-Research-Priority-Areas-2022-2024.pdf.

Lightning Talks

The FRC hosts a virtual “Lightning Talks” series to highlight new and emerging research in all areas of forensic science. Each episode features three short talks given by practitioners, researchers or students. Previous Lightning Talks are archived on FRC’s YouTube page.

Laboratories and Educators Alliance Program (LEAP)

LEAP facilitates collaborative research between academia and forensic science laboratories. This program identifies forensic science needs and provides a platform for laboratories, researchers and students to seek projects aligning with their mutual research capabilities. The FRC website includes a map of LEAP partners, a short video explaining LEAP and sign-up forms for crime labs and universities. LEAP is a joint effort between ASCLD and the Council of Forensic Science Educators (COFSE).

Validation and Evaluation Repository

The Validation and Evaluation Repository is a list of unique validations and evaluations conducted by forensic labs and universities. ASCLD’s summary of the repository states, “It is ASCLD’s hope that this listing will foster communication and reduce unnecessary repetition of validations and evaluations to benefit the forensic community.” The searchable repository is available at https://www.ascld.org/validation-evaluation-repository/.

Research Executive Summaries

The Future Forensics Subcommittee of the FRC has initiated the publication of brief executive summaries of the recent literature within the forensic sciences. The summaries are written by ASCLD members and are meant to provide a brief overview of noteworthy publications and trends in the literature. Currently, the summaries include reviews in the areas of fingermarks, controlled substances, paint and glass evidence, forensic toxicology, forensic biology, gunshot residue analysis and firearms and toolmarks.

CSAFE’s October Webinar will Feature New Black Box Study on Bloodstain Pattern Analysis

Figure 1 from the article shows examples of bloodstain patterns used in the study.

A new study assessing the accuracy and reproducibility of practicing bloodstain pattern analysts’ conclusions will be the focus of an upcoming Center for Statistics and Applications in Forensic Evidence (CSAFE) webinar.

The webinar, Bloodstain Pattern Analysis Black Box Study, will be held Thursday, Oct. 14 from 11 a.m.–noon CDT. It is free and open to the public.

During the webinar, Austin Hicklin, director at the Forensic Science Group; Paul Kish, a forensic consultant; and Kevin Winer, director at the Kansas City Police Crime Laboratory, will discuss their recently published article, Accuracy and Reproducibility of Conclusions by Forensic Bloodstain Pattern Analysts. The article was published in the August issue of Forensic Science International.

From the Abstract:

Although the analysis of bloodstain pattern evidence left at crime scenes relies on the expert opinions of bloodstain pattern analysts, the accuracy and reproducibility of these conclusions have never been rigorously evaluated at a large scale. We investigated conclusions made by 75 practicing bloodstain pattern analysts on 192 bloodstain patterns selected to be broadly representative of operational casework, resulting in 33,005 responses to prompts and 1760 short text responses. Our results show that conclusions were often erroneous and often contradicted other analysts. On samples with known causes, 11.2% of responses were erroneous. The results show limited reproducibility of conclusions: 7.8% of responses contradicted other analysts. The disagreements with respect to the meaning and usage of BPA terminology and classifications suggest a need for improved standards. Both semantic differences and contradictory interpretations contributed to errors and disagreements, which could have serious implications if they occurred in casework.

The study was supported by a grant from the U.S. National Institute of Justice. Kish and Winer are members of CSAFE’s Research and Technology Transfer Advisory Board.

To register for the October webinar, visit https://forensicstats.org/events.

The CSAFE Fall 2021 Webinar Series is sponsored by the National Institute of Standards and Technology (NIST) through cooperative agreement 70NANB20H019.

CSAFE researchers are also undertaking projects to develop objective analytic approaches to enhance the practice of bloodstain pattern analysis. Learn more about CSAFE’s BPA projects at forensicstats.org/blood-pattern-analysis.

OSAC Public Update Meeting Set for Wednesday, Sept. 29


Plan to attend the Organization of Scientific Area Committees (OSAC) for Forensic Science Public Update Meeting on Sept. 29, 2021, from 1–4:30 p.m. EDT.

This virtual event will feature presentations from the chairs of OSAC’s Forensic Science Standards Board and seven Scientific Area Committees. Each presenter will describe the standards their unit is working on and discuss research gaps, challenges, and priorities for the coming year. Attendees will have the opportunity to ask questions and provide feedback. There is no fee to attend, but registration is required.

OSAC works to strengthen forensic science by facilitating the development of technically sound standards and promoting the use of those standards by the forensic science community. OSAC’s 800-plus members and affiliates draft and evaluate forensic science standards through a transparent, consensus-based process that allows for participation and comment by all stakeholders. For more information about OSAC and its programs, visit https://www.nist.gov/osac.

The meeting agenda and registration information are available on the OSAC website.

OSAC Registry Implementation Survey


The Organization of Scientific Area Committees for Forensic Science (OSAC) is asking forensic science service providers to complete an online survey to help it understand how organizations are using standards on the OSAC Registry and what support they may need to improve standards implementation.

According to the OSAC Registry Implementation Survey webpage, “OSAC wants to better understand how the standards on the Registry are currently being used, the challenges around standards implementation, and what support is needed to improve it. The OSAC Registry Implementation Survey will be a tool we use to collect this information on an annual basis.”

OSAC says that after the survey closes on Aug. 31, it will analyze the responses, and the results will be published in OSAC’s fall newsletter at the end of October.

Forensic science service providers across the country are encouraged to complete this survey (one response per location). It will take approximately 15-45 minutes to complete and must be done in one sitting.

More information and a link to the survey are available at https://www.nist.gov/osac/osac-registry-implementation-survey.

GAO Releases a Second Report on Forensic Science Algorithms

From GAO Report 21-435
GAO-21-435 — Forensic Technology: Algorithms Strengthen Forensic Analysis, but Several Factors Can Affect Outcomes

In July, the U.S. Government Accountability Office (GAO) released the report, Forensic Technology: Algorithms Strengthen Forensic Analysis, but Several Factors Can Affect Outcomes.

This is the second report in a two-part series of technology assessments responding to a request to examine the use of forensic algorithms in law enforcement. The first report, Forensic Technology: Algorithms Used in Federal Law Enforcement (GAO-20-479SP), described forensic algorithms used by federal law enforcement agencies and how they work.

In this report, GAO conducted an in-depth analysis of three types of algorithms used by federal law enforcement agencies and selected state and local law enforcement agencies: latent print, facial recognition and probabilistic genotyping. The report discusses

  1. the key performance metrics for assessing latent print, facial recognition and probabilistic genotyping algorithms;
  2. the strengths of these algorithms compared to related forensic methods;
  3. the key challenges affecting the use of these algorithms and the associated social and ethical implications; and
  4. options policymakers could consider to address these challenges.

GAO developed three policy options that could help address challenges related to law enforcement use of forensic algorithms. The policy options identify possible actions by policymakers, which may include Congress, other elected officials, federal agencies, state and local governments and industry.

In conducting this assessment, GAO interviewed federal officials as well as representatives of select non-federal law enforcement agencies and crime laboratories, algorithm vendors, academic researchers and nonprofit groups. It also convened an interdisciplinary meeting of 16 experts with assistance from the National Academies of Sciences, Engineering, and Medicine, and reviewed relevant literature. CSAFE co-director Karen Kafadar, professor and chair of statistics at the University of Virginia, participated in the meeting, as did Will Guthrie, a member of CSAFE’s Research and Technology Transfer Advisory Board. Guthrie is chief of the Statistical Engineering Division at the National Institute of Standards and Technology.

Learn More:

CSAFE researchers are developing open-source software tools that give forensic scientists and researchers peer-reviewed, transparent methods for analyzing forensic evidence. These automatic matching algorithms provide objective and reproducible scores as a foundation for a fair judicial process. Learn more about CSAFE’s open-source software tools.
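To give a sense of what an objective, reproducible score means in practice, here is a purely illustrative sketch, not CSAFE’s actual method: a simple normalized cross-correlation between two hypothetical one-dimensional “signatures” (for example, profiles measured from toolmarks). Given the same inputs, it always returns the same number, which is the property that makes such scores auditable.

```python
# Illustrative only: a toy similarity score between two 1-D "signatures."
# This is NOT any specific CSAFE algorithm; it simply shows how an
# objective, reproducible score can be computed from raw measurements.

def similarity_score(a, b):
    """Return the normalized cross-correlation of two equal-length signals:
    a value in [-1, 1], where 1 means the signals have an identical shape."""
    n = len(a)
    if n == 0 or n != len(b):
        raise ValueError("signatures must be non-empty and of equal length")
    mean_a = sum(a) / n
    mean_b = sum(b) / n
    da = [x - mean_a for x in a]          # center each signal on its mean
    db = [y - mean_b for y in b]
    num = sum(x * y for x, y in zip(da, db))
    den = (sum(x * x for x in da) * sum(y * y for y in db)) ** 0.5
    if den == 0:
        raise ValueError("a constant signature has no shape to compare")
    return num / den

# Two profiles with the same shape score 1.0; mirrored shapes score -1.0.
same_shape = similarity_score([1, 2, 3], [2, 4, 6])      # 1.0
opposite   = similarity_score([1, 2, 3], [3, 2, 1])      # -1.0
```

Because the score is a deterministic function of the measurements, any analyst or court can recompute and verify it, which is the transparency open-source tools aim to provide.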