New Study Explores Public Beliefs About the Reliability of Forensic Science

A forensic scientist looking at prints on a computer screen.

By Samantha Springer, a research assistant at the Center for Statistics and Applications in Forensic Evidence (CSAFE)

 

As with many scientific fields, forensic science has faced public and expert criticism since its inception. In response, the discipline must find ways to increase confidence in its methods and their use. One potential means of doing so is examined in a recent paper by Carlos Miguel Ibaviosa and Jason M. Chin, who posit that increased transparency and openness could solve forensic science’s public image problem.

The paper, “Beyond CSI: Calibrating public beliefs about the reliability of forensic science through openness and transparency,” examines the criticisms surrounding forensic science in three stages. To begin, the authors look closely at studies of the CSI Effect, the hypothesis that procedural shows like CSI, which portray forensic science as infallible, give the public an unrealistic view of the field that could in turn affect how forensic evidence is weighed in court. Most versions of the hypothesis assume this effect leads the public to view forensic science in an overly positive and trusting way. After reviewing studies testing these hypotheses, however, the paper finds the results inconsistent and the methods poorly constructed.

After determining that the general public is not strongly swayed by versions of forensic science they see depicted on TV, the authors review studies performed over the past 11 years that have found the public’s view of forensic science to be one of skepticism.

Although the five studies reviewed were not all performed by the same researchers, their methods were relatively comparable, and all looked at participants’ ratings of reliability for multiple different forensic tests, including DNA and bitemark evidence.

Overall, data suggested a disconnect between what experts and research regard as accurate and what the public understands as being accurate.

An example of this can be found in DNA analysis. Although DNA analysis is viewed as the gold standard within the forensic science community, two studies conducted 11 years apart showed a decrease in public trust: a 94% reliability rating in a 2008 study fell to 83% in 2019.

Public views of unvalidated methods such as bitemark analysis were also miscalibrated relative to actual scientific findings. One study conducted in 2015 found that the public rated the reliability of bitemark analysis at 89.26%, higher than the 88.15% rating given to the much more reliable fingerprint analysis.

The differences in language, sample size, and other sample characteristics across the studies prevent a definitive conclusion about public views of forensic science over time and their relation to expert findings on reliability. However, the authors suggest the findings still indicate a cause for concern for forensic evidence practitioners and others in the forensic science field.

Having reviewed these studies and found little evidence that the CSI Effect shapes public opinion, the authors turn to a new question: what, then, is responsible for the disconnect between experts and the public over the reliability of forensic evidence?

Their suggested answer points to media coverage. The ability of DNA methodologies to detect past errors, many of which had led to wrongful convictions, was widely reported in the news, as were reports by academic bodies criticizing some of the methods responsible for those miscarriages of justice. This coverage could have contributed to an overall public distrust of forensic science that must now be mitigated, and so the paper turns to ways in which the field can bolster its credibility.

The paper’s recommendations for improved public perception and credibility focus on three components supported by research:

  1. Epistemic trust. Epistemic trust is the trust we place in knowledge given to us by others. On the part of the public, this trust rests on the perceived competence of the researcher, the benevolence they show toward improving society, and the integrity with which they follow scientific principles. Researchers who acknowledge mistakes and uncertainty in their work secure the public’s epistemic trust.
  2. The promotion of openness and transparency in the scientific field. When this is done, high-quality science will be distinguishable from low-quality science, as the public and scientists involved will be able to review the data and methods of different studies. Even an expressed intention of transparency has been shown to strengthen the epistemic trust of the field.
  3. Alignment with public expectations. Studies found that participants view questionable research practices, such as selective reporting, as morally unacceptable, even though such practices are not illegal. Respecting these preferences shows a willingness to engage with the public as well as a dedication to sound methodology.

Read the Study

Beyond CSI: Calibrating public beliefs about the reliability of forensic science through openness and transparency, Science & Justice, published online Feb. 17, 2022.

OSAC Footwear & Tire Subcommittee Develops Process Map

An overview of the Footwear and Tire Examination Process Map developed by the OSAC Footwear & Tire Subcommittee

By Samantha Springer, a research assistant at the Center for Statistics and Applications in Forensic Evidence (CSAFE)

 

On June 8, 2022, the Organization of Scientific Area Committees for Forensic Science’s (OSAC) Footwear & Tire Subcommittee published a current practice document for footwear and tire examination.

The 37-page document consists of multiple process maps that cover a range of practices in the field of footwear and tire examination, including casts, gel lifts, known and unknown assessments, and different types of substrates with or without the presence of blood. Additionally, the flowcharts cover administrative processes such as verification and reporting, technical assessments, and administrative assessments.

The current practice document defines its purpose as five-fold:

  • help improve efficiencies while reducing errors,
  • highlight gaps where further research or standardization would be beneficial,
  • assist with training new examiners,
  • develop specific laboratory policies, and
  • identify best practices.

The document represents current practices rather than best practices; it does not necessarily endorse all of the methodologies shown in the process maps, but includes them so that practitioners can find the process their own lab uses. According to an article published by the National Institute of Standards and Technology (NIST), David Kanaris, chair of the OSAC subcommittee, plans to release a more interactive version of the document in the future.

NIST facilitated the development of this process map through a collaboration between the NIST Forensic Science Program and OSAC’s Footwear & Tire Subcommittee.

Other OSAC subcommittees have released their process maps for other forensic science areas, including speaker recognition, DNA, friction ridge examinations, and firearms examinations.

CSAFE researchers Alicia Carriquiry, CSAFE director, and Jacqueline Speir, an associate professor at West Virginia University, are members of the OSAC Footwear & Tire Subcommittee.

Learn more about CSAFE’s work on footwear impression analysis at https://forensicstats.org/footwear/.

Podcast Episode Discusses Weakness in Eyewitness Identifications and Their Use in the Courtroom

Empty Courtroom

By Samantha Springer, a research assistant at the Center for Statistics and Applications in Forensic Evidence (CSAFE)

 

Eyewitness identification was discussed in episode seven of The Ongoing Transformation, a podcast sponsored by Arizona State University and the National Academies of Sciences, Engineering, and Medicine. Jed Rakoff, senior United States district judge for the Southern District of New York, who worked with the National Academies to publish their 2014 report on eyewitness identification, spoke about his book, “Why the Innocent Plead Guilty and the Guilty Go Free: And Other Paradoxes of Our Broken Legal System.”

Although eyewitness identification is a form of evidence that jurors find very compelling, Rakoff suggested there are many reasons it should be met with more skepticism. Of the 375 exonerations the Innocence Project has been involved with since 1989, nearly 70% involved eyewitness misidentification.

Some of the reasons for these misidentifications are simple situational causes, such as lighting, an obscured view, and a tendency to focus on a weapon rather than on the details of the person handling it.

Other reasons to be cautious of eyewitness testimony are more psychological. One concern Rakoff mentions is the racial effect, in which people are better at distinguishing minute facial details of members of their own race than of a different race. Another factor at play is memory. In an example given by Rakoff, an eyewitness beginning a photo lineup may hold only a rough image of a person they saw for a few moments. After they pick a photo, the details from their sighting and the details in the photo begin to merge, until the eyewitness testifying at trial months later is certain of their wrongful identification.

One solution Rakoff suggests for decreasing the high number of eyewitness misidentifications is to educate prosecutors on the fallibility of memory and vision so they can identify when those flaws affect an identification. He suggests this could be done by replicating a program required of all federal judges in the United States, which he dubs “baby judge school” but whose technical name is the “Phase 1 Orientation Seminar for Newly Appointed District Judges.” This program educates judges on many aspects of the legal system, from the ethical concerns they will need to be aware of to how to organize caseloads and make evidentiary decisions. Rakoff believes a similar program could teach prosecutors more about eyewitness identifications and their limitations.

Rakoff is also in favor of adopting a U.K. practice in which criminal prosecutors spend six months working as a criminal defense attorney every three years. He believes that, among other things, this can provide prosecutors with important insights on how to handle forensic evidence in cases.

Regarding forensic science reform in general, Rakoff believes the National Commission on Forensic Science, created under President Obama and whose term lapsed under President Trump, should be renewed. In its four years, the commission made 59 recommendations to the Department of Justice that could also be applied to state police and prosecutors.

Additionally, Rakoff believes a National Institute of Forensic Sciences should be created. This institute was a suggestion made in the National Academy of Sciences’ 2009 report Strengthening Forensic Science in the United States: A Path Forward. According to the report, the institution would consist of unbiased scientists with no connections to law enforcement or crime labs, who would review different forensic science methods and determine how each could be improved.

To end the interview, Rakoff stated that despite the flaws and need for reform he’s seen in the criminal justice system, he’s optimistic for the future.

To listen to or read the transcript from Episode 7: Shaky Science in the Courtroom, visit https://issues.org/episode-7-shaky-forensic-science-courtroom-rakoff/.

AAFS Cooperative Agreement with NIST Provides Standards Resources and Training to the Forensic Science Community

AAFS Standards Resources & Training

The American Academy of Forensic Sciences (AAFS) announced in December 2021 a cooperative agreement with the National Institute of Standards and Technology (NIST) to develop training, tools and resources to enhance implementation efforts and broaden awareness of forensic science standards among communities of interest.

According to the AAFS news release, “Training will address technical aspects of the standards as well as challenges, practical solutions and benefits of adoption. Resources, including auditing checklists for compliance monitoring and gap analysis, will also be developed, as well as factsheets, understandable to the layperson.”

AAFS said these resources would help advance the implementation of standards and guidelines listed on the Organization of Scientific Area Committees for Forensic Science (OSAC) Registry.

The standards training and resources can be found on the AAFS website at www.aafs.org/research-insights-featured/standards-resources-and-training. The resources are available at no cost to the public.

The webpage includes information about the cooperative agreement, upcoming webinars, videos on the standards, standards checklists (coming soon) and the AAFS Standards Factsheets.

The AAFS Standards Factsheets provide a summary of each standard and highlight its purpose, why it is important, and what its benefits are. AAFS notes that the factsheets are in continuous production, and more will come soon. There are currently 12 published factsheets available to download.

The factsheets include:

  • ANSI/ASB Standard 018 Standard for Validation of Probabilistic Genotyping Systems
  • ANSI/ASB Standard 020 Standard for Validation Studies of DNA Mixtures, and Development and Verification of a Laboratory’s Mixture Interpretation Protocol
  • ANSI/ASB Standard 036 Standard Practices for Method Validation in Forensic Toxicology
  • ANSI/ASB Standard 037 Guidelines for Opinions and Testimony in Forensic Toxicology
  • ANSI/ASB Standard 040 Standard for Forensic DNA Interpretation and Comparison Protocols
  • ANSI/ASB Standard 061 Firearms and Toolmarks 3D Measurement Systems and Measurement Quality Control
  • ASTM E2329-17 Standard Practice for Identification of Seized Drugs
  • ASTM E2548-16 Standard Guide for Sampling Seized Drugs for Qualitative and Quantitative Analysis
  • ASTM E3245-20e1 Standard Guide for Systematic Approach to the Extraction, Analysis, and Classification of Ignitable Liquids and Ignitable Liquid Residues in Fire Debris Samples
  • ASTM E3260-21 Standard Guide for Forensic Examination and Comparison of Pressure Sensitive Tapes
  • NFPA-921 Guide to Fire and Explosion Investigations
  • NFPA-1033 Standard for Professional Qualifications for Fire Investigations

The Center for Statistics and Applications in Forensic Evidence (CSAFE), a NIST Center of Excellence, has several researchers who serve on the OSAC Forensic Science Standards Board (FSSB), subcommittees and resource task groups. These include Jeff Salyards, a CSAFE research scientist, who serves as an FSSB member at large, and Danica Ommen, a CSAFE researcher, who chairs the Statistics Task Group. Learn more about how these groups help the development of scientifically sound standards and guidelines for the forensic science community at https://www.nist.gov/osac/osac-organizational-structure.

NIST Seeks Public Comment on Draft Report of Digital Forensic Methods

Working on a Laptop

The National Institute of Standards and Technology (NIST) has published Digital Investigation Techniques: A NIST Scientific Foundation Review. The draft report will be open for public comments through July 11, 2022.

The report reviews the methods that digital forensic experts use to analyze evidence from computers, mobile phones and other electronic devices.

According to a news release from NIST, the authors of the report examined peer-reviewed literature, documentation from software developers, test results on forensic tools, standards and best practices documents and other sources of information.

The news release also stated that the report discusses several challenges that digital forensic experts face, including the rapid pace of technological change, and recommends better methods for information-sharing among experts and a more structured approach to testing forensic tools.

NIST will host a webinar to discuss the draft report and its findings on June 1 from 1–3 p.m. EDT. For more information about the webinar and to register, visit www.nist.gov/news-events/events/2022/06/webinar-digital-investigation-techniques-nist-scientific-foundation.

Read the full news release on the report at www.nist.gov/news-events/news/2022/05/nist-publishes-review-digital-forensic-methods.

The Center for Statistics and Applications in Forensic Evidence (CSAFE), a NIST Center of Excellence, conducts research addressing the need for forensic tools and methods for digital evidence. Learn more about this research at forensicstats.org/digital-evidence.

The Innocence Project: 30 Years of Advocating for Justice Reform

Innocence Project

By Samantha Springer, a research assistant at the Center for Statistics and Applications in Forensic Evidence (CSAFE)

 

As the Innocence Project enters its 30th year, Christina Swarns, executive director of the Innocence Project, reflects in an open letter on the challenges and opportunities that lie ahead.

In the letter, In the Vanguard of Justice Reform: The Road Ahead, Swarns takes inventory of what the Innocence Project has learned throughout its three decades in the field of forensic science and reminds all of us in the discipline what the path forward will look like.

Although the Innocence Project primarily works with DNA evidence, the organization plans to extend its advocacy to some cases involving non-DNA evidence, including research areas CSAFE specializes in. Its foundational pillars of Restoring Freedom, Transforming Systems and Advancing the Movement have relevance not only for CSAFE but for all parties interacting with the criminal justice system.

Expanding Knowledge

One of the Innocence Project’s goals is to create a comprehensive literacy program to educate all players in the criminal process on the science of forensic evidence, including judges, attorneys and jurors. The program will help legal professionals understand the foundations of scientific evidence with the goal of reducing the rate of wrongful convictions based on the misapplication of forensic science. CSAFE is currently working with the Innocence Project on this program.

Swarns writes, “We will launch an ambitious scientific literacy program to educate system actors — from public defenders, to prosecutors, to judges — about science and the limits of forensic evidence. Because too many attorneys have too little grasp of the foundations of scientific evidence, we believe that, with this program, we can and will reduce the rate of wrongful convictions based on the misapplication of forensic science. This program — which we are undertaking with leaders in the field like the Center for Statistics and Applications in Forensic Evidence (CSAFE) — will help legal professionals understand the basics of the evidence in the cases they handle.” 

The Innocence Project collaborates with CSAFE to increase and improve forensic science literacy. One of the results of this collaboration was contributing to a special issue of Significance Magazine, published in 2019 and dedicated solely to articles regarding forensic science and statistics.

Overturning Wrongful Convictions

Based on research by the National Registry of Exonerations, 52 percent of the wrongful conviction cases handled by the Innocence Project have been due in part to misapplications of forensic evidence. Examples of such errors include the use of unreliable evidence, misleading expert testimony and the submission of discredited forensic methods. Following the recommendations of multiple reports, including the National Academy of Sciences’ 2009 report Strengthening Forensic Science in the United States: A Path Forward, the Innocence Project will continue working with lawmakers to create legislation that allows retrial on the basis of discredited science.

One of the problems underlying the use of unreliable forensic evidence at trial is insufficient validation of the scientific methods being presented. To increase the validity, and therefore the quality, of the analysis methods in use, more research must be done. CSAFE is one of the organizations dedicated to such reform, conducting research on promising yet under-analyzed types of forensic evidence, such as footwear impression analysis. As well as working to bolster the validity of existing forensic evidence, CSAFE also researches new avenues in an evidence type when a previous method shows itself to be unreliable, as with the former use of comparative bullet lead analysis. Current CSAFE research looks into firearm analysis through toolmark comparison in both bullets and cartridge cases. Through these large and well-constructed studies, organizations like CSAFE further the potential of new forensic evidence analysis and bolster public opinion of forensic science.

Learn more about CSAFE’s key research areas in probability and statistics for pattern and digital evidence, cross-cutting issues and training and education at https://forensicstats.org/our-research/.

AAFS 2022 Recap: Understanding Juror Comprehension of Forensic Testimony: Assessing Jurors’ Decision Making and Evidence Evaluation

Empty Courtroom

By Samantha Springer, a research assistant at the Center for Statistics and Applications in Forensic Evidence (CSAFE)

 

During the 74th annual American Academy of Forensic Sciences (AAFS) scientific conference, Cassidy Koolmees, a graduate student in legal psychology at Florida International University (FIU), and her colleagues presented a new study entitled Understanding Juror Comprehension of Forensic Testimony: Assessing Jurors’ Decision Making and Evidence Evaluation. The study follows multiple studies conducted by CSAFE co-director Brandon Garrett et al., who found that a jury’s analysis of forensic testimony did not depend on the strength of the language used during the testimony. Aside from a slight decrease in conviction rates when inconclusive language was used, guilty verdicts remained stable across all conditions that used language indicating a match. This result suggests that the language used to express a match in forensic testimony has little impact on a jury, regardless of the strength of the language or the credibility the expert claims.

Building on these findings, Koolmees and her colleagues examined whether jurors could distinguish low-quality testimony from high-quality testimony of forensic experts, using the language guidelines released by the Department of Justice (DOJ) in 2018 as an indicator of quality.

Study participants were put into one of six language-related conditions, where the number of violations of the DOJ language guidelines ranged from zero to five. Participants listened to a full mock trial that included the presentation of forensic evidence. Afterward, they were asked their verdict and how they would rate aspects of the testimony, including confidence in the verdict, clarity of forensic testimony, credibility of the forensic expert, and strength, quality, and usefulness of the forensic evidence.

Most of the dependent variables showed no statistically significant differences between conditions; confidence in the verdict, credibility of the expert, and the strength, usefulness, and clarity of the testimony were all consistent across groups.

The only statistically significant difference between conditions was in judgments of quality. Guilty verdicts changed only when the zero-violation condition was compared with the four- and five-violation conditions, suggesting jurors may notice a change in the quality of forensic testimony only when that quality is severely low.

Overall, the study found that, consistent with previous research, mock jurors are not sensitive to the quality of forensic evidence or to the differences in language used by the experts presenting that evidence. Further research by the FIU group, currently being finalized, includes versions of the study in which jurors are made aware of the DOJ language guidelines before they are presented with expert testimony.

The researchers of the study share CSAFE’s desire for continued education for those involved in criminal trials. Suggestions put forth include simplified jury instructions and a video presentation of instructions. These proposed reforms align with the CSAFE goal of increasing education in forensic evidence for jurors, attorneys, judges, and other relevant parties.

CSAFE supports and oversees substantial contributions to training and education for a wide range of forensic science stakeholders. Explore CSAFE’s available learning opportunities and current training and education research projects at https://forensicstats.org/training-and-education/.

 

Publications referenced by Koolmees in her study:

How Jurors Evaluate Fingerprint Evidence: The Relative Importance of Match Language, Method Information, and Error Acknowledgment
Brandon Garrett and Gregory Mitchell

Mock Jurors’ Evaluation of Firearm Examiner Testimony
Brandon Garrett, Nicholas Scurich and William Crozier

AAFS 2022 Recap: An Internal Validation Study of the TopMatch 3D Scanner for Cartridge Cases

A CSAFE lab technician loads a tray of cartridge cases into the TopMatch 3D scanner.

By Samantha Springer, a research assistant at the Center for Statistics and Applications in Forensic Evidence (CSAFE)

 

At the 74th annual American Academy of Forensic Sciences (AAFS) scientific conference, Kayli Carrillo, a doctoral candidate at Sam Houston State University, presented a study with promising results for the future use of virtual comparison microscopy (VCM) in assisting forensic examiners with analyzing ballistic evidence. The study, performed by Carrillo and her colleagues at the Harris County Institute of Forensic Sciences in Houston, Texas, utilized a TopMatch VCM system identical to the microscopes used in CSAFE’s ballistics lab. CSAFE’s lab is part of the Roy J. Carver High Resolution Microscopy Facility at Iowa State University.

The internal validation study involved three stages of examination in which multiple examiners analyzed cartridge cases from known and unknown sources. The three phases produced very few inconclusive determinations and no false positive or false negative identifications. These results indicate high internal validity and show that virtual comparison microscopy, and the TopMatch software specifically, can aid in forensic analysis.

Continued research will adopt a fourth step to further evaluate the inconclusive determinations made in the study by examiners. This step will compare such conclusions found when using VCM versus light comparison microscopy, alternatively known as 2D microscopy. Based on the promising findings of this study, the Harris County Institute of Forensic Sciences plans to utilize TopMatch microscopy in the analysis of their cartridge cases.

CSAFE researchers have made great strides in developing statistical and scientific foundations for assessing and matching firearms and toolmarks. Learn more at https://forensicstats.org/firearms-and-toolmark-analysis/.

NIST Releases Results from a Black Box Study for Digital Forensic Examiners

NIST Black Box Study for Digital Forensic Examiners

The National Institute of Standards and Technology (NIST) has published the results from a black box study for digital forensic examiners. The study, released in February 2022, describes the methodology used in the study and summarizes the results.

The study was conducted online and was open to anyone in the public or private sectors working in the digital forensics field. Participants examined and reported on simulated digital evidence from casework-like scenarios. NIST said the study’s goal was to assess the performance of the digital forensic community as a whole.

Results from a Black-Box Study for Digital Forensic Examiners (NISTIR 8412) can be viewed at https://nvlpubs.nist.gov/nistpubs/ir/2022/NIST.IR.8412.pdf.

From Results from a Black-Box Study for Digital Forensic Examiners, page 33:

Summary Key Takeaways

Despite the limitations of the study, two key takeaways about the state of the digital evidence discipline emerged:

  • Digital forensics examiners showed that they can answer difficult questions related to the analysis of mobile phones and personal computers. Questions ranged from basic, such as identifying who the user of the phone had contacted, to advanced questions that related to the use of the TOR browser.
  • The response to the study underscored the size, variety, and complexity of the field. The study received responses from examiners working in international, federal, state, local government, and private labs whose major work included law enforcement, defense, intelligence, and incident response/computer security. There were also responses from people outside of these areas.

 

Results Available from OSAC Registry Implementation Survey

OSAC Registry Implementation Survey: 2021 Report

The Organization of Scientific Area Committees for Forensic Science (OSAC) released the results from its first annual Registry Implementation Survey. The report, published in February 2022, provides a detailed look at the respondents and the implementation status of the 46 standards represented in the survey.

In the summer of 2021, OSAC released the survey targeted at forensic science service providers from across the country. It was designed to help OSAC better understand how the standards on the OSAC registry are being used, the challenges around standards implementation and what support is needed to improve it.

The OSAC Registry Implementation Survey: 2021 Report is available at https://www.nist.gov/osac/osac-registry-implementation-survey.

From page 10 of the OSAC Registry Implementation Survey: 2021 Report:

Priority for Implementing Standards
When asked what priority survey participants considered standards implementation for their organization, half of the respondents (50%) said it was a medium priority, or important. This was followed by 34% of respondents indicating that implementation was a high priority, or very important. Twenty-three respondents (14.8%) indicated that implementation was a low priority or not a priority at this time (Figure 4).

Figure 4. Priorities for Standards Implementation