INSIGHTS

Forensic Science in Legal Education

OVERVIEW

In recent years, most states have adopted new expert admissibility standards that call for judges to assess the reliability of forensic expert evidence. However, little has been reported on the education and training law schools offer students regarding forensic evidence. Researchers funded by CSAFE conducted a survey to find out how many schools offer forensic science courses and to examine the state of forensics in legal education as a whole.

Lead Researchers

Brandon L. Garrett
Glinda S. Cooper
Quinn Beckham

Journal

Duke Law School Public Law & Legal Theory Series No. 2021-22

Publication Date

15 February 2021

Publication Number

IN 117 IMPL

Goals

1

Review the curricula of law schools across the United States.

2

Discover how many schools offer forensic science courses and what level of training they provide.

3

Discuss the survey results and their implications for the legal education system at large.

The Study

The 2009 National Academy of Sciences Report called for higher-quality scientific education in law schools, citing the lack of scientific expertise among lawyers and judges as a longstanding gap. The American Bar Association then adopted a resolution calling for greater forensic science training among law students.

In late 2019 and spring 2020, Garrett et al. searched online course listings for the 192 law schools included in the 2019 U.S. News & World Report rankings. They then sent questionnaires to the faculties of these schools and requested syllabi to examine the coverage of the forensic science courses the schools offered.

With the data in hand, Garrett et al. could examine the type of forensic science-related coverage at law schools in the United States.

Results

  • The survey identified only 42 distinct forensic science courses, and several schools offered none at all.
  • Across the board, the courses offered were all for upper-level students, and many courses were not offered every year, further limiting students’ access to forensic science training.

  • Only two of the reported courses mentioned teaching statistics or quantitative methods; the vast majority only covered legal standards for admissibility of expert evidence.

  • Compounding this lack of access was a low degree of demand. None of the responding faculty reported having large lecture courses; in fact, many reported class sizes of fewer than twenty students.

Focus on the future

The results of this survey suggest that the 2009 NAS Report’s call for higher standards in forensic science education remains highly relevant, and that continuing legal education will be particularly useful in addressing these needs.

In addition to specialty courses in forensics, more general courses in quantitative methods, during and after law school, could provide a better understanding of statistics for future and current lawyers and judges.

There is still much work to be done in order to ensure greater scientific literacy in the legal profession. To quote Jim Dwyer, Barry Scheck, and Peter Neufeld, “A fear of science won’t cut it in an age when many pleas of guilty are predicated on the reports of scientific experts. Every public defender’s office should have at least one lawyer who is not afraid of a test tube.”

Algorithmic Evidence in Criminal Trials

Guest Blog

Kori Khan
Assistant Professor
Department of Statistics, Iowa State University


 

We are currently in an era where machine learning and algorithms offer novel approaches to solving problems both new and old. Algorithmic approaches are swiftly being adopted for a range of issues: from making hiring decisions for private companies to sentencing criminal defendants. At the same time, researchers and legislators are struggling with how to evaluate and regulate such approaches.

The regulation of algorithmic output becomes simultaneously more complex and pressing in the context of the American criminal justice system. U.S. courts are regularly admitting evidence generated from algorithms in criminal cases. This is perhaps unsurprising given the permissive standards for admission of evidence in American criminal trials. Once admitted, however, the algorithms used to generate the evidence—which are often proprietary or designed for litigation—present a unique challenge. Attorneys and judges face questions about how to evaluate algorithmic output when a person’s liberty hangs in the balance. Devising answers to these questions inevitably involves delving into an increasingly contentious issue—access to the source code.

In criminal courts across the country, it appears most criminal defendants have been denied access to the source code of algorithms used to produce evidence against them. I write, “it appears,” because here, like in most areas of the law, empirical research into legal trends is limited to case studies or observations about cases that have drawn media attention. For these cases, the reasons for denying a criminal defendant access to the source code have not been consistent. Some decisions have pointed out that the prosecution does not own the source code and therefore is not required to produce it. Others implicitly acknowledge that the prosecution could be required to produce the source code and instead find that the defendant has not shown a need for access to it. It is worth emphasizing that these decisions have not found that the defendant does not need access to the source code, but rather that the defendant has failed to sufficiently establish that need. The underlying message in many of these decisions, whether implicit or explicit, is that there will be cases, perhaps quite similar to the case being considered, where a defendant will require access to source code to mount an effective defense. The question of how to handle access to the code in such cases does not have a clear answer.

Legal scholars are scrambling to provide guidance. Loosely speaking, proposals can be categorized into two groups: those that rely on existing legal frameworks and those that suggest a new framework might be necessary. For the former category, the heart of the issue is the tension between the intellectual property rights of the algorithm’s producer and the defendant’s constitutional rights. On the one hand, the producers of algorithms often have a commercial interest in ensuring that competitors do not have access to the source code. On the other hand, criminal defendants have the right to question the weight of the evidence presented in court.

There is a range of opinions on how to balance these competing interests. These opinions run along a spectrum from always allowing defendants access to the source code to rarely allowing it, though most fall somewhere in the middle. Some have suggested “front-end” measures in which lawmakers establish protocols to ensure the accuracy of algorithmic output before its use in criminal courts. These measures might include escrowing the source code, similar to how some states have handled voting technology. Within the courtroom, suggestions for protecting the producers of code include traditional measures, such as the protective orders commonly used in trade secret suits. Other scholars have proposed that a defendant might not always need access to the source code. For example, some suggest that if the producer of the algorithm is willing to run tests constructed by the defense team, this may be sufficient in many cases. Most of these suggestions make two key assumptions: 1) either legislators or defense attorneys can devise standards to identify the cases in which access to source code is necessary to evaluate an algorithm, and 2) they can devise these standards without access to the source code themselves.

These assumptions require legislators and defense attorneys to answer questions that the scientific community itself cannot answer. Outside of the legal setting, researchers are faced with a similar problem: how can we evaluate scientific findings that rely on computational research? For the scientific community, the answer for the moment is that we are not sure. There is evidence that the traditional methods of peer review are inadequate. In response, academic journals and institutes have begun to require that researchers share their source code and any relevant data. This is increasingly viewed as a minimal standard to begin to evaluate computational research, including algorithmic approaches. However, just as within the legal community, the scientific community has no clear answers for how to handle privacy or proprietary interests in the evaluation process.

In the past, forensic science methods used in criminal trials have largely been developed and evaluated outside the purview of the larger scientific community, often on a case-by-case basis. As both the legal and scientific communities face the challenge of regulating algorithms, there is an opportunity to expand existing interdisciplinary forums and create new ones.

Learn about source code in criminal trials by attending the Source Code on Trial Symposium on March 12 from 2:30 to 4 p.m. Register at https://forensicstats.org/source-code-on-trial-symposium/.

Publications and Websites Used in This Blog:

How AI Can Remove Bias From The Hiring Process And Promote Diversity And Inclusion

Equivant, Northpointe Suite Risk Need Assessments

The Case for Open Computer Programs

Using AI to Make Hiring Decisions? Prepare for EEOC Scrutiny

Source Code, Wikipedia

The People of the State of New York Against Donsha Carter, Defendant

Commonwealth of Pennsylvania Versus Jake Knight, Appellant

The New Forensics: Criminal Justice, False Certainty, and the Second Generation of Scientific Evidence

Convicted by Code

Machine Testimony

Elections Code, California Legislative Information

Trade Secret Policy, United States Patent and Trademark Office

Computer Source Code: A Source of the Growing Controversy Over the Reliability of Automated Forensic Techniques

Artificial Intelligence Faces Reproducibility Crisis

Author Guidelines, Journal of the American Statistical Association

Reproducible Research in Computational Science

INSIGHTS

Using Mixture Models to Examine Group Differences Among Jurors:

An Illustration Involving the Perceived Strength of Forensic Science Evidence

OVERVIEW

It is critically important for jurors to be able to understand forensic evidence, and just as important to understand how jurors perceive scientific reports. Researchers have devised a novel approach, using statistical mixture models, to identify subpopulations that appear to respond differently to presentations of forensic evidence.

Lead Researchers

Naomi Kaplan-Damary
William C. Thompson
Rebecca Hofstein Grady
Hal S. Stern

Journal

Law, Probability, and Risk

Publication Date

30 January 2021

Publication Number

IN 116 IMPL

Goals

1

Use statistical models to determine if subpopulations exist among samples of mock jurors.

2

Determine if these subpopulations have clear differences in how they perceive forensic evidence.

THE THREE STUDIES

Definition:

Mixture model approach:
a probabilistic model that detects subpopulations within a study population empirically, i.e., without a priori hypotheses about their characteristics.
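To make the idea concrete, here is a minimal illustrative sketch, not the authors' model: a two-component Gaussian mixture fit by the EM algorithm to simulated one-dimensional "evidence strength" ratings. The data, component count, and parameter values are all hypothetical; the point is only that the fit recovers two subpopulations without anyone specifying group membership in advance.

```python
# Hypothetical illustration of the mixture-model idea (not the study's code):
# fit a two-component 1-D Gaussian mixture with EM and recover subgroups.
import math
import random

def normal_pdf(x, mu, sigma):
    """Density of a normal distribution with mean mu and s.d. sigma."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def fit_two_component_mixture(data, iters=200):
    """Estimate weights, means, and s.d.s of a two-component mixture via EM."""
    data = sorted(data)
    half = len(data) // 2
    # Crude initialization: split the sorted data at the median.
    mu = [sum(data[:half]) / half, sum(data[half:]) / (len(data) - half)]
    sigma = [1.0, 1.0]
    weight = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility of each component for each observation.
        resp = []
        for x in data:
            p = [weight[k] * normal_pdf(x, mu[k], sigma[k]) for k in range(2)]
            total = sum(p)
            resp.append([pk / total for pk in p])
        # M-step: re-estimate weights, means, and standard deviations.
        for k in range(2):
            nk = sum(r[k] for r in resp)
            weight[k] = nk / len(data)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var = sum(r[k] * (x - mu[k]) ** 2 for r, x in zip(resp, data)) / nk
            sigma[k] = max(math.sqrt(var), 1e-6)
    return weight, mu, sigma

# Simulated ratings: one subpopulation centered near 3, another near 7.
rng = random.Random(0)
ratings = [rng.gauss(3, 0.5) for _ in range(100)] + [rng.gauss(7, 0.5) for _ in range(100)]
w, m, s = fit_two_component_mixture(ratings)
```

In a real study the responses would be juror ratings rather than simulated draws, and model comparison criteria would guide the choice of the number of components, but the mechanism is the same: group structure emerges from the fit itself rather than from a priori hypotheses.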

Results

  • Data from the three studies suggest that subpopulations exist and perceive statements differently.
  • The mixture model approach found subpopulation structures not detected by the hypothesis-driven approach.
  • One of the three studies found participants with higher numeracy tended to respond more strongly to statistical statements, while those with lower numeracy preferred more categorical statements.


Focus on the future

The existence of group differences in how evidence is perceived suggests that forensic experts need to present their findings in multiple ways. This would better address the full range of potential jurors.

These studies were limited by their relatively small numbers of participants. A larger study population could reveal more about the nature of population heterogeneity.

In future studies, Kaplan-Damary et al. recommend a greater number of participants and the consideration of a greater number of personal characteristics.

OSAC’s New Process Map Focuses on Firearms Examinations

Overview of the Firearms Process Map.

The Organization of Scientific Area Committees (OSAC) for Forensic Science, in partnership with the Association of Firearm and Tool Mark Examiners (AFTE), has just released a process map that describes the process that most firearms examiners use when analyzing evidence. The Firearms Process Map provides details about the procedures, methods and decision points most frequently encountered in firearms examination.

From the OSAC press release:

“This map can benefit the firearm discipline by providing a behind-the-scenes perspective into the various components and complexities involved in the firearms examination process. It can also be used to identify best practices, reduce errors, assist in training new examiners and highlight areas where further research or standardization would be beneficial.”

The Firearms Process Map was developed by the National Institute of Standards and Technology (NIST) Forensic Science Research Program in collaboration with OSAC’s Firearms & Toolmarks Subcommittee and AFTE.

Additional process maps are available from OSAC, including a Friction Ridge Process Map and Speaker Recognition Process Map.

Read the OSAC press release.