CSAFE’s October Webinar Will Feature New Black Box Study on Bloodstain Pattern Analysis

Figure 1 from the article shows examples of bloodstain patterns used in the study.

A new study assessing the accuracy and reproducibility of practicing bloodstain pattern analysts’ conclusions will be the focus of an upcoming Center for Statistics and Applications in Forensic Evidence (CSAFE) webinar.

The webinar, Bloodstain Pattern Analysis Black Box Study, will be held Thursday, Oct. 14, from 11 a.m. to noon CDT. It is free and open to the public.

During the webinar, Austin Hicklin, director at the Forensic Science Group; Paul Kish, a forensic consultant; and Kevin Winer, director at the Kansas City Police Crime Laboratory, will discuss their recently published article, Accuracy and Reproducibility of Conclusions by Forensic Bloodstain Pattern Analysts. The article was published in the August issue of Forensic Science International.

From the Abstract:

Although the analysis of bloodstain pattern evidence left at crime scenes relies on the expert opinions of bloodstain pattern analysts, the accuracy and reproducibility of these conclusions have never been rigorously evaluated at a large scale. We investigated conclusions made by 75 practicing bloodstain pattern analysts on 192 bloodstain patterns selected to be broadly representative of operational casework, resulting in 33,005 responses to prompts and 1760 short text responses. Our results show that conclusions were often erroneous and often contradicted other analysts. On samples with known causes, 11.2% of responses were erroneous. The results show limited reproducibility of conclusions: 7.8% of responses contradicted other analysts. The disagreements with respect to the meaning and usage of BPA terminology and classifications suggest a need for improved standards. Both semantic differences and contradictory interpretations contributed to errors and disagreements, which could have serious implications if they occurred in casework.

The study was supported by a grant from the U.S. National Institute of Justice. Kish and Winer are members of CSAFE’s Research and Technology Transfer Advisory Board.

To register for the October webinar, visit https://forensicstats.org/events.

The CSAFE Fall 2021 Webinar Series is sponsored by the National Institute of Standards and Technology (NIST) through cooperative agreement 70NANB20H019.

CSAFE researchers are also undertaking projects to develop objective analytic approaches to enhance the practice of bloodstain pattern analysis. Learn more about CSAFE’s BPA projects at forensicstats.org/blood-pattern-analysis.

OSAC Public Update Meeting Set for Wednesday, Sept. 29


Plan to attend the Organization of Scientific Area Committees (OSAC) for Forensic Science Public Update Meeting on Sept. 29, 2021, from 1–4:30 p.m. EDT.

This virtual event will feature presentations from the chairs of OSAC’s Forensic Science Standards Board and seven Scientific Area Committees. Each presenter will describe the standards their unit is working on and discuss research gaps, challenges, and priorities for the coming year. Attendees will have the opportunity to ask questions and provide feedback. There is no fee to attend, but registration is required.

OSAC works to strengthen forensic science by facilitating the development of technically sound standards and promoting the use of those standards by the forensic science community. OSAC’s 800-plus members and affiliates draft and evaluate forensic science standards through a transparent, consensus-based process that allows for participation and comment by all stakeholders. For more information about OSAC and its programs, visit https://www.nist.gov/osac.

The meeting agenda and registration information are available on the OSAC website.

OSAC Registry Implementation Survey


The Organization of Scientific Area Committees for Forensic Science (OSAC) is asking forensic science service providers to complete an online survey to understand how organizations are using standards on the OSAC Registry and what support they may need to improve standards implementation.

According to the OSAC Registry Implementation Survey webpage, “OSAC wants to better understand how the standards on the Registry are currently being used, the challenges around standards implementation, and what support is needed to improve it. The OSAC Registry Implementation Survey will be a tool we use to collect this information on an annual basis.”

OSAC says that after the survey closes on Aug. 31, they will analyze the responses, and the results will be published in OSAC’s fall newsletter at the end of October.

Forensic science service providers across the country are encouraged to complete this survey (one response per location). It will take approximately 15-45 minutes to complete and must be done in one sitting.

More information and a link to the survey are available at https://www.nist.gov/osac/osac-registry-implementation-survey.

GAO Releases a Second Report on Forensic Science Algorithms

From GAO Report 21-435
GAO-21-435 — Forensic Technology: Algorithms Strengthen Forensic Analysis, but Several Factors Can Affect Outcomes

In July, the U.S. Government Accountability Office (GAO) released the report, Forensic Technology: Algorithms Strengthen Forensic Analysis, but Several Factors Can Affect Outcomes.

This is the second report in a two-part series of technology assessments responding to a request to examine the use of forensic algorithms in law enforcement. The first report, Forensic Technology: Algorithms Used in Federal Law Enforcement (GAO-20-479SP), described forensic algorithms used by federal law enforcement agencies and how they work.

In this report, GAO conducted an in-depth analysis of three types of algorithms used by federal law enforcement agencies and selected state and local law enforcement agencies: latent print, facial recognition and probabilistic genotyping. The report discusses

  1. the key performance metrics for assessing latent print, facial recognition and probabilistic genotyping algorithms;
  2. the strengths of these algorithms compared to related forensic methods;
  3. the key challenges affecting the use of these algorithms and the associated social and ethical implications; and
  4. options policymakers could consider to address these challenges.

GAO developed three policy options that could help address challenges related to law enforcement use of forensic algorithms. The policy options identify possible actions by policymakers, which may include Congress, other elected officials, federal agencies, state and local governments and industry.

In conducting this assessment, GAO interviewed federal officials, select non-federal law enforcement agencies and crime laboratories, algorithm vendors, academic researchers and nonprofit groups. It also convened an interdisciplinary meeting of 16 experts with assistance from the National Academies of Sciences, Engineering, and Medicine, and reviewed relevant literature. CSAFE co-director Karen Kafadar, professor and chair of statistics at the University of Virginia, participated in the meeting, as did Will Guthrie, a CSAFE Research and Technology Transfer Advisory Board member. Guthrie is chief of the Statistical Engineering Division at the National Institute of Standards and Technology.

Learn More:

CSAFE researchers are developing open-source software tools that give forensic scientists and researchers peer-reviewed, transparent software for analyzing forensic evidence. These automatic matching algorithms provide objective and reproducible scores as a foundation for a fair judicial process. Learn more about CSAFE’s open-source software tools.

NIST Extends Deadline for Comment on Draft Report on DNA Mixture Interpretation Methods

Credit: N. Hanacek/NIST

The National Institute of Standards and Technology (NIST) has extended the deadline for public comments on NIST Internal Report 8351-DRAFT (DNA Mixture Interpretation: A Scientific Foundation Review). The new deadline is Aug. 23, 2021.

This report, currently published in draft form, reviews the methods that forensic laboratories use to interpret evidence containing a mixture of DNA from two or more people. To read more about the report or to submit comments, visit https://www.nist.gov/dna-mixture-interpretation-nist-scientific-foundation-review.

In case you missed it, NIST hosted the webinar, DNA Mixtures: A NIST Scientific Foundation Review. The webinar reviewed the contents and findings of the NISTIR 8351-DRAFT report, discussed feedback received up to that point in the public comment period, and provided an opportunity for interested parties and stakeholders to ask additional questions or seek clarification on the draft report. The recording of the webinar can be viewed at https://www.nist.gov/news-events/events/2021/07/webinar-dna-mixtures-nist-scientific-foundation-review.

DNA Mixtures: A Forensic Science Explainer
NIST has also published a webpage explaining DNA mixtures and why they are sometimes difficult to interpret: https://www.nist.gov/feature-stories/dna-mixtures-forensic-science-explainer

Handwriting Examiners in the Digital Age

Forensic handwriting examiners can only compare writing of the same type. In this case, only the second known sample can be compared to the questioned handwriting. Credit: NIST

From National Institute of Standards and Technology (NIST) News
Published June 3, 2021


 

As people write less by hand, will handwriting examination become irrelevant?

NIST considers the answer to that question in a recent news article. NIST suggests the answer is no, but only if the field of forensic handwriting examination changes to keep up with the times.

The article focuses on some of the recommendations from NIST’s updated report, Forensic Handwriting Examination and Human Factors: Improving the Practice Through a Systems Approach. The report recommends more research to estimate error rates for the field, which would allow juries and others to consider the potential for error when weighing an examiner’s testimony.

The report also recommends that experts avoid testifying in absolute terms or saying that an individual has written something to the exclusion of all other writers. Instead, experts should report their findings in terms of relative probabilities and degrees of certainty.

Melissa Taylor, the NIST human factors expert who led the group of authors, said that the report provides the forensic handwriting community with a road map for staying relevant. But the threat of irrelevance doesn’t come only from the decline in handwriting. Part of the challenge, she says, arises from the field of forensic science itself.

“There is a big push toward greater reliability and more rigorous research in forensic science,” said Taylor, whose research is aimed at reducing errors and improving job performance in handwriting examination and other forensic disciplines, including fingerprints and DNA. “To stay relevant, the field of handwriting examination will have to change with the times.”

To read the full article, visit https://www.nist.gov/news-events/news/2021/06/handwriting-examiners-digital-age.

For more information about CSAFE’s handwriting analysis research, visit https://forensicstats.org/handwriting-analysis/.

The CSAFE handwriting team has developed an open-source software tool called handwriter. This R package provides functions for identifying letters and features in handwritten documents. Learn more at https://github.com/CSAFE-ISU/handwriter.

DNA Mixture Interpretations: A Q&A With NIST’s John Butler

John Butler with DNA mixture data. Credit: NIST

From Taking Measure, the official blog of the National Institute of Standards and Technology (NIST)
Published July 28, 2021


 

Whether from skin cells, saliva, semen or blood, DNA from a crime scene is often collected and tested in a lab to see if a suspect’s DNA is likely a contributor to that sample or not. But every DNA sample tells a different story, and some samples are easier to interpret than others. The simplest type of DNA profile to interpret is one where the sample includes hundreds of cells from only one person. When two or more people have contributed to a sample, it’s called a DNA mixture. Some mixtures are so complicated that their stories remain a mystery even to the best forensic DNA experts.

John Butler, a Fellow at the National Institute of Standards and Technology (NIST), and a team of authors have recently completed a draft scientific foundation review of the different methods forensic laboratories use to interpret DNA mixtures. The team called for more interlaboratory participation in studies to demonstrate consistency in the methods and technology used in DNA mixture interpretation, as well as for public sharing of data. In this interview with NIST’s Christina Reed, Butler — who has over 30 years of experience with DNA profiling, is the author of five books on the subject, and has led training workshops on interpreting DNA mixtures — answers some basic questions about the importance of this fast-growing field of forensic science.

Read the full interview at https://www.nist.gov/blogs/taking-measure/dna-mixture-interpretations-qa-nists-john-butler.

Download DNA Mixture Interpretation: A Scientific Foundation Review at https://www.nist.gov/dna-mixture-interpretation-nist-scientific-foundation-review.

NIST held a webinar on DNA mixtures on July 21, 2021. The webinar was recorded and will be made available for on-demand viewing approximately 10 days after the event. For more information, visit https://www.nist.gov/news-events/events/2021/07/webinar-dna-mixtures-nist-scientific-foundation-review.

Reforming the Nation’s Forensic Labs

A crime scene analysis in Chicago. Scott Olson/Getty Images (From Slate.com)

On the heels of the recently reported Massachusetts drug lab scandal, Brandon L. Garrett and Peter Stout published an article on Slate.com calling for a critical look at redirecting funding to crime labs.

In their article, The Worsening Massachusetts Crime Lab Scandal is Just the Beginning, Garrett and Stout write:

Amid a national conversation over where to best allocate law enforcement dollars, we call for a critical look at redirecting funding to other agencies in the criminal justice system that are often forgotten and overlooked—namely, crime labs. Critically, those labs must have sound quality control and adequate resources and must be independent of law enforcement.

They highlight two cases where people were convicted and sentenced to prison based either on evidence from a crime scene contaminated by poorly trained police or on flawed testimony by a police examiner. Garrett and Stout note that these are not isolated cases:

The database Convicting the Innocent, which tracks the role forensics played in cases of people exonerated by DNA, shows that more than half of those 300-plus exonerees, who spent an average of 14 years in prison, were convicted based on flawed forensics.

They concluded their article by stating:

If the criminal justice system is to work properly, it needs access to science that is best supported in well-funded, independent, and scientist-led and -driven crime laboratories. Sound science and justice both demand accurate evidence, which means getting forensics right. And getting forensics right demands reform: accountability for our labs, independence, and adequate funding.

Garrett is co-director of the Center for Statistics and Applications in Forensic Evidence (CSAFE) and the L. Neil Williams professor of law at Duke University, where he directs the Wilson Center for Science and Justice. Stout is the CEO and president of the Houston Forensic Science Center. He is also a member of the CSAFE Strategic Advisory Board.

Read Garrett and Stout’s article at Slate.com: The Worsening Massachusetts Crime Lab Scandal is Just the Beginning.

Discover how CSAFE is building strong scientific foundations that enhance forensic science and technology practices. Check out CSAFE’s research areas at https://forensicstats.org/our-research/.

Webinar Q&A: Treatment of Inconclusive Results in Error Rates of Firearms Studies

Bullets

On Feb. 10, the Center for Statistics and Applications in Forensic Evidence (CSAFE) hosted the webinar, Treatment of Inconclusive Results in Error Rates of Firearms Studies. It was presented by Heike Hofmann, a professor and Kingland Faculty Fellow at Iowa State University; Susan VanderPlas, a research assistant professor at the University of Nebraska–Lincoln; and Alicia Carriquiry, CSAFE director and Distinguished Professor and President’s Chair in Statistics at Iowa State.

In the webinar, Hofmann, VanderPlas and Carriquiry revisited several Black Box studies that attempted to estimate the error rates of firearms examiners, investigating their treatment of inconclusive results. During the Q&A portion of the webinar, the presenters ran out of time to answer everyone’s questions. Hofmann, VanderPlas and Carriquiry have combined and rephrased the questions to address the essential topics raised during the Q&A. Their answers are below.

If you did not attend the webinar live, the recording is available at https://forensicstats.org/blog/portfolio/treatment-of-inconclusive-results-in-error-rates-of-firearm-studies/.


 

Is the inconclusive rate related to the study difficulty?

There is no doubt that we looked at several studies of differing difficulty, as well as different study designs, comparison methods and examiner populations. When we compare the AFTE error rate (which counts only eliminations of same-source comparisons and identifications of different-source comparisons) with the rate of inconclusive decisions, we see a clear difference between the studies conducted in Europe/U.K. and those conducted in North America.

Figure 1 from February 2021 Q&A Blog

The EU/U.K. studies were conducted (for the most part) to assess lab proficiency, and consequently they seem to have been constructed to distinguish good laboratories from excellent ones, so they do include harder comparisons. The more notable result isn’t the difference in error rates, which is relatively small; rather, the largest difference is in the proportion of inconclusives in different-source versus same-source comparisons. In the EU/U.K. studies, the proportion of inconclusives is similar for both types of comparisons. In the U.S./CA studies, the proportion of inconclusives for same-source comparisons is a fraction of the proportion for different-source comparisons.

If we think about what the study results should ideally look like, we might come up with something like this:

 

Figure 2 from February 2021 Webinar Q&A: Results of an ideal study.

In this figure, there are many different-source eliminations and same-source identifications. There are equally many same-source and different-source inconclusives, and in both cases erroneous decisions (same-source eliminations and different-source identifications) are relatively rare. The proportion of inconclusives might be greater or smaller depending on the study difficulty or examiner experience levels, and the proportion of different-source and same-source comparisons may be expected to vary somewhat depending on the study design (thus, the line down the center might shift to the left or the right). Ultimately, the entire study can be represented by this type of graphic showing the density of points in each region.

Figure 3 from February 2021 Webinar Q&A: Results visualized for several different studies.

When we look at the results of several studies, we see that none of them conform precisely to this expectation. As expected, the proportion of same-source and different-source decisions varies across the studies (Baldwin includes more different-source comparisons, while Mattijssen includes more same-source comparisons), and the proportion of inconclusive results differs, with more inconclusives in the top three studies relative to Keisler and Duez. However, the most notable difference is that the proportion of inconclusive results for different-source comparisons is much higher than the proportion for same-source comparisons across studies. This discrepancy is less noticeable, but still present, in Mattijssen (2020), which was primarily completed by EU-trained examiners. In Baldwin, Keisler and Duez, the proportion of different-source comparisons judged inconclusive makes the inconclusive category appear as an extension of the elimination category: the dots have approximately the same density, and the corresponding same-source inconclusive point density is so much lower that it is nearly unnoticeable in comparison.

The real story here seems to be that while more difficult studies do have slightly higher error rates (which is expected), the training, location and lab policies that influence examiner evaluations have a real impact on the proportion of inconclusive decisions reached. The EU/U.K. studies provide some evidence that the bias in inconclusive error rates demonstrated in our paper is a solvable problem.

Examiners often report a final answer of “inconclusive,” and this is correct according to the AFTE standard. Should inconclusives be allowed as a final answer?

From a statistical perspective, there is a mismatch between the state of reality (same source or different source) and the decision categories, which causes some difficulty when calculating error rates. We proposed multiple ways to handle this situation: using predictive probabilities, distinguishing between process error and examiner error, or reframing the decision as identification versus not-identification. Any of these options provides a much clearer interpretation of what an error is and of its relevance in legal settings.

In practice, we recognize that not all evidence collected will be suitable for concluding identification or elimination, due to several factors. These factors are often considered “process errors,” and examiners are trained to account for them and reach an inconclusive decision. We agree that this is a reasonable decision to make based on the circumstances. The issue with inconclusive decisions arises when results are presented in court: all of the errors that could contribute to the process are relevant there. Thus, it is important to report both the process error and the examiner-based (AFTE) error.
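
To make the distinction concrete, the short Python sketch below shows one way the two rates could be computed from a study’s counts. It assumes, as one reading of the distinction above, that the process error rate counts inconclusive decisions as errors of the overall process while the examiner (AFTE) error rate excludes them; the counts and function names are hypothetical and not taken from any particular study.

```python
# Minimal sketch: examiner (AFTE) error rate vs. process error rate.
# Assumption: the process error rate treats inconclusive decisions as errors
# of the overall process, while the examiner error rate excludes them.
# All counts below are hypothetical placeholders.

def examiner_error_rate(erroneous, conclusive):
    """Errors divided by conclusive decisions only (inconclusives excluded)."""
    return erroneous / conclusive

def process_error_rate(erroneous, inconclusive, total):
    """Errors plus inconclusives, divided by all decisions."""
    return (erroneous + inconclusive) / total

# Hypothetical same-source comparisons from a designed study
# (eliminations are the erroneous decisions in this case):
identifications, inconclusives, eliminations = 450, 40, 10
total = identifications + inconclusives + eliminations

print(f"examiner error rate: {examiner_error_rate(eliminations, identifications + eliminations):.2%}")
print(f"process error rate:  {process_error_rate(eliminations, inconclusives, total):.2%}")
```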

In some cases, however, the examiner may have noted many differences at the individual level but be uncomfortable making an elimination (sometimes because lab policies prohibit eliminations based on individual characteristics). That hesitation is an example of the bias we have demonstrated: when there is some evidence of similarity, examiners appear more willing to “bet” on an identification than they are to bet on an elimination given a similar amount of dissimilarity. This higher burden of proof has consequences for the overall error rates reported from these studies, as well as for the actual errors that may occur in the legal system itself.

How should courts prevent misused error rates?

The critical component to preventing misuse of error rates is to understand how error rates should be interpreted. Currently, most error rates reported include inconclusives in the denominator but not in the numerator. As we have demonstrated in our paper and this Q&A, this approach leads to error rates that are misleadingly low for the overall identification process and not actionable in a legal situation. Instead, courts should insist on predictive error rates: given the examiner’s decision, what is the probability that it resulted from a same-source or different-source comparison? These probabilities do not rely on inconclusives in the calculations and are relevant to the specific result presented by the examiner in the trial at hand.

What error rate should we use when?

The error rate we want is entirely dependent on the intended use:

  • In court, we should use predictive probabilities because they provide specific information which is relevant to the individual case under consideration.
  • In evaluating examiners, the AFTE error rates, which do not include inconclusives, may be much more useful: they identify examiner errors rather than errors that arise from how the evidence is recorded and collected. For labs, it is of primary concern that all of their examiners are adequately trained.

It’s very important to consider the context in which an error rate or probability is used and to calculate the error rate that is most appropriate for that context.

Why do you claim AFTE treats inconclusives as correct results?

The AFTE response to the PCAST report specifically discusses false identifications and false eliminations, with no discussion of inconclusive results. Given that this is a foundational dispute, the way AFTE presents these quantities in other literature is relevant, which is why we will demonstrate the problem with data pulled from AFTE’s own resources for error rate calculations.

One of the documents offered by AFTE as a resource for error rates is CTS Results Revisited: A Review and Recalculation of the Peterson and Markham Findings, by Stephen Bunch.

Table 1 from February 2021 Webinar Q&A

We will use the numbers on the first and second page of this document to illustrate the problem:

Bunch calculates the false-positive error rate as 12/1141 = 1.05% and the false-negative error rate as 17/965 = 1.76%. In both cases, the inconclusive decisions are included in the denominator (total evaluations) and not included in the numerator. This means that when reporting error rates, the inconclusive decisions are never counted as errors—implicitly, they are counted as correct in both cases. While he also reports the sensitivity, specificity and inconclusive rate, none of these terms are labeled as “errors,” which leads to the perception that the error rate for firearms examination is much lower than it should be.

Suppose we exclude inconclusives from both the numerator and the denominator. The false-positive error rate would then be 12/966 = 1.24%, and the false-negative error rate would be 17/923 = 1.84%. In both cases, excluding inconclusives yields a higher error rate than the reported figures, which demonstrates that the AFTE approach to inconclusives tends to produce misleadingly low error rates.
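
The arithmetic above can be checked directly. The short Python sketch below uses the counts quoted from Bunch; reading 966 and 923 as the different-source and same-source totals after removing inconclusive decisions is our assumption based on those figures.

```python
# Reproducing the error rate arithmetic from the Bunch counts quoted above.
# Assumed reading of the counts: 1141 different-source and 965 same-source
# evaluations in total; 966 and 923 remain after removing inconclusives.

false_identifications = 12  # errors on different-source comparisons
false_eliminations = 17     # errors on same-source comparisons

# As reported (inconclusives in the denominator only):
print(f"false positive: {false_identifications / 1141:.2%}")  # 1.05%
print(f"false negative: {false_eliminations / 965:.2%}")      # 1.76%

# Excluding inconclusives from numerator and denominator:
print(f"false positive: {false_identifications / 966:.2%}")   # 1.24%
print(f"false negative: {false_eliminations / 923:.2%}")      # 1.84%
```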

The current way errors are reported (when the reporter is being thorough) is to give the percentage of inconclusives in addition to the percentages of false eliminations and false identifications. Unfortunately, even when this reporting process is followed, the discrepancy in inconclusive rates between same-source and different-source comparisons is obscured. This hides a significant source of systematic bias.

Are studies representative of casework?

First, we can’t know the answer to this question because we can’t ever know the truth in casework. That is why we have to base everything we know on designed studies, where ground truth is known. So, we will never know whether the proportion of, for example, same-source and different-source comparisons in experiments is representative of casework.

What we can know, but do not yet know, is the percentage of examiner decisions that are inconclusive (or identifications, or eliminations). We are not aware of any studies that report this data for any lab or jurisdiction. As a result, we do not know whether the proportion of inconclusive decisions is similar in casework and in designed studies.

What we do know, however, is that the proportion of inconclusives is not constant between studies. In particular, there are much higher inconclusive rates for same-source comparisons in studies conducted in Europe and the U.K. These studies are intended to assess a lab’s skill and are much harder than designed error rate studies in the U.S. So, we know that the design and intent of a study do influence the inconclusive rate. More research in this area is needed.

One possibility for addressing the issue of different examiner behavior in studies versus casework is to implement widespread blind testing—testing in which the examiner is not aware they are participating in a study. The study materials would be set up to mimic evidence and the examiner would write their report as if it was an actual case. This would at least ensure that examiner behavior is similar in the study and the casework. However, this type of study is difficult to design and implement, which explains why it is not commonly done.

In one respect, studies are much harder than casework. In casework, it is much more likely that an elimination can be made on class characteristic mismatches alone. In designed studies, this is often not a scenario that is included. So, designed studies may be harder overall because they often examine consecutively manufactured (and thus more similar) firearms and toolmarks, all of which necessarily have the same class characteristics.

How do you get the predictive probability from a study?

It’s important to note that you can only get the predictive probability from a designed study, because you need the proportion of same-source and different-source comparisons as baseline information. These proportions are known only in designed studies, not in casework. We created a Google worksheet that can help calculate predictive probabilities as well as the examiner and process error rates. The worksheet is available here.
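
As an illustration of the kind of calculation such a worksheet performs, here is a minimal Python sketch, assuming the designed study reports a table of decisions broken down by known ground truth. All counts are hypothetical and are not drawn from any of the studies discussed here.

```python
# Minimal sketch: predictive probabilities from a designed study.
# Hypothetical counts of decisions by known ground truth; the study's
# known composition serves as the baseline, which is why this works
# only for designed studies and not for casework.

counts = {
    # decision: (same-source count, different-source count)
    "identification": (480, 5),
    "inconclusive": (40, 170),
    "elimination": (10, 520),
}

def predictive_probabilities(decision):
    """Return P(same source | decision) and P(different source | decision)."""
    same, different = counts[decision]
    total = same + different
    return same / total, different / total

for decision in counts:
    p_same, p_different = predictive_probabilities(decision)
    print(f"{decision:15s} P(same)={p_same:.3f}  P(different)={p_different:.3f}")
```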

Why not just exclude inconclusives from all calculations?

One reason is that inconclusive results are still reported in legal settings, but they are not equally likely when examining same-source and different-source evidence, which is informative. Given that an examiner reports an inconclusive result, the source is much more likely to be different than the same. By ignoring inconclusives entirely, we would be throwing out data that is informative. This argument has been made by Biedermann et al., but they did not take the assertion to its obvious conclusion.

What about lab policies that prohibit elimination on individual characteristics?

First, those policies are in direct conflict with the AFTE Range of Conclusions published at https://afte.org/about-us/what-is-afte/afte-range-of-conclusions. These guidelines list “significant disagreement of discernible class characteristics and/or individual characteristics” as a basis for an elimination. An interpretation by labs that does not allow eliminations based on individual characteristics should be addressed and clarified by AFTE.

Those policies also introduce bias into the examiner’s decision. As the studies we examined show, almost all inconclusive results stem from different-source comparisons. Some studies controlled for this policy by asking participants to follow the same rules, and even in these studies, the same bias against labeling same-source comparisons as inconclusive is present. This is true in Bunch and Murphy (conducted at the FBI lab, which has such a policy) and in Baldwin, a much larger and more heterogeneous study that asked examiners to make eliminations based on individual characteristic mismatches.

A finding of “no identification” could easily be misinterpreted by a non-firearms examiner, such as a juror or attorney, as an elimination.

Under the status quo, an inconclusive can easily be framed as an almost-identification; so, this ambiguity is already present in the system, and we rely on attorneys to frame the issue appropriately for the judge and/or jury. Under our proposal to eliminate inconclusives, we would also have to rely on attorneys to correctly contextualize the information presented by the examiner.

You say that inconclusive results occur more frequently when the comparison is actually different-source. Could a conviction occur based on an inconclusive result? Why is this an issue?

Probably not, unless the testimony was something like, “I see a lot of similarities, but not quite enough to establish an identification.” The framing of the inconclusive is important, and at the moment, there is no uniformity in how these results are reported.

Algorithmic Evidence in Criminal Trials

Computer software source code on screen

Guest Blog

Kori Khan
Assistant Professor
Department of Statistics, Iowa State University


 

We are currently in an era where machine learning and algorithms offer novel approaches to solving problems both new and old. Algorithmic approaches are swiftly being adopted for a range of issues: from making hiring decisions for private companies to sentencing criminal defendants. At the same time, researchers and legislators are struggling with how to evaluate and regulate such approaches.

The regulation of algorithmic output becomes simultaneously more complex and pressing in the context of the American criminal justice system. U.S. courts are regularly admitting evidence generated from algorithms in criminal cases. This is perhaps unsurprising given the permissive standards for admission of evidence in American criminal trials. Once admitted, however, the algorithms used to generate the evidence—which are often proprietary or designed for litigation—present a unique challenge. Attorneys and judges face questions about how to evaluate algorithmic output when a person’s liberty hangs in the balance. Devising answers to these questions inevitably involves delving into an increasingly contentious issue—access to the source code.

In criminal courts across the country, it appears most criminal defendants have been denied access to the source code of algorithms used to produce evidence against them. I write, “it appears,” because here, like in most areas of the law, empirical research into legal trends is limited to case studies or observations about cases that have drawn media attention. For these cases, the reasons for denying a criminal defendant access to the source code have not been consistent. Some decisions have pointed out that the prosecution does not own the source code, and therefore is not required to produce it. Others implicitly acknowledge that the prosecution could be required to produce the source code and instead find that the defendant has not shown a need for access to the source code. It is worth emphasizing that these decisions have not found that the defendant does not need access to source code; but rather, that the defendant has failed to sufficiently establish that need. The underlying message in many of these decisions, whether implicit or explicit, is that there will be cases, perhaps quite similar to the case being considered, where a defendant will require access to source code to mount an effective defense. The question of how to handle access to the code in such cases does not have a clear answer.

Legal scholars are scrambling to provide guidance. Loosely speaking, proposals can be categorized into two groups: those that rely on existing legal frameworks and those that suggest a new framework might be necessary. For the former category, the heart of the issue is the tension between the intellectual property rights of the algorithm’s producer and the defendant’s constitutional rights. On the one hand, the producers of algorithms often have a commercial interest in ensuring that competitors do not have access to the source code. On the other hand, criminal defendants have the right to question the weight of the evidence presented in court.

There is a range of opinions on how to balance these competing interests, running along a spectrum from always allowing defendants access to the source code to rarely allowing it; most fall somewhere in the middle. Some have suggested “front-end” measures in which lawmakers establish protocols to ensure the accuracy of algorithmic output before its use in criminal courts. These measures might include escrowing the source code, similar to how some states have handled voting technology. Within the courtroom, suggestions for protecting the producers of code include utilizing traditional measures, such as the protective orders commonly used in trade secret suits. Other scholars have proposed that a defendant might not always need access to the source code; for example, some suggest that if the producer of the algorithm is willing to run tests constructed by the defense team, this may be sufficient in many cases. Most of these suggestions make two key assumptions: 1) either legislators or defense attorneys should be able to devise standards to identify the cases for which access to source code is necessary to evaluate an algorithm, and 2) legislators or defense attorneys can devise these standards without access to the source code themselves.

These assumptions require legislators and defense attorneys to answer questions that the scientific community itself cannot answer. Outside of the legal setting, researchers are faced with a similar problem: how can we evaluate scientific findings that rely on computational research? For the scientific community, the answer for the moment is that we are not sure. There is evidence that the traditional methods of peer review are inadequate. In response, academic journals and institutes have begun to require that researchers share their source code and any relevant data. This is increasingly viewed as a minimal standard to begin to evaluate computational research, including algorithmic approaches. However, just as within the legal community, the scientific community has no clear answers for how to handle privacy or proprietary interests in the evaluation process.

In the past, forensic science methods used in criminal trials have largely been developed and evaluated outside the purview of the larger scientific community, often on a case-by-case basis. As both the legal and scientific communities face the challenge of regulating algorithms, there is an opportunity to expand existing interdisciplinary forums and create new ones.

Learn about source code in criminal trials by attending the Source Code on Trial Symposium on March 12 from 2:30 to 4 p.m. Register at https://forensicstats.org/source-code-on-trial-symposium/.

 


 

Publications and Websites Used in This Blog:

How AI Can Remove Bias From The Hiring Process And Promote Diversity And Inclusion

Equivant, Northpoint Suite Risk Need Assessments

The Case for Open Computer Programs

Using AI to Make Hiring Decisions? Prepare for EEOC Scrutiny

Source Code, Wikipedia

The People of the State of New York Against Donsha Carter, Defendant

Commonwealth of Pennsylvania Versus Jake Knight, Appellant

The New Forensics: Criminal Justice, False Certainty, and the Second Generation of Scientific Evidence

Convicted by Code

Machine Testimony

Elections Code, California Legislative Information

Trade Secret Policy, United States Patent and Trademark Office

Computer Source Code: A Source of the Growing Controversy Over the Reliability of Automated Forensic Techniques

Artificial Intelligence Faces Reproducibility Crisis

Author Guidelines, Journal of the American Statistical Association

Reproducible Research in Computational Science