The American Society of Crime Laboratory Directors (ASCLD) Forensic Research Committee (FRC) works to identify the research, development, technology and evaluation needs and priorities for the forensic science community. The FRC has several initiatives and resources available on its website to aid both researchers and practitioners. Below, we highlight a few of those resources.
The FRC collaboration hub hosts the Researcher-Practitioner Collaboration Directory. The directory helps connect researchers with ongoing projects to practitioners who are willing to participate in the studies. The searchable directory includes descriptions of each project, including an abstract and the estimated participant time involved. Researchers can easily submit their projects for inclusion in the directory by completing an online form.
The FRC hosts a virtual “Lightning Talks” series to highlight new and emerging research in all areas of forensic science. Each episode features three short talks given by practitioners, researchers or students. Previous Lightning Talks are archived on FRC’s YouTube page.
Laboratories and Educators Alliance Program (LEAP)
LEAP facilitates collaborative research between academia and forensic science laboratories. This program identifies forensic science needs and provides a platform for laboratories, researchers and students to seek projects aligning with their mutual research capabilities. The FRC website includes a map of LEAP partners, a short video explaining LEAP and sign-up forms for crime labs and universities. LEAP is a joint effort between ASCLD and the Council of Forensic Science Educators (COFSE).
Validation and Evaluation Repository
The Validation and Evaluation Repository is a list of unique validations and evaluations conducted by forensic labs and universities. ASCLD’s summary of the repository states, “It is ASCLD’s hope that this listing will foster communication and reduce unnecessary repetition of validations and evaluations to benefit the forensic community.” The searchable repository is available at https://www.ascld.org/validation-evaluation-repository/
Research Executive Summaries
The Future Forensics Subcommittee of the FRC has initiated the publication of brief executive summaries of the recent literature within the forensic sciences. The summaries are written by ASCLD members and are meant to provide a brief overview of noteworthy publications and trends in the literature. Currently, the summaries include reviews in the areas of fingermarks, controlled substances, paint and glass evidence, forensic toxicology, forensic biology, gunshot residue analysis and firearms and toolmarks.
A new study assessing the accuracy and reproducibility of practicing bloodstain pattern analysts’ conclusions will be the focus of an upcoming Center for Statistics and Applications in Forensic Evidence (CSAFE) webinar.
From the study’s abstract: “Although the analysis of bloodstain pattern evidence left at crime scenes relies on the expert opinions of bloodstain pattern analysts, the accuracy and reproducibility of these conclusions have never been rigorously evaluated at a large scale. We investigated conclusions made by 75 practicing bloodstain pattern analysts on 192 bloodstain patterns selected to be broadly representative of operational casework, resulting in 33,005 responses to prompts and 1,760 short text responses. Our results show that conclusions were often erroneous and often contradicted other analysts. On samples with known causes, 11.2% of responses were erroneous. The results show limited reproducibility of conclusions: 7.8% of responses contradicted other analysts. The disagreements with respect to the meaning and usage of BPA terminology and classifications suggest a need for improved standards. Both semantic differences and contradictory interpretations contributed to errors and disagreements, which could have serious implications if they occurred in casework.”
The study was supported by a grant from the U.S. National Institute of Justice. Kish and Winer are members of CSAFE’s Research and Technology Transfer Advisory Board.
The CSAFE Fall 2021 Webinar Series is sponsored by the National Institute of Standards and Technology (NIST) through cooperative agreement 70NANB20H019.
CSAFE researchers are also undertaking projects to develop objective analytic approaches to enhance the practice of bloodstain pattern analysis. Learn more about CSAFE’s BPA projects at forensicstats.org/blood-pattern-analysis.
Plan to attend the Organization of Scientific Area Committees (OSAC) for Forensic Science Public Update Meeting on Sept. 29, 2021, from 1–4:30 p.m. EDT.
This virtual event will feature presentations from the chairs of OSAC’s Forensic Science Standards Board and seven Scientific Area Committees. Each presenter will describe the standards their unit is working on and discuss research gaps, challenges, and priorities for the coming year. Attendees will have the opportunity to ask questions and provide feedback. There is no fee to attend, but registration is required.
OSAC works to strengthen forensic science by facilitating the development of technically sound standards and promoting the use of those standards by the forensic science community. OSAC’s 800-plus members and affiliates draft and evaluate forensic science standards through a transparent, consensus-based process that allows for participation and comment by all stakeholders. For more information about OSAC and its programs, visit https://www.nist.gov/osac.
The meeting agenda and registration information are available on the OSAC website.
On Feb. 10, the Center for Statistics and Applications in Forensic Evidence (CSAFE) hosted the webinar Treatment of Inconclusive Results in Error Rates of Firearms Studies. It was presented by Heike Hofmann, a professor and Kingland Faculty Fellow at Iowa State University; Susan VanderPlas, a research assistant professor at the University of Nebraska-Lincoln; and Alicia Carriquiry, CSAFE director and Distinguished Professor and President’s Chair in Statistics at Iowa State.
In the webinar, Hofmann, VanderPlas and Carriquiry revisited several Black Box studies that attempted to estimate the error rates of firearms examiners, investigating their treatment of inconclusive results. During the Q&A portion of the webinar, the presenters ran out of time to answer everyone’s questions. Hofmann, VanderPlas and Carriquiry have since combined and rephrased the remaining questions to cover the essential topics raised during the Q&A. Their answers are below.
Is the inconclusive rate related to the study difficulty?
The studies we examined certainly varied in difficulty, as well as in study design, comparison methods and examiner populations. When we compare the AFTE error rate (counting only eliminations of same-source comparisons and identifications of different-source comparisons) with the rate of inconclusive decisions, we see a clear difference between the studies conducted in Europe/U.K. and the studies conducted in North America.
The EU/U.K. studies were conducted (for the most part) to assess lab proficiency, and consequently, they seem to have been constructed to distinguish good laboratories from excellent laboratories. So, they do include harder comparisons. The more notable result, however, is not the difference in error rates, which is relatively small, but the difference in the proportion of inconclusives in different-source and same-source comparisons. In the EU/U.K. studies, the proportion of inconclusives is similar for both types of comparisons. In the U.S./CA studies, the proportion of inconclusives for same-source comparisons is a fraction of the proportion for different-source comparisons.
If we think about what the study results should ideally look like, we might come up with something like this:
In this figure, there are many different-source eliminations and same-source identifications. There are equally many same-source and different-source inconclusives, and in both cases, erroneous decisions (same-source eliminations and different-source identifications) are relatively rare. The proportion of inconclusives might be greater or smaller depending on the study difficulty or examiner experience levels, and the proportions of different-source and same-source identifications may be expected to vary somewhat depending on the study design (thus, the line down the center might shift to the left or the right). Ultimately, the entire study can be represented by this type of graphic showing the density of points in each region.
When we look at the results of several studies, we see that none of them conform precisely to this expectation. As expected, the proportions of same-source and different-source comparisons vary across the studies (Baldwin includes more different-source comparisons, while Mattijssen includes more same-source comparisons), and the proportion of inconclusive results differs, with more inconclusives in the top three studies relative to Keisler and Duez. However, the most notable difference is that the proportion of inconclusive results for different-source comparisons is much higher than the proportion for same-source comparisons across studies. This discrepancy is less noticeable but still present for Mattijssen (2020), which was primarily completed by EU-trained examiners. In Baldwin, Keisler and Duez, the proportion of different-source comparisons judged inconclusive makes the inconclusive category appear as an extension of the elimination category—the dots have approximately the same density, and the corresponding same-source inconclusive point density is so much lower that it is nearly unnoticeable in comparison.
The real story here seems to be that while more difficult studies do seem to have slightly higher error rates (which is expected), the training, location and lab policies that influence examiner evaluations have a real impact on the proportion of inconclusive decisions that are reached. The EU/U.K. studies provide some evidence that the bias in inconclusive error rates demonstrated in our paper is a solvable problem.
Examiners often report a final answer of “inconclusive,” and this is correct according to the AFTE standard. Should inconclusives be allowed as a final answer?
From a statistical perspective, there is a mismatch between the state of reality (same source or different source) and the decision categories. This causes some difficulty when calculating error rates. We proposed multiple ways to handle this situation: using predictive probabilities, distinguishing between process error and examiner error, or reframing the decision as one of identification versus not-identification. Any of these options provides a much clearer interpretation of what an error is and of its relevance in legal settings.
In practice, we recognize that not all evidence collected will be suitable to conclude identification or elimination due to several factors. These factors are often considered “process errors,” and examiners are trained to account for these errors and reach an inconclusive decision. We agree that this is a reasonable decision to make based on the circumstances. The issue with inconclusive decisions arises when results are presented in court: there, all of the errors that could contribute to the process are relevant. Thus, it is important to report both the process error and the examiner-based (AFTE) error.
In some cases, however, the examiner may have noted many differences at the individual level but be uncomfortable making an elimination (in some cases, due to lab policies prohibiting eliminations based on individual characteristics). This hesitation is an example of the bias we have demonstrated: when there is some evidence of similarity, examiners appear more willing to “bet” on an identification than they are to bet on an elimination based on a similar amount of dissimilarity. This higher burden of proof for eliminations has consequences for the overall error rates reported from these studies, as well as for the actual errors that may occur in the legal system itself.
How should courts prevent misused error rates?
The critical component to preventing misuse of error rates is to understand how error rates should be interpreted. Currently, most error rates reported include inconclusives in the denominator but not in the numerator. As we have demonstrated in our paper and this Q&A, this approach leads to error rates that are misleadingly low for the overall identification process and not actionable in a legal situation. Instead, courts should insist on predictive error rates: given the examiner’s decision, what is the probability that it resulted from a same-source or different-source comparison? These probabilities do not rely on inconclusives in the calculations and are relevant to the specific result presented by the examiner in the trial at hand.
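To make the proposal concrete, here is a minimal sketch (in Python, with hypothetical counts, not data from any of the studies discussed) of how predictive probabilities would be computed from a designed study in which ground truth is known:

```python
# Illustrative sketch with made-up counts: a predictive probability asks
# "given the examiner's decision, how likely is each ground truth?"
# rather than "given the ground truth, how often does the examiner err?"

def predictive_probs(counts):
    """counts[source][decision]: tallies from a designed study with known ground truth."""
    decisions = {"identification", "inconclusive", "elimination"}
    out = {}
    for decision in decisions:
        total = sum(counts[src][decision] for src in counts)
        out[decision] = {src: counts[src][decision] / total for src in counts}
    return out

# Hypothetical study: 1,000 same-source and 1,000 different-source comparisons.
study = {
    "same":      {"identification": 850, "inconclusive": 100, "elimination": 50},
    "different": {"identification": 10,  "inconclusive": 390, "elimination": 600},
}

probs = predictive_probs(study)
# P(same source | examiner said "identification") = 850 / (850 + 10)
print(round(probs["identification"]["same"], 3))   # 0.988
# P(different source | examiner said "inconclusive") = 390 / (390 + 100)
print(round(probs["inconclusive"]["different"], 3))  # 0.796
```

Because the calculation conditions on the examiner’s reported decision, inconclusives never need to be scored as correct or erroneous: they simply form their own conditioning category, which is exactly the property the authors argue makes these rates actionable in court.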
What error rate should we use when?
The error rate we want is entirely dependent on the intended use:
In court, we should use predictive probabilities because they provide specific information which is relevant to the individual case under consideration.
In evaluating examiners, the AFTE error rates, which do not include inconclusives, may be much more useful—they identify examiner errors rather than errors arising from the circumstances in which the evidence is recorded and collected. For labs, it is of paramount concern that all of their examiners are adequately trained.
It’s very important to consider the context in which an error rate or probability will be used and to calculate the rate most appropriate for that context.
Why do you claim AFTE treats inconclusives as correct results?
AFTE’s response to the PCAST report specifically discusses false identifications and false eliminations, with no discussion of inconclusive results. Given that this is a foundational dispute, the way AFTE presents these quantities in other literature is relevant, which is why we will demonstrate the problem with data pulled from AFTE’s resources for error rate calculations.
We will use the numbers on the first and second page of this document to illustrate the problem:
Bunch calculates the false-positive error rate as 12/1141 = 1.05% and the false-negative error rate as 17/965 = 1.76%. In both cases, the inconclusive decisions are included in the denominator (total evaluations) and not included in the numerator. This means that when reporting error rates, the inconclusive decisions are never counted as errors—implicitly, they are counted as correct in both cases. While he also reports the sensitivity, specificity and inconclusive rate, none of these terms are labeled as “errors,” which leads to the perception that the error rate for firearms examination is much lower than it should be.
Suppose we instead exclude inconclusives from both the numerator and the denominator. The false-positive error rate using this approach would be 12/966 = 1.24%, and the false-negative error rate would be 17/923 = 1.84%. In both cases, this yields a higher error rate, which demonstrates that the AFTE approach to inconclusives produces misleadingly low error rates.
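The arithmetic above can be checked directly. This short sketch recomputes both conventions from Bunch’s reported counts (the inconclusive counts, 175 and 42, are implied by the denominators 1141 vs. 966 and 965 vs. 923):

```python
# Bunch's counts as reported above: compare the AFTE-style convention
# (inconclusives in the denominator only) with excluding them entirely.

ds_total, ds_inconclusive, false_pos = 1141, 175, 12   # different-source comparisons
ss_total, ss_inconclusive, false_neg = 965, 42, 17     # same-source comparisons

# AFTE-style: inconclusives count toward the denominator but never the numerator.
fp_afte = false_pos / ds_total            # 12/1141
fn_afte = false_neg / ss_total            # 17/965

# Excluding inconclusives from both numerator and denominator.
fp_excl = false_pos / (ds_total - ds_inconclusive)   # 12/966
fn_excl = false_neg / (ss_total - ss_inconclusive)   # 17/923

for name, rate in [("FP (AFTE)", fp_afte), ("FN (AFTE)", fn_afte),
                   ("FP (excl.)", fp_excl), ("FN (excl.)", fn_excl)]:
    print(f"{name}: {rate:.2%}")
# FP (AFTE): 1.05%   FN (AFTE): 1.76%
# FP (excl.): 1.24%  FN (excl.): 1.84%
```

Both error rates rise once inconclusives stop being silently counted as correct, which is the point of the comparison.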
The current way errors are reported (when the reporter is being thorough) is to report the percentage of inconclusives in addition to the percentage of false eliminations and false identifications. Unfortunately, when this reporting process is followed, the discrepancy in inconclusive rates between same-source and different-source comparisons is obscured. This hides a significant source of systematic bias.
Are studies representative of casework?
First, we can’t know the answer to this question, because we can never know the ground truth in casework. That is why we must base everything we know on designed studies, where ground truth is known. So, we will never know whether the proportion of, e.g., same-source and different-source comparisons in experiments is representative of casework.
What we can know, but do not yet know, is the percentage of decisions that examiners reach that are inconclusive (or identification or eliminations). We are not aware of any studies which report this data for any lab or jurisdiction. As a result, we do not know whether the proportion of inconclusive decisions is similar in casework and designed studies.
What we do know, however, is that the proportion of inconclusives is not constant between studies. In particular, there are much higher inconclusive rates for same-source comparisons in studies conducted in Europe and the U.K. These studies are intended to assess a lab’s skill and are much harder than designed error rate studies in the U.S. So, we know that the design and intent of a study do influence the inconclusive rate. More research in this area is needed.
One possibility for addressing the issue of different examiner behavior in studies versus casework is to implement widespread blind testing—testing in which the examiner is not aware they are participating in a study. The study materials would be set up to mimic evidence, and the examiner would write their report as if it were an actual case. This would at least ensure that examiner behavior is similar in the study and the casework. However, this type of study is difficult to design and implement, which explains why it is not commonly done.
In one respect, studies are much harder than casework. In casework, it is much more likely that an elimination can be made on class characteristic mismatches alone. In designed studies, this is often not a scenario that is included. So, designed studies may be harder overall because they often examine consecutively manufactured (and thus more similar) firearms and toolmarks, all of which necessarily have the same class characteristics.
How do you get the predictive probability from a study?
It’s important to note that you can only get the predictive probability from a designed study. This is because you need the proportion of same-source and different-source comparisons as baseline information. These proportions are only known in designed studies and are not at all known in casework. We created a Google worksheet that can help calculate predictive probabilities and the examiner and process error rates. The worksheet is available here.
Why not just exclude inconclusives from all calculations?
One reason is that inconclusive results are still reported in legal settings, and they are not equally likely when examining same-source and different-source evidence—which is itself informative. Given that an examiner reports an inconclusive result, the source is much more likely to be different than the same. By ignoring inconclusives entirely, we would be throwing out informative data. This argument has been made by Biedermann et al., but they did not take the assertion to its obvious conclusion.
What about lab policies that prohibit elimination on individual characteristics?
First, those policies are in direct conflict with the AFTE range of conclusions as published at https://afte.org/about-us/what-is-afte/afte-range-of-conclusions. These guidelines specify “significant disagreement of discernible class characteristics and/or individual characteristics” as a reason for an elimination. An interpretation by labs that does not allow elimination based on individual characteristics should be addressed and clarified by AFTE.
Those policies also introduce bias into the examiner’s decision. As can be seen from the rate at which inconclusive results stem from different-source comparisons in these studies, almost all inconclusive results come from different-source comparisons. Some studies controlled for this policy by asking participants to follow the same rules, and even in these studies, the same bias against labeling same-source comparisons as inconclusive is present. This is true in Bunch and Murphy (conducted at the FBI lab, which has such a policy) and is also true in Baldwin, a much larger and more heterogeneous study that requested examiners make eliminations based on individual characteristic mismatches.
A finding of “no identification” could easily be misinterpreted by a non-firearms examiner, such as a juror or attorney, as an elimination.
Under the status quo, an inconclusive can easily be framed as an almost-identification; so, this ambiguity is already present in the system, and we rely on attorneys to frame the issue appropriately for the judge and/or jury. Under our proposal to eliminate inconclusives, we would also have to rely on attorneys to correctly contextualize the information presented by the examiner.
You say that inconclusive results occur more frequently when the conclusion is different-source. Could a conviction occur based on an inconclusive result? Why is this an issue?
Probably not, unless the testimony was something like, “I see a lot of similarities, but not quite enough to establish an identification.” The framing of the inconclusive is important, and at the moment, there is no uniformity in how these results are reported.
The Organization of Scientific Area Committees (OSAC) for Forensic Science, in partnership with the Association of Firearm and Tool Mark Examiners (AFTE), has just released a process map that describes the process that most firearms examiners use when analyzing evidence. The Firearms Process Map provides details about the procedures, methods and decision points most frequently encountered in firearms examination.
From the OSAC press release:
“This map can benefit the firearm discipline by providing a behind-the-scenes perspective into the various components and complexities involved in the firearms examination process. It can also be used to identify best practices, reduce errors, assist in training new examiners and highlight areas where further research or standardization would be beneficial.”
The Firearms Process Map was developed by the National Institute of Standards and Technology (NIST) Forensic Science Research Program through a collaboration with OSAC’s Firearms & Toolmarks Subcommittee and the Association of Firearm and Tool Mark Examiners (AFTE).
The exciting role of forensic scientist combines the power of observation, inference and research-based analysis to fight crime. From identifying the time of death to taking a closer look at fingerprints found at the scene, these scientists play an essential role in forensic examinations and linking suspects to specific evidence.
The expert training and education of different types of forensic scientists is key to the investigation process and trial proceedings. Are you interested in joining the field? The U.S. Bureau of Labor Statistics anticipates jobs for forensic scientists will grow at twice the anticipated rate for other occupations, with a 17 percent increase between 2016 and 2026.
Tips on Preparing to Become a Forensic Scientist
A forensic science job requires a minimum of a four-year bachelor’s degree in a field such as biology, chemistry or forensic science. Professionals recommend students seek out the following educational experiences to prepare for careers as forensic investigators.
Search for a program with a strong academic core in natural sciences and math like biochemistry, toxicology, analytical chemistry and instrumental analysis.
Obtain a thorough grounding in laboratory procedures and the use of scientific instruments.
Build technical skills by taking courses in criminal justice, evidence handling and ethics.
Get acquainted with the criminal justice system and its processes through courses in criminology.
Develop strong written and oral communication skills to improve dialogue with law enforcement or explain findings to a judge and jury.
Seek out opportunities to gain additional hands-on experience through forensic science-related internships.
A Sneak Peek at an Advanced Degree
Students interested in jobs such as laboratory directors, professors or a specialist role can pursue advanced degrees. During a graduate program, you can choose a specialty such as ballistics, digital evidence or toxicology. In addition to classwork, master’s and Ph.D. students develop advanced skills in the laboratory.
A Look at Continuing Education and Certifications
Education for the forensic scientist continues after the job begins with additional employer training. Certifications in various specialties such as blood pattern analysis, forensic photography and latent print analysis are available from organizations such as the International Association for Identification.
Impacting Society with a Career in Forensics
CSAFE offers students interested in pursuing forensic science careers the opportunity to discover how statistics apply to forensic evidence analysis. Learn more about our hands-on experiences for graduate and undergraduate students on our Forensic Education page and see how one student’s CSAFE research is preparing him for his dream job of DNA analyst.
Forensic science is a rigorous and demanding subject, but students committed to academic work and practical experience can stand out amongst other job applicants. Students can look forward to a gratifying career that contributes to the fair administration of justice.
OSAC Registry standards define minimum requirements, best practices, scientific protocols and other guidance to help ensure that the results of forensic analysis are reliable and reproducible.
OSAC, through the National Institute of Standards and Technology (NIST), has entered into a contract with ASTM International that gives 30,000 public criminal justice agencies free access to standards published under ASTM Technical Committee E30 on Forensic Science. To access these standards, click the green “ASTM Standards Access” button on OSAC’s Access to Standards webpage to enter the ASTM Compass website.
How often have you sat in a classroom, half-listening to a teacher lecture, feeling uninterested and not engaged? Many of us can relate to this type of often boring, idle instruction.
While some students may find long lectures on forensic statistics dry, a flipped-classroom approach can make statistics and its real-world applications to criminal investigations exciting and fun for all students. Try using this method to transform students from simply receptors of information to active participants in their learning.
Understanding the Flipped Classroom
Focusing on collaborative learning and greater student control, the flipped classroom reverses the traditional model of teaching, devoting class time not to lectures but to problem-solving, projects and discussions.
Dr. Simon Cole, CSAFE researcher and professor at the University of California, Irvine, is a strong believer in the flipped classroom, using it to teach his undergraduate course “Forensic Science, Law and Society.” He explains the theory behind its success.
“The flipped classroom is the idea that currently we use the classroom for content delivery such as lecturing or video and expect the students to solve problems at home through homework, writing exercises or tests. The flipped classroom says that’s exactly wrong,” said Cole. “The thing about lectures is that it’s so passive. Students aren’t required to do anything. Lecturing is not the best way for students to learn; they need to be active with group exercises and solving problems.”
The flipped classroom first exposes students to new material outside of class through reading or lecture videos. Class time then becomes an opportunity for students to process what they’ve learned through time for inquiry and application with immediate feedback from peers and the instructor.
Advantages to Flipped Classroom Learning
Promotes deeper learning through in-class opportunities that emphasize higher-level cognitive functions
Gives students more control through freedom to learn at their own pace at home
Increases student-centered learning and collaboration where students teach and learn concepts from each other with instructor guidance
Enables instructors to better assess student understanding and adjust teaching methods accordingly
General Strategies to Implement
Cole paints a picture of the traditional classroom. “You have a huge room of students where 10 percent sit in the front and are working hard, 20 percent are in the back shopping on their computers, and many in the middle who are kind of confused and having trouble keeping up with the ones in the front,” he said. Cole recognizes that as a lecturer it’s hard to keep yourself from teaching only to the students in the front. But the flipped classroom changes that.
Cole recommends that instructors move about the room, asking a variety of students questions. He suggests potentially cold calling on students and staying with them until they answer the question.
“I force myself to not let all the students in the front answer the questions, and I learned much more about what the students were comprehending,” Cole said. “What I learned was that there were things that I had taught and lectured about for 10 minutes, and it took them 2 or 3 weeks to understand that concept. So I slowed down and didn’t move on until I called on someone at random and they could explain it back to me.”
Another suggestion is to shorten lectures. “All the studies say that no one can listen for more than 12 minutes, not even professors,” Cole said. He advises chunking lectures into mini-lectures delivered via podcast, allowing students more time to digest the material.
Specific Applications of the Flipped Classroom to Forensic Statistics
“The flipped classroom works well with teaching forensic statistics because you can focus on statistical problems and slowly work through them together,” Cole said. “I have some very simple exercises with likelihood ratios, such as: state the probability that you think it’s going to rain tomorrow, and state how much you hate getting wet. Now show that you can plug those numbers into this equation, and that will tell you your likelihood ratio and your utility function.” Through this hands-on approach, students actively interact with key statistical concepts, promoting a deeper understanding of their importance to criminal investigations.
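Cole’s rain exercise is easy to sketch in code. The numbers below are made up for illustration, and only the likelihood-ratio step is shown (the utility function he mentions, how much you hate getting wet, would be a separate input):

```python
# Hypothetical classroom exercise: combine a prior probability of rain
# with a likelihood ratio to get a posterior probability via odds form.

p_rain = 0.30                       # "probability you think it will rain tomorrow"
prior_odds = p_rain / (1 - p_rain)  # convert probability to odds (0.30 -> ~0.43)

likelihood_ratio = 4.0  # made-up: the evidence is 4x more likely if it will rain

posterior_odds = prior_odds * likelihood_ratio          # Bayes' rule in odds form
posterior_prob = posterior_odds / (1 + posterior_odds)  # convert back to probability

print(round(posterior_prob, 2))  # 0.63
```

Students can vary the prior and the likelihood ratio to see how each drives the posterior, which is the intuition the exercise is meant to build.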
CSAFE is committed to mentoring the next generation of forensic scientists, researchers, law enforcement agents, lawyers, judges and more. Our innovative approaches to enhancing education for students across the country are paving the way for new talent committed to the fair administration of justice.
The CSAFE undergraduate course “Forensic Science, Statistics and Law” will be taught again in Fall 2018. Cole is looking forward to utilizing the new UCI Anteater Learning Pavilion, featuring seating designed to facilitate flipped classroom collaborative work.
Learn more in the CSAFE news section, and explore additional undergraduate opportunities in our education center. For questions on how you can implement these techniques in your classroom, please don’t hesitate to contact us.
One of the core research areas at CSAFE is training and education. The next generation of forensic scientists, laboratory technicians and other practitioners in the forensic community is vital to the innovation and integrity of the forensic field and to our country’s justice system overall. We are proud to partner with institutions that are preparing this next generation of forensic scientists, and we look forward to collaborating with them to advance the forensic sciences, from practicing forensic scientists to those aspiring to join the field.
Albany State University offers a Bachelor of Science in Forensic Science within a challenging academic environment. Its program is accredited by the Forensic Science Education Program Accreditation Commission of the American Academy of Forensic Sciences. Students receive fundamental scientific knowledge, laboratory and analytical skills, communication skills and ethical principles through their studies and hands-on experiences. The program integrates research activities focused on forensic areas such as programming, data analysis, real-time instrument control and project planning.
Students should consider the forensic BS program at Albany State University if they are seeking careers in:
In collaboration with CSAFE, Albany State University forensic science faculty are working to elevate their program to even greater heights and to advance the overall level of forensics education in the country.
The Bachelor of Science program in Forensic Science at Fayetteville State University helps students develop their technical skills along with foundational science and laboratory problem-solving skills. The program covers areas such as DNA analysis, forensic chemistry and trace evidence. Students also learn how to prepare reports, document their findings and laboratory techniques, and communicate their results. Upon graduation, students are prepared to contribute to a modern crime laboratory — as well as help advance the forensic science community overall.
Students can choose between one of two concentrations:
In collaboration with CSAFE, the professors and faculty in the forensics program at Fayetteville State University aspire to provide their students with the latest training in forensic techniques and models, and to foster innovation among its students.
The forensic science program offered at Eastern New Mexico University develops the technical talents of students as well as their ability to critically think and actively synthesize information. The program adheres to the Forensic Science Education Programs Accreditation Commission standards. Students will learn from practitioners at the local, state and national levels, and they will gain both theoretical and practical experience within classrooms and laboratories.
Within the Bachelor of Science in Forensic Science program, students can concentrate their studies in one of three emphasis areas:
The school has plans to add digital forensics in the near future, and its collaboration with CSAFE will help it better achieve its goal by staying on the cutting edge of digital forensic practices.
Does your school’s forensic science program have a passion for the advancement of the forensic science industry? Do you want to not only stay on the cutting edge of forensic technology — but help lead it? Contact CSAFE today.