The Center for Statistics and Applications in Forensic Evidence (CSAFE) has announced four webinars for its spring 2022 webinar series. The series begins on Feb. 17 and will continue through May 10. The webinars will give participants an in-depth look at current forensic science research.
CSAFE is also offering a short course on the concepts and practices of statistical sampling. Sampling for Forensic Practitioners is a three-part short course that focuses on populations, sampling frames, sampling methods and geometric sampling. The emphasis will be on understanding how these methods are used to aid and enhance current forensic science practices. The course meets 12–2 p.m. Central time on March 25, April 1 and April 8. The two-hour sessions will be led by Alicia Carriquiry, CSAFE director and Distinguished Professor and President’s Chair in Statistics at Iowa State University.
The webinars and short course are free and open to the public, and researchers, collaborators and members of the broader forensics and statistics communities are especially encouraged to attend. Each 60-minute webinar will allow for discussion and questions.
To register for the webinars or short course, visit https://forensicstats.org/events. Each event will be recorded and available for viewing later at https://forensicstats.org. For questions or more information on the series, contact csafe@iastate.edu.
The spring webinar series is sponsored by the National Institute of Standards and Technology (NIST) through cooperative agreement 70NANB20H019.
WEBINAR TOPICS AND DATES
Improving Forensic Decision Making: A Human-Cognitive Perspective
Feb. 17, 12–1 p.m. CST
Itiel Dror
Cognitive Neuroscience Researcher, University College London
Humans play a critical role in forensic decision making. Drawing upon classic cognitive and psychological research on factors that influence and underpin expert decision making, this webinar will show the weaknesses and vulnerabilities in forensic decision making. Dror will also propose a broad and versatile approach to strengthening forensic expert decisions.
Modeling And iNventory of Tread Impression System (MANTIS): The Development, Deployment and Application of an Active Footwear Data Collection System
March 24, 11 a.m.–noon CDT
Richard Stone
Associate Professor, Iowa State University
Susan Vanderplas
Research Assistant Professor, University of Nebraska–Lincoln
Stone and Vanderplas will detail the development, capabilities and successful deployment of the Modeling And iNventory of Tread Impression (MANTIS) system. The MANTIS optics scanner captures real-time video of a person’s gait as the shoe comes in contact with the cover plate, synchronizing a series of video cameras (each capturing 8 to 15 megapixels) to build a detailed image of the shoe. That image can later be processed with software such as SIFT + RANSAC to extract the tread pattern for comparison. The system can be expanded to use a laser scanning option, but for the current deployment, Stone and Vanderplas focused on optical capture, which allows tread capture during dynamic movement (i.e., a person walking or running across the system).
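The MANTIS pipeline itself is not public, but the core idea behind RANSAC-style matching can be sketched in a few lines. The example below (illustrative only; all names and numbers are hypothetical, and it estimates a simple 2D translation rather than the full transform a tool like SIFT + RANSAC would fit) shows how repeatedly fitting to minimal samples and counting inliers recovers the correct alignment even when some point correspondences are wrong:

```python
import random

def ransac_translation(src, dst, iters=200, tol=1.0, seed=0):
    """Estimate a 2D translation mapping src points onto dst points,
    tolerating outlier correspondences (the core RANSAC idea)."""
    rng = random.Random(seed)
    best_shift, best_inliers = None, -1
    for _ in range(iters):
        i = rng.randrange(len(src))            # minimal sample: one pair
        dx = dst[i][0] - src[i][0]
        dy = dst[i][1] - src[i][1]
        # Count correspondences consistent with this candidate shift.
        inliers = sum(
            1 for (sx, sy), (tx, ty) in zip(src, dst)
            if abs(sx + dx - tx) <= tol and abs(sy + dy - ty) <= tol
        )
        if inliers > best_inliers:
            best_shift, best_inliers = (dx, dy), inliers
    return best_shift, best_inliers

# Points shifted by (5, -3), with one bad correspondence mixed in.
src = [(0, 0), (1, 2), (3, 1), (4, 4)]
dst = [(5, -3), (6, -1), (8, -2), (40, 40)]   # last pair is an outlier
shift, inliers = ransac_translation(src, dst)
print(shift, inliers)  # (5, -3) 3
```

In a real tread-comparison workflow, the point pairs would come from feature descriptors matched between two impression images, and the fitted model would be a homography rather than a pure translation, but the sample-score-keep loop is the same.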
Shining a Light on Black Box Studies
April 22, 11 a.m.–noon CDT
Kori Khan
Assistant Professor, Department of Statistics, Iowa State University
Alicia Carriquiry
Director of CSAFE
Distinguished Professor and President’s Chair in Statistics, Iowa State University
Khan and Carriquiry will explore how error rates for pattern comparison disciplines are being estimated in black box studies. They will show how the design and analysis of black box studies may be producing misleading results. Drawing on insights gained from looking at actual black box studies, Khan and Carriquiry will propose a set of minimum standards that could help make more reliable estimates of error rates.
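One way a black box study's design can mislead (a simplified illustration with invented numbers, not a claim about any actual study) is selective nonresponse: if examiners disproportionately skip difficult items, those items never enter the denominator, and the reported error rate understates performance on the full item set:

```python
# Hypothetical numbers for illustration only.
# 100 easy items: all answered, 2 wrong.
# 100 hard items: only 20 answered, 8 of those wrong;
# the 80 skipped hard items never enter the denominator.
answered_errors = 2 + 8
answered_total = 100 + 20
naive_rate = answered_errors / answered_total          # 10/120 ≈ 0.083

# If the skipped hard items had the same 40% error rate
# as the answered hard items:
projected_errors = 2 + 8 + 0.40 * 80
projected_rate = projected_errors / 200                # 42/200 = 0.21
print(round(naive_rate, 3), round(projected_rate, 3))  # 0.083 0.21
```

The gap between the naive and projected rates is exactly the kind of design artifact that minimum reporting standards could surface.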
Extracting Case-specific Information from Validation Studies
May 10, 11 a.m.–noon CDT
Steve Lund
Mathematical Statistician, National Institute of Standards and Technology, Statistical Engineering Division
Hari Iyer
Mathematical Statistician, National Institute of Standards and Technology, Statistical Engineering Division
Forensic disciplines often summarize validation studies using average error rates. However, almost every forensic discipline has factors that affect the difficulty of a given case (e.g., quantity and quality of a questioned impression or sample), and average performance metrics fail to reflect the difficulty of the current case. This talk presents an approach to characterize the information a set of validation data provides about method performance in a given case.
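The gap between a pooled average and a case-relevant rate is easy to see numerically. The sketch below uses invented validation counts (not data from any real study) stratified by sample quality; conditioning on the stratum that matches the case at hand gives a very different answer than the pooled average:

```python
# Hypothetical validation data, stratified by sample quality
# (all counts are invented for illustration).
strata = {
    "high quality":   {"trials": 400, "errors": 4},
    "medium quality": {"trials": 300, "errors": 12},
    "low quality":    {"trials": 100, "errors": 15},
}

# Single pooled average, as often reported:
total_trials = sum(s["trials"] for s in strata.values())
total_errors = sum(s["errors"] for s in strata.values())
average_rate = total_errors / total_trials   # 31/800 ≈ 0.039

# Case-specific rate: condition on the stratum matching the case at hand,
# e.g., a low-quality questioned impression.
low = strata["low quality"]
low_quality_rate = low["errors"] / low["trials"]   # 15/100 = 0.15
```

Here the pooled average (about 4%) is dominated by the easy strata, while the rate for a difficult, low-quality case is nearly four times higher.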