Currently in the field of forensic science there is much international debate regarding the role of forensic evidence data, and the use of corresponding databases, to correctly identify a suspect in a crime. CSAFE recently convened a group of expert researchers at a Carnegie Mellon University workshop to discuss the impact of the limitations of forensic databases.
CSAFE Co-Director and Carnegie Mellon John C. Warner Professor of Statistics Dr. Bill Eddy emphasized that in the forensic field there are many databases created to assist law enforcement in linking a suspect to a crime. These databases can include fingerprints, bullet casings, automobile paint, and more. However, many current forensic databases were developed in response to law enforcement needs, usually without careful consideration of the statistical and scientific principles associated with their use.
“I think many people think that a database is a pile of data, but it is actually organized in some way and typically is supported by a database management system so you can make inquiries,” Eddy said. “However, when you use a database, important statistical issues arise. There is, for example, the size of the database; how well the data represent the population of interest; and what other information the examiner has besides the sample evidence.”
These factors can lead to decreased accuracy when matching crime scene evidence to a suspect. They can also cause disagreements among forensic science professionals about how certain an examiner can be that the sample evidence matches a database element.
One goal of CSAFE is to build a community of skilled researchers and forensic science partners to address the lack of common understanding and agreement on how to use forensic databases accurately. Our team is committed to continuously engaging in conversation with the forensic science, judicial, and lay communities to identify areas where the field can strengthen the scientific validity of its methods.