(Above image: In 1975, Congress enacted Federal Rule of Evidence 702, which was intended to simplify and liberalize the admission of expert testimony. Source: http://www.interfire.org/features/fsi_daubert_challenge.asp)
Controversy Regarding the Reliability of Toolmark Analysis
On one hand, tool mark analysis is held to be entirely reliable. This is the position of The Science of Firearm & Tool Mark Identification, which rests on two fundamental propositions. The first proposition states: “Tool marks imparted to objects by different tools will rarely if ever display agreement sufficient to lead a qualified examiner to conclude the objects were marked by the same tool. That is, a qualified examiner will rarely if ever commit a false positive error (misidentification).” In other words, the proposition holds that an examination should virtually never produce a misidentification: it is considered too unlikely that different tools will leave marks similar enough to confuse a qualified professional.
On the other hand, it has been argued that tool mark examination can and does meet the criteria set forth by Daubert (the American standard for the admissibility of expert testimony). Many attorneys have sought to have examiners’ testimony excluded from cases, claiming that the examinations are not solidly rooted in science, or that the examiners’ conclusions are subjective and cannot be trusted, a claim that directly contradicts the first proposition. Proponents counter that a scientific foundation and objectivity underlie any experienced toolmark examiner’s comparisons. In recent years, to reinforce these ideas and resolve the issue, several groups have sought to make toolmark comparisons objective through the use of comparative statistical algorithms.
Even so, a recent article in The Columbia Science and Technology Law Review, entitled “A Systemic Challenge to the Reliability and Admissibility of Firearms and Toolmark Identification,” takes the opposite view. The author, Dr. Adina Schwartz, is an Associate Professor at the John Jay College of Criminal Justice and the Graduate Center, City University of New York. Dr. Schwartz argues that “all firearms and tool mark identifications should be excluded until the development of firm statistical empirical foundations for identification and a rigorous regime of blind proficiency testing.” She discusses the scientific issues related to firearms and tool mark identification, including the types of tool marks (class, subclass, individual) and three major sources of misidentification: individual characteristics can be comprised of non-unique marks, subclass characteristics may be confused with individual characteristics, and the individual marks of a particular tool change over time. She further argues that these fundamental problems are not cured by the development of a “computerized firearms database.”
In 2009, the National Academies published the report "Strengthening Forensic Science in the United States: A Path Forward," which called into question the objectivity of conclusions based on examiners' visual toolmark identification. A major concern was the lack of precisely defined and scientifically justified protocols. Researchers Alan Zheng and Johannes Soons of the Summer Undergraduate Research Fellowship have responded to this criticism by seeking to strengthen the scientific basis of the toolmark identification process through mathematically objective similarity metrics applied to direct measurements of surface topography. The work builds on research by the Surface and Nanostructure Metrology Group and the Office of Law Enforcement Standards on forensic firearm identification using toolmarks found on bullets and cartridge cases. Identifying a particular tool from a pool of consecutively manufactured tools is an especially challenging scenario, as such tools are more likely to share similarities in their geometry. Instead of relying on comparisons completed by examiners, Zheng designed an experiment in which tool motion, speed, orientation, and contact force were controlled, and 2D stylus and 3D disc-scanning confocal microscope technology was used to measure the topography of impressed punch marks. Soons' algorithms then registered the topography data and calculated a similarity measure, removing human intervention and bias from the comparison. Despite the importance of toolmark analysis in the forensic sciences, the imaging and comparison of toolmarks otherwise remains a manual and time-consuming endeavor. Zheng and Soons' early results were promising, and they hoped their approach would be used in future tool mark identification as a more exact form of pattern matching.
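The text does not specify Soons' actual algorithms, but the general idea of registering two surface measurements and computing an objective similarity score can be sketched as follows. This is a minimal, hypothetical illustration using normalized cross-correlation over simplified 1-D topography profiles (real systems register full 3-D topography maps and use more sophisticated metrics); the function names and the small shift-search are assumptions for the example only.

```python
# Hypothetical sketch: score the similarity of two 1-D surface-topography
# profiles (lists of height values) with a normalized cross-correlation.
# Registration is approximated by sliding one profile over the other and
# keeping the best-aligned score.

def normalized_ccf(a, b):
    """Pearson-style correlation between two equal-length profiles.

    Returns a value in [-1, 1]; 1.0 means the profiles are identical
    up to an affine (offset/scale) change in height.
    """
    n = len(a)
    mean_a = sum(a) / n
    mean_b = sum(b) / n
    da = [x - mean_a for x in a]
    db = [y - mean_b for y in b]
    num = sum(x * y for x, y in zip(da, db))
    den = (sum(x * x for x in da) * sum(y * y for y in db)) ** 0.5
    return num / den if den else 0.0

def best_correlation(profile_a, profile_b, max_shift=5):
    """Register profile_b against profile_a by trying lateral shifts of
    up to +/- max_shift samples and return the highest correlation found."""
    best = -1.0
    for shift in range(-max_shift, max_shift + 1):
        if shift >= 0:
            a, b = profile_a[shift:], profile_b[:len(profile_b) - shift]
        else:
            a, b = profile_a[:shift], profile_b[-shift:]
        m = min(len(a), len(b))
        if m > 1:  # need at least two points for a correlation
            best = max(best, normalized_ccf(a[:m], b[:m]))
    return best
```

Two marks left by the same tool should produce a score near 1.0 even when the measurements are laterally displaced, while marks from different tools should score substantially lower; in a real system the score would then be compared against an empirically derived threshold.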
Even more recently, Nicholas Petraco, Peter Diaczuk, Thomas Kubic, Dale Purcell, Brooke Weinger, and Peter Shenkin were awarded $700,000 by the National Institute of Justice to carry out research on the application of machine learning and statistical pattern recognition to toolmarks. Their research aims to address many of the issues raised in the recent National Academy of Sciences report on the foundations of the forensic sciences.
Opinion
In our opinion, tool mark analysis cannot be considered reliable because it is not presently an exact science. Class characteristics (design factors), subclass characteristics (features produced during manufacture), and individual characteristics (such as the manner in which the user holds the tool) are all factors that make tool mark analysis subjective. In such situations, the value, presence, and relevance of each trait are determined by the examiner and can vary from professional to professional. Under the Frye Standard, evidence must be generally accepted by the scientific community, and, as outlined by Dr. Schwartz, there are at least three major sources of misidentification that diminish the reliability of tool mark analysis. A tool mark as evidence outside the courtroom may be material, but in the courtroom it has more of a probative value, and this is how we think it should be used. We believe that tool mark analysis should still be admissible in court, but without material weight, since it cannot always be shown to be significant to the crime. However, if research like Zheng and Soons' is adopted in the future, tool marks could become a more reliable form of evidence. Lastly, the firearms and tool mark identification community has recently opened a database system used to image fired cartridge components. As of 2007, there had been over 23,000 “cold hits” resulting from the evidence entered into the database. These “hits” link cases that were otherwise not known to be related and help to solve crimes (see chart below). With this additional capability, we also think that tool mark analysis should be used to help solve cold cases.