January-February 2018

Responding to PCAST-based attacks on forensic science

Benjamin I. Kaminar

Assistant County and District Attorney in Lamar County

In September 2016, a relatively obscure federal commission issued a report calling into question nearly every forensic science discipline currently used by law enforcement. While this report by the President’s Council of Advisors on Science and Technology (PCAST) was immediately controversial within the forensic science community, it has taken much longer for both prosecutors and defense attorneys to begin utilizing it during expert testimony. However, a recent article in the American Bar Association’s Criminal Justice magazine indicates that PCAST Report-based attacks on forensic science are on the horizon.1 With an understanding of what PCAST is, what its report says, and the problems with the report, we prosecutors can be ready to respond to these attacks.

What is PCAST?
“PCAST is an advisory group of the nation’s leading scientists and engineers who directly advise the President and Executive Office of the President.”2 It is intended to make “policy recommendations in the many areas where understanding of science, technology, and innovation is key to strengthening our economy and forming policy that works for the American people.”3 PCAST’s published reports since 2014 have addressed such wide-ranging subjects as big data and privacy, systems engineering in healthcare, and ensuring long-term U.S. leadership in semiconductors. While PCAST’s membership consists of individuals who are distinguished in their fields, it is critical to note that virtually none of those fields are forensic disciplines. Its membership includes a systems engineer, a physician specializing in geriatric medicine, a string physicist, and the Executive Chairman of Alphabet, Google’s parent company.

The PCAST Report
The report itself focuses on six “forensic feature-comparison methods” that attempt to determine whether evidentiary samples can be associated with source samples based on the presence of similar patterns, characteristics, features, or impressions.4 The methods it examines are:
•    DNA analysis of single-source and simple mixture samples,
•    DNA analysis of complex mixture samples,
•    bitemark analysis,
•    latent fingerprint analysis,
•    firearm and toolmark analysis, and
•    footwear analysis.5
    The report primarily addresses the reliability of these disciplines for purposes of admissibility under Federal Rule of Evidence 702 (and, by implication, its state equivalents, including Texas’ Rule 702 and the Kelly test). Although the report claims to leave decisions about legal admissibility to the courts,6 it also attempts to establish its own threshold tests for admissibility based on error rates.7 The report creates its own concept, termed “foundational validity,” which “requires that [a method] be shown, based on empirical studies, to be repeatable, reproducible, and accurate.”8 The report then says that “foundational validity” corresponds to the legal requirement of “reliable principles and methods.”9 “Validity as applied” means “that the method has been reliably applied in practice”10 and corresponds to the legal requirement of proper application of those principles and methods in the particular case.11
    The report heavily emphasizes error rates in both foundational validity12 and validity as applied,13 relying on studies that were designed to determine the error rate of a method by evaluating the error rates of individual analysts. The design of those studies and their focus on individual analyst error rates are at odds with reality in the laboratory. For example, standard practice in virtually all accredited laboratories involves quality assurance mechanisms designed to detect errors by individual analysts; indeed, the operation and effectiveness of such mechanisms are key components of the accreditation process.14 However, the report relied upon studies whose designs excluded verification, which suggests that the error rate in actual practice is lower than the rates those studies calculated.15 Additionally, in citing a false positive rate for latent fingerprint analysis, the report relied on a study that itself contained a calculation error that PCAST failed to detect.16 Finally, by focusing on individual analysts, PCAST overlooks that such studies do not measure the error rate of the discipline or method; they measure the error rate of the particular analysts studied.17
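    To see why the exclusion of verification matters, consider a deliberately simplified sketch. The numbers and the independence assumption below are hypothetical illustrations, not figures from the report or from any study: a study that measures analysts working alone captures only the first link in a chain that, in an accredited laboratory, also includes independent verification.

```python
# Hypothetical illustration only: the rates below are invented, and the model
# assumes the verifier's error is independent of the first analyst's, which is
# an idealization of real laboratory practice.

def process_false_positive_rate(analyst_fpr: float, verifier_confirm_error: float) -> float:
    """Rate at which an erroneous identification survives blind verification."""
    return analyst_fpr * verifier_confirm_error

# Suppose a study reports a 3% false positive rate for analysts working alone,
# and an independent verifier wrongly confirms a bad identification 10% of the
# time. The error rate of the overall process is then far lower:
print(f"{process_false_positive_rate(0.03, 0.10):.2%}")  # 0.30%
```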

Responses from the forensic science community
Understandably, the report prompted a number of responses in rebuttal throughout the forensic science community and the federal government. Then-Attorney General Loretta Lynch released a statement advising that the U.S. Department of Justice would not adopt the report’s recommendations.18 The FBI published comments noting the report’s “subjectively derived” criteria and disregard of numerous published studies that would meet the report’s criteria for “foundational validity.”19 The American Society of Crime Lab Directors also released a response detailing the flaws in the report’s methodology.20 The response of the Bureau of Alcohol, Tobacco, Firearms, and Explosives (ATF) noted PCAST’s failure to address firearms and toolmark studies that had been submitted for consideration.21 The Association of Firearm and Tool Mark Examiners’ response pointed out that the report’s insistence upon a single report being the benchmark for foundational validity suggested a “fundamental lack of understanding” of the extent of research in the field.22

Use by the defense
Despite these numerous problems with the report’s methodology and findings, prosecutors should expect to see an increasing number of challenges to the State’s experts based upon the report. In the Summer 2017 issue of Criminal Justice, the chief defender for the Federal Public Defender’s Office in Puerto Rico laid out a four-step strategy for using the report to exclude or discredit the State’s forensic experts.23
•    Step One is to begin indoctrinating the judge through appeals to the judge’s emotions rather than reason.24 “By establishing an alternative emotion, we increase our chances that the judge’s demand will pay homage to the NRC25 and PCAST reports while precluding or limiting the introduction of the government’s damaging expert testimony.”26
•    Step Two is to exclude the expert testimony entirely by arguing that the PCAST Report is “novel evidence” that should call into question well-established forensic disciplines.27
•    Step Three, assuming failure to exclude the testimony, is to limit it, especially in terms of the expert’s certainty as to his conclusions.28
•    Finally, Step Four is to neutralize the expert testimony with a competing expert.29 Interestingly, the author does not recommend bringing in a defense expert in the same field, as that would lend legitimacy to the State’s use of the forensic discipline.30 Instead, he recommends bringing in an academic from a local university, even if that person knows “little about the particular field in question.”31

Responding to the defense
Once we know the expected attacks on forensic disciplines using the report, it becomes much easier to defeat them. At any 702 hearing, it is critical to highlight for the judge the significant flaws in the report’s methodology, the composition of its authoring body, and the fact that the report is the product of a policy-oriented (rather than science-oriented) body and process. As noted above, much of PCAST’s membership is from outside the forensic disciplines addressed. Undeterred by this lack of subject matter expertise, PCAST issued a number of “scientific findings” regarding the validity of various disciplines.32 The report’s “scientific findings” are especially questionable given that the report was not itself peer-reviewed prior to release; ironically, one of its criteria for any study to be acceptable in determining validity was that it be peer-reviewed. The report also cannot be considered a properly conducted scientific literature review,33 even though the report claims to have been one.34 A scientific literature review should include a summary, classification, and comparison of each article reviewed.35 PCAST purports to have reviewed over 2,000 papers in its report,36 but it fails to provide individual analyses of them.37
    With all of those flaws noted, argue to the judge that statements contained in the report are not admissible under the Rule 803(18) exception for learned treatises because the report is not accepted as a reliable authority. The responses to the report from the various forensic discipline working groups, as well as from the Department of Justice and other federal agencies, should demonstrate to the judge that the report is not a reliable authority. We should also attempt to obtain specific findings of fact from the court regarding the report’s flaws to support appropriate conclusions of law. Findings that directly address the report’s authorship, lack of peer review, and general rejection throughout the forensic science community will be relatively straightforward to support from the record and should lead to conclusions regarding the report’s unreliability. If the defense offers a copy of the report for the record, prosecutors must ensure that we offer copies of any reports, studies, affidavits, or statements supporting the State’s opposition. Because our counterattack is against the report as a whole, responses from disciplines outside the scope of the motion at issue are still of value (e.g., filing the ATF and AFTE responses when opposing a motion to exclude latent print analysis). For example, one opposition filed by the U.S. Attorney’s Office in the District of Columbia to a motion to exclude firearm and tool mark testimony included an appendix totaling over 1,100 pages. Establishing the report’s unreliability in the record early on will help shape appellate arguments regarding the defense’s challenge to forensic expert testimony. It will also help rebut attempts to use the report as “novel evidence” to attack forensic disciplines.
    Next, even if we preclude direct use of the report, we still have to prepare our expert witnesses for attacks based upon it. Whether preparing a DNA analyst, latent print examiner, or firearms and toolmark examiner, make sure that trial preparation includes reviewing the body of validation studies for the relevant field, especially those directly addressed in the report. For any study directly addressed in the report, our experts should be familiar with its flaws, such as the exclusion of verification processes or the use of incorrect statistical calculations, and with how PCAST used it. This is also the point where prosecutors can anticipate more discipline-specific attacks and tailor our responses accordingly.
    In some cases, we may want to keep our powder dry and let the report come in. If trying a case before a judge who will admit the report regardless of the State’s objections (or if the report is being used by a defense expert whom we can discredit on cross-examination), there may be tactical value in not tipping our hand before dissecting the report in front of the jury. Whether to attempt outright exclusion or to use the report as fodder for cross-examination will be a situation-specific call by the prosecutor at trial.

Firearms and toolmark examiners
With a firearms and toolmark examiner, we can expect a PCAST-based challenge to claim that there has been only a single validation study for the field, which is insufficient to establish either foundational validity or validity as applied. Such a challenge will likely further attack the discipline as being entirely subjective. Our response in this scenario would focus on consecutive-manufacture studies and the 10-barrel study.38 At its heart, firearms and toolmark identification relies upon the fact that even items manufactured to the same specifications will have minor variations due to the gradual, microscopic wear of the tools that manufacture them. In the case of firearms, this means that otherwise identical barrels will have slight variations in their rifling due to the wear on the tools that made the barrels. These slight variations in turn produce slight but discernible differences in the marks left on expended cartridge cases or bullets. An examiner can therefore determine whether a bullet fired from an unknown weapon may be included or excluded as a match for a bullet fired from a known weapon.
    As the variations in rifling are the result of wear on the manufacturing tools over time, barrels rifled consecutively by the same tool would logically show the least variation. Consecutive-manufacture studies evaluate whether examiners can associate a questioned bullet with the correct barrel within a set of consecutively rifled barrels. The 10-barrel study was a long-term, consecutive-manufacture study involving more than 500 participants from 20 countries, each evaluated on whether they could associate a questioned bullet with one of 10 consecutively rifled barrels. Of 7,605 questioned bullets, 7,597 were correctly associated with no false positives; three bullets were reported as too damaged to examine, and examiners were unable to reach a determination on the remaining five.39 Reviewing specific consecutive-manufacture studies and the 10-barrel study with an examiner before a 702 hearing, in conjunction with dissecting the PCAST Report’s methodological flaws, should ensure the admissibility of the examiner’s testimony.
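    As a rough illustration of how results like these translate into the error-rate language PCAST emphasizes, the sketch below uses the figures cited above together with the standard “rule of three” approximation for a 95-percent upper bound when zero errors are observed; the upper-bound figure is an illustrative calculation, not a number reported in the study or in the report.

```python
# Figures from the 10-barrel study as cited above; the "rule of three" upper
# bound is a standard approximation for zero observed errors, offered here as
# an illustration rather than a figure reported by the study or by PCAST.
questioned       = 7_605   # questioned bullets examined
correct          = 7_597   # correctly associated, with no false positives
too_damaged      = 3       # too damaged to examine
no_determination = 5       # examiners could not reach a determination
false_positives  = 0

conclusive = questioned - too_damaged - no_determination   # 7,597 conclusive calls
observed_fpr = false_positives / conclusive                # 0.0
upper_95 = 3 / conclusive                                  # rule of three

print(f"Observed false positive rate: {observed_fpr:.4%}")   # 0.0000%
print(f"Approximate 95% upper bound:  {upper_95:.4%}")       # about 0.04%
```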

Latent prints
Unlike firearms and toolmark analysis, PCAST found that latent print analysis had foundational validity. Given that, we can expect PCAST-based challenges to focus on validity as applied, with particular emphasis on error rates. The report cited studies showing latent print analysis error rates as high as 4.2 percent under the ACE (analysis, comparison, evaluation) method.40 This line of attack is vulnerable in two areas. First, although the cited studies focused on examiners using the ACE method, common practice is to use the ACE-V method, which adds a verification step performed by a second examiner.41 The Miami-Dade study, which showed the highest error rate among examiners, included a verification step for a small sample of comparisons; of the 15 false positives that were submitted to verification, 13 were excluded as matches and two were deemed inconclusive.42
    Second, as briefly mentioned above, the Miami-Dade study contained a statistical calculation error that PCAST also failed to detect. The OSAC Friction Ridge Subcommittee response noted that the proper calculation is the number of false positives divided by the number of trials in which a false positive response could occur.43 The Miami-Dade study compared each questioned print against multiple reference prints, presenting multiple opportunities for a false positive; the authors instead treated each set of reference prints as a single opportunity for a false positive.44 This had the effect of overstating the false positive rate; once corrected, the error rate should be 1.1 percent.45
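    The arithmetic at issue can be illustrated with hypothetical counts. The numbers below are invented to show how the choice of denominator changes the rate; they are not the Miami-Dade study’s actual figures.

```python
# Invented counts chosen only to illustrate the denominator issue; they are not
# the actual Miami-Dade figures.
false_positives   = 20
questioned_prints = 500   # each questioned print was compared to several exemplars
exemplars_per_set = 4     # so each questioned print offered several chances
                          # for a false positive response

# Denominator treating each set of reference prints as a single opportunity:
overstated_rate = false_positives / questioned_prints                  # 4.0%

# Denominator the OSAC response describes: every trial in which a false
# positive response could occur.
trials = questioned_prints * exemplars_per_set
corrected_rate = false_positives / trials                              # 1.0%

print(f"overstated: {overstated_rate:.1%}   corrected: {corrected_rate:.1%}")
```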
    Pre-trial preparation with our latent print examiners should anticipate these attacks. Knowing that error rates will be the defense focus, prosecutors can prepare our examiners to discuss the difference between ACE and ACE-V methods and then be able to explain verification not only as a laboratory practice, but also as a requirement for accreditation. The key point to be made is that latent print examination in practice is subject to more stringent controls and requirements than when it is tested in an academic study. Because the Miami-Dade study showed the highest error rate, we should also prepare our examiner to discuss the statistical flaws in the study and the results of Miami-Dade’s small verification sample.

DNA
The PCAST Report had mixed “findings” regarding DNA analysis. For single source and simple mixtures, it found both “foundational validity” and “validity as applied,” provided that analysts were properly trained and subjected to proficiency testing. PCAST-based attacks on single source and simple mixture analyses will therefore likely focus on the analyst’s training and methodology and should not differ significantly from pre-PCAST attacks. As even the report found these DNA analyses to have “foundational validity,” attacks on training and methodology are classic “weight, not admissibility” concerns.
    On the other hand, the report took significant issue with the interpretation of complex DNA mixtures (mixtures with more than two contributors). Although the report noted that the laboratory processing of complex mixtures is the same as for single-source and simple mixtures, it found that complex mixture interpretation was unreliable due to the lack of standards or guidelines governing how such mixtures are interpreted. As a result, the report held that the Combined Probability of Inclusion (CPI) statistic used for complex mixtures lacked validity. The report also addressed the use of probabilistic genotyping software, which it called “promising,” but it claimed that such software still lacked sufficient testing to be considered “foundationally valid.”46
    In responding to this criticism from PCAST, Dr. Bruce Budowle of the University of North Texas’s Center for Human Identification notes that the report conflates two issues regarding complex mixtures and CPI. According to Dr. Budowle, PCAST begins by properly addressing the lack of detailed guidelines relating to interpretation of mixtures. However, PCAST then holds that because there are insufficient guidelines concerning interpretation of mixtures, the CPI statistic used to calculate the likelihood of an individual being a contributor to the interpreted mixture is invalid. Budowle observes that this is an error because the mathematical principles from which that likelihood is derived are the same ones used in single-source random match probability (RMP), which the report had determined to be valid only pages earlier.47 Regarding PCAST’s rejection of probabilistic genotyping as insufficiently studied to be valid, Budowle writes that PCAST failed to contact any of the laboratories that had conducted internal validation studies before implementing probabilistic genotyping software to determine whether their research was consistent with the published articles available.48 In fact, “There is no indication that the PCAST Committee made any effort to become informed to opine on the reliability and validity of probabilistic genotyping.”49
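    Budowle’s point about shared mathematical principles can be sketched with a toy calculation. The allele frequencies below are hypothetical, and the sketch omits the subpopulation (theta) corrections, stochastic thresholds, and laboratory-specific interpretation rules that real casework requires; it shows only that both statistics start from the same population allele frequencies and combine loci with the same product rule.

```python
# Toy illustration with invented allele frequencies; real casework applies
# subpopulation (theta) corrections and interpretation guidelines omitted here.

# Hypothetical allele frequencies at two loci.
FREQS = {
    "D3S1358": {"14": 0.10, "15": 0.25, "16": 0.20},
    "vWA":     {"17": 0.30, "18": 0.20, "19": 0.10},
}

def rmp(profile):
    """Random match probability for a single-source profile: the product over
    loci of the genotype frequency (2pq for heterozygotes, p^2 for homozygotes)."""
    result = 1.0
    for locus, (a1, a2) in profile.items():
        f1, f2 = FREQS[locus][a1], FREQS[locus][a2]
        result *= f1 * f1 if a1 == a2 else 2 * f1 * f2
    return result

def cpi(mixture_alleles):
    """Combined probability of inclusion for a mixture: the product over loci
    of the squared sum of the frequencies of the alleles observed at that locus."""
    result = 1.0
    for locus, alleles in mixture_alleles.items():
        result *= sum(FREQS[locus][a] for a in alleles) ** 2
    return result

# Same allele-frequency tables, same product rule across loci.
print(rmp({"D3S1358": ("15", "16"), "vWA": ("17", "17")}))        # 0.009
print(cpi({"D3S1358": ["14", "15", "16"], "vWA": ["17", "18"]}))  # about 0.0756
```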
    When faced with a PCAST-based attack upon DNA mixtures, our response will depend upon the statistical method used during analysis. If a CPI analysis was done, the attack will likely be upon the lack of uniform guidelines for interpretation. A State’s analyst should be familiar with her laboratory’s guidelines for setting stochastic thresholds under various conditions and be able to explain not only what those thresholds are but also why they are set at that level. To head off attacks on the CPI’s calculations, the analyst should also be prepared to discuss the mathematical principles underpinning that statistic and how they are identical to the mathematical principles behind the RMP. If a probabilistic genotyping analysis was done, we should expect the attack to be focused on whether probabilistic genotyping has been properly validated. DPS and some other forensic labs in Texas are in the process of moving—or have already moved—to probabilistic genotyping using STRMix. While there may be older cases involving CPI, this move means that going forward, most of our cases will involve attacks upon probabilistic genotyping.
    A recent Michigan case provides us with a blueprint for addressing that attack. In State of Michigan v. Alford,50 the trial court was presented with a Daubert/702 challenge to both probabilistic genotyping as a whole and to the analyst’s qualifications as an expert. Prosecutors responded with testimony from one of the three creators of STRMix, the probabilistic genotyping software used in that case, who explained the principles upon which the software operated and how it analyzed DNA.51 The prosecutors then presented testimony from the individual responsible for quality assurance in the Michigan State Police Forensic Science Division.52 He testified as to the validation processes used before the software was adopted, which consisted of developmental validation by the software developer and the internal validation conducted by each laboratory.53 One of the methods used in internal validation was the use of mock samples, which are mixtures derived from DNA already contained in the laboratory. This creates a “ground truth” of known components, which analysts could then utilize to verify that the software analyzed and reported the mixture accurately. Finally, he testified as to the competency testing given to each individual analyst and the internal peer review process for analytical results.54 As a result, the Michigan court issued 22 pages of findings of fact and conclusions of law and ultimately held that under Michigan Rule of Evidence 702, the analysis was based upon sufficient facts or data and reliable principles and methods, and that those principles and methods were reliably applied. Under Daubert, it found that the program had received adequate validity testing across the United States, had been peer reviewed in approximately 17 published articles, was generally accepted, had a well-studied error rate, and had sufficient internal validation for its processes. Armed with those findings as a blueprint, prosecutors everywhere should be able to prepare a detailed response to any challenges to probabilistic genotyping.
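    The “ground truth” idea described above can be reduced to a simple check. The sketch below is hypothetical: the mixture, donors, likelihood ratios, and threshold are invented for illustration and do not come from the Michigan case, the Michigan State Police protocol, or any real software run; it shows only the logic of asking whether known contributors receive support for inclusion and known non-contributors do not.

```python
# Hypothetical sketch of a ground-truth check during internal validation; every
# value below is invented for illustration.

# A mock mixture built in the lab from known profiles, so the true contributors
# are known in advance, together with the likelihood ratios the software
# reported for each candidate (LR > 1 supports inclusion, LR < 1 supports exclusion).
mock_run = {
    "mixture": "MIX-01",
    "true_contributors": {"Donor A": 2.4e9, "Donor B": 5.1e6},
    "known_non_contributors": {"Donor C": 3.0e-4},
}

def passes_ground_truth(run, threshold=1.0):
    """The software passes this check if every known contributor receives
    support for inclusion and every known non-contributor does not."""
    return (all(lr > threshold for lr in run["true_contributors"].values())
            and all(lr < threshold for lr in run["known_non_contributors"].values()))

print(passes_ground_truth(mock_run))  # True for these invented numbers
```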
    Once we’ve excluded the PCAST Report from being admitted and ensured the admissibility of our expert’s testimony, we still have to address defense experts who may parrot the report’s “scientific findings.” We may be able to use a 702 hearing against them to exclude them entirely, especially if the defense followed the Step Four recommendation in Criminal Justice to find any academic to testify. The line of inquiry to take with a defense expert will depend upon his background and qualifications. If he has a background in the forensic discipline in question, prosecutors will want to focus on the flaws in the report’s use of the validity studies it cites, as well as the responses from the scientific working groups. If he is from outside the discipline, it becomes a much more straightforward question of whether he is even qualified to testify on the field in question. However, there may be tactical value in allowing the defense expert to take the stand, then dismantling his testimony in front of the jury.

Research and advances
While a number of the PCAST Report’s “scientific findings” are methodologically flawed, we should not discount recommendations or efforts to improve forensic disciplines. Currently, the field of latent print analysis is undergoing efforts to move from subjective matching to objective probability reports. For example, the Defense Forensic Science Center has developed, validated, and implemented a software application called FRStat, “which facilitates the evaluation and reporting of the statistical strength of friction ridge skin comparisons.”55 This software expresses results as “an estimate of the relative probability of a given amount of correspondence when impressions are made by the same source rather than different sources.”56 This moves latent print analysis in the direction of DNA analysis as an objective comparison method. Similar efforts are underway for the field of firearms and tool mark analysis.57 Finally, as discussed earlier, DNA analysis of complex mixtures is moving toward adoption of probabilistic genotyping, which reduces the subjectivity in interpretation of profiles.

Conclusion and resources
Although a number of problems with the PCAST Report have been outlined here, this overview provides only a starting point for addressing the defense’s use of the report. For more in-depth discussion of the report as it pertains to specific disciplines, various scientific working groups have published responses highlighting its problems, and several of them are cited in the endnotes below. Those responses also point us to relevant studies in addition to the ones presented here. The National Attorneys General Training and Research Institute conducts a forensic science symposium that features some of the leading experts in forensic disciplines and prosecutors specializing in forensic science cases—the 2017 symposium also served as the inspiration for this article.
    As the PCAST Report becomes more widely disseminated and defense attorneys have more opportunities to share report-based attacks on forensic science, prosecutors must be ready to respond. By highlighting the report’s scientific flaws and lack of reliability, we will be better able to protect forensic disciplines and our expert witnesses from specious attacks while also highlighting the rigor and integrity of forensic disciplines.

Endnotes

1  Vos, Eric Alexander, Using the PCAST Report to Exclude, Limit, or Minimize Experts, Criminal Justice, Summer 2017, at 15.

2  https://obamawhitehouse.archives.gov/administration/eop/ostp/pcast/about. (The current Administration has not yet issued an executive order re-establishing PCAST or naming members.)

3  https://obamawhitehouse.archives.gov/administration/eop/ostp/pcast/about.

4  PCAST at p. 23.

5  PCAST at p. 7.

6  PCAST at pp. 4, 43.

7  PCAST at pp. 53, 56.

8  PCAST at p. 4.

9  Corresponding to two of the three prongs under Kelly v. State, 824 S.W.2d 568 at 573 (Tex.Crim.App. 1992).

10  PCAST at p. 5.

11  Corresponding to the third Kelly prong.

12  “An empirical measurement of error rates is not simply a desirable feature; it is essential for determining whether a method is foundationally valid.” PCAST at p. 53.

13  “From a scientific standpoint, the ability to apply a method reliably can be demonstrated only through empirical testing that measures how often the expert reaches the correct answer.” PCAST at p. 56.

14  See ANAB website http://www.anab.org/forensics-accreditation/iso-iec-17025-forensic-labs.

15  See Organization of Scientific Area Committees (OSAC) Firearms and Toolmarks Subcommittee (2016) response at p. 5 (https://www.theiai.org/president/20161214_FATM_Response_to_PCAST.pdf) and OSAC Friction Ridge Subcommittee response at p. 2 (https://www.theiai.org/president/20161214_PSAC-FR_PCAST_response.pdf).

16  See OSAC Friction Ridge Subcommittee response at p. 3.

17  Open letter from Bruce Budowle, Director, University of North Texas Center for Human Identification (June 17, 2017) (on file with the author).

18  www.wsj.com/articles/white-house-advisory-council-releases-report-critical-of-forensics-used-in-criminal-trials-1474394743.

19  www.fbi.gov/file-repository/fbi-pcast-response.pdf.

20  http://pceinc.org/wp-content/uploads/2016/10/20160930-Statement-on-PCAST-Report-ASCLD.pdf.

21  https://www.theiai.org/president/20160921_ATF_PCAST_Response.pdf.

22  https://afte.org/uploads/documents/AFTE-PCAST-Response.pdf.

23  Vos at p. 15.

24  Id.

25  National Research Council, Strengthening Forensic Science in the United States: A Path Forward, The National Academies Press, Washington, DC (2009).

26  Vos at p. 16.

27  Vos at pp. 16-17.

28  Vos at p. 17.

29  Vos at pp. 18-19.

30  Vos at p. 19.

31  Id.

32  President’s Council of Advisors on Science and Technology (2016), Forensic Science in Criminal Courts: Ensuring Scientific Validity of Feature-Comparison Methods, at p. 65 et seq. (PCAST).

33  Melson, Kenneth (July 2017), Attacks on Forensic Science: NRC-I, NRC-II, NAS and the PCAST Report, presentation at the 2017 Forensic Science Symposium of the National Attorneys General Training and Research Institute.

34  PCAST at p. x.

35  Melson, 2017.

36  PCAST at p. 2.

37  Melson, 2017.

38  James E. Hamby, David J. Brundage, and James W. Thorpe, The Identification of Bullets Fired from 10 Consecutively Rifled 9mm Ruger Pistol Barrels: A Research Project Involving 507 Participants from 20 Countries, AFTE Journal 99 (2009).

39  Id. at p. 107.

40  Igor Pacheco, Brian Cerchiai, and Stephanie Stoiloff, Miami-Dade Research Study for the Reliability of the ACE-V Process: Accuracy and Precision in Latent Fingerprint Examinations (2014) (unpublished report on file with the United States Department of Justice) at p. 53.

41  OSAC Friction Ridge Subcommittee response at p. 2.

42  Pacheco, et al. at p. 55.

43  OSAC Friction Ridge Subcommittee response at p. 3.

44  Id.

45  Id. at pp. 3-4.

46  PCAST at p. 81.

47  Budowle letter at p. 10.

48  Id. at p. 11.

49  Id.

50  Michigan v. Alford, No. 15-696-FC (30th Circuit Court) (2016).

51  Opinion and Order, Michigan v. Alford, No. 15-696-FC (30th Circuit Court) (2016) at pp. 3-7.

52  Id. at pp. 7-9.

53  Id. at p. 8.

54  Id. at p. 9.

55  Defense Forensic Science Center information paper (March 9, 2017) (on file with the author).

56  Id.

57  https://www.nist.gov/programs-projects/statistics-ballistics-identification.