The Reliable Application of Fingerprint Evidence

Introduction

In November 2017, a state appellate court did something almost unprecedented: It held that a trial judge erred by admitting latent fingerprint testimony.1  This ruling did not make the news, and it has not been noted in legal publications.  It should be.

Few courts have carefully examined the reliability of latent fingerprint testimony.  Fingerprint testimony has been admitted in federal and state courts for decades, largely unquestioned.2  When judges have questioned such evidence, as federal judge Louis Pollak did in United States v. Llera Plaza, the government’s response has prompted backpedaling.  In the Llera Plaza case, Judge Pollak vacated his earlier ruling and found the fingerprint evidence admissible.3  When a state judge questioned fingerprint evidence,4 federal prosecutors removed the case and federally charged the defendant, so as to vacate the ruling.5

In the McPhaul case, the appellate panel found error based on the unreliable application of latent fingerprinting.  The panel did not reverse the defendant’s conviction, however, finding the error to be harmless.  The ruling has broader implications for as-applied challenges to the forensic testimony commonly used in criminal cases, in which judges have often not carefully examined reliability, either of a forensic method in general or of its application in a given case.  Many forensic techniques rely on the subjective judgment of an expert, who may not be able to fully explain how they concluded that fingerprint, ballistics, or other pattern evidence was a “match,” except to cite their own experience.6  This sleeper ruling should awaken interest in the reliable application of forensic methods in individual cases.

After Daubert v. Merrell Dow Pharmaceuticals, Inc. was decided by the U.S. Supreme Court, adopting new judicial gatekeeping standards for expert evidence,7 many asked when and whether forensic techniques, largely based on the experience and training of experts, would be more rigorously examined by judges.  The revision of Federal Rule of Evidence 702 in 2000 to reflect Daubert, and to add the further requirements that evidence be based on “reliable principles and methods” that are “reliably applied” to the facts of a case, was intended to make the gatekeeping task of a judge more rigorous.8  When only marginal reconsideration of traditional forensics followed, despite the advent of modern DNA testing, which put the lack of quantified information in those earlier techniques into perspective, many suspected that in criminal cases, judges were not carefully applying Daubert.9  These gatekeeping rules seemed “irrelevant” to actual practice in court.10  That said, renewed attention to the limitations of forensics post-Daubert did result in additional research funding, scholarly attention to error rates, and scientific scrutiny.11

The irrelevance of Daubert seemed particularly acute in the area of latent fingerprint comparison.  In the years since the Llera Plaza ruling, a series of other federal courts have found fingerprint evidence admissible.12  Some of those courts admitted, as the Third Circuit did in United States v. Mitchell, that there are not adequate studies on the reliability of fingerprint analysis.  The courts did engage with the limitations of the discipline of fingerprint analysis more so than they had in the past, but in the end concluded that the evidence should be admitted because there is an “implicit history of testing,” where experts do not themselves describe making errors—except in rare cases—and any error rate must be “very low.”13  In United States v. Baines, one of the few additional post-Daubert federal appellate opinions to discuss latent fingerprint evidence in any detail, the court similarly emphasized that the evidence of reliability comes not from any empirical studies, but from the use of the technique for “almost a century.”14

As Simon Cole has observed, judges have grandfathered in latent fingerprint evidence based on its longstanding use, and not based on any evidence that it is in fact reliable.15  State judges have frequently done the same.  For example, a recent ruling by an Arizona appellate court emphasized, “[O]ur supreme court has sustained convictions based solely on expert testimony about fingerprint or palm print evidence because the evidence is sufficiently reliable.”16  What makes those rulings all the more surprising, though, is not just that they do not take seriously the requirements of Daubert and Rule 702, instead emphasizing traditional acceptance and the flexibility of their gatekeeping obligation.  It is that they also specifically fail to account for far more recent scientific research regarding the limitations and appropriate use of forensics generally, and latent fingerprint evidence specifically.17

In Part I, I summarize what has changed in the scientific research and understanding of latent fingerprint evidence.  In Part II, I explore the litigation in the McPhaul case and the reasoning adopted by the appellate court.  In Part III, I discuss the implications of this ruling for judicial gatekeeping, and for forensic expert evidence more broadly.

I. The Problem: Reliability and Latent Fingerprinting

The body of research concerning the reliability of fingerprint evidence has advanced considerably over the past decade and a half.  As with any technique that relies on human experience and judgment, there is an error rate.  The fact that error rates exist in latent fingerprinting is nothing new.  Proficiency studies in fingerprinting have been conducted since the late 1970s.18  In particular, commercial proficiency tests in the mid-1990s attracted widespread attention because of the large number of participants who made errors on the tests.19  Those tests were not designed to assess error rates in general, but they certainly made salient that errors do occur, at a time when latent fingerprint examiners claimed that the technique was infallible and that, when properly conducted, it had an error rate of “zero.”20  Nothing made error rates in fingerprinting more publicly salient than the error in the Brandon Mayfield case, in which a lawyer in Portland, Oregon, was falsely accused of playing a role in the Madrid terrorist bombing based on erroneous fingerprint matches by multiple FBI analysts.  An FBI expert had called it a “100 percent” certain match, but it was wrong, as Spanish authorities discovered.21  In response to the error, the Department of Justice made a series of recommendations for improved handling of latent fingerprint analysis.22

The National Academy of Sciences (NAS) issued landmark findings on forensic disciplines in a 2009 Committee report.23  Those findings included statements that, while fingerprint comparisons have served as a valuable tool in the past, the ACE-V method used in the field—for Analysis, Comparison, Evaluation, and Verification—is “not specific enough to qualify as a validated method for this type of analysis.”24  The report found that merely following the steps of that “broadly stated framework” “does not imply that one is proceeding in a scientific manner or producing reliable results.”25  It highlighted that “sufficient documentation is needed to reconstruct the analysis” that examiners engage in.26  In addition, it asserted that error rates exist, and that none of the variables that fingerprint examiners rely upon have been “characterized, quantified, or compared.”27  Absent any statistical data, fingerprint examiners are relying on “common sense” or “intuitive knowledge,” but not validated information or research.28

The President’s Council of Advisors on Science and Technology (PCAST) concluded in its 2016 report that while fingerprint analysis is “foundationally valid,” it should never be presented in court without evidence of its error rates and of the proficiency or reliability of not just the method, but the particular examiner using the method.29  The PCAST report noted that error rate studies had now been conducted on latent fingerprint analysis.  In particular, two black box studies (studies that independently test experts for errors using realistic materials) that were fairly methodologically sound had been conducted, and both found nontrivial error rates: The false-positive error rate “could be as high as 1 error in 306 cases,” based on an FBI study, or as high as “1 error in 18 cases,” based on a study by the Miami-Dade police laboratory.30
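To make concrete what an upper-bound figure of this kind conveys, the following is a minimal worked sketch.  The counts and the notation (k false-positive identifications observed in n comparisons of prints known to come from different sources) are hypothetical and are not drawn from either study; figures such as “1 error in 306 cases” are generally reported as the upper end of a one-sided 95% confidence interval on the false-positive rate rather than as the raw error count:

\[
\hat{p} = \frac{k}{n},
\qquad
p_{U} = \sup\left\{\, p \in [0,1] : \Pr\!\left[\operatorname{Bin}(n,p) \le k\right] \ge 0.05 \,\right\}.
\]

With hypothetical counts of k = 1 false positive in n = 1000 different-source comparisons, the observed rate is 0.1 percent, but the 95% upper bound p_U is roughly 0.47 percent, or about 1 error in 210 cases.  This illustrates why an upper-bound figure can be considerably less favorable than the raw error count alone would suggest.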

The PCAST report also placed the focus more squarely on the individual expert: If the technique is a black box, and relies on the experience and training of a particular person, then how reliable is that person?  Courts had typically not focused on that question.  Rulings that rest on Rule 702(d), which focuses on the application of principles and methods to a case, have been exceedingly rare.31  More generally, it is not common for judges to consider evidence of the proficiency of experts, as Gregory Mitchell and I detail in a forthcoming Article.32

The American Association for the Advancement of Science (AAAS) 2017 report added that fingerprint examiners should avoid statements that contribute to the “misconceptions” shared by members of the public due to “decades of overstatement by latent print examiners.”33  Specifically, the report asserted that terms like “match,” “identification,” “individualization,” and other synonyms should not be used by examiners, nor should examiners draw any conclusions that “claim or imply” that only a “single person” could be the source of a print.34  Instead, latent fingerprint examiners should at most state that they observe similarity between a latent print and a known print, and that a donor cannot be excluded as the source.35

II. North Carolina v. McPhaul

In November 2017, the state appellate court in North Carolina v. McPhaul identified a reliability problem with latent fingerprinting that is applicable to a wide range of forensic disciplines.36  The prosecution had introduced expert testimony on latent fingerprint comparison at trial, and the expert testified that prints found at the crime scene matched the defendant’s known prints.  That much was nothing out of the ordinary.

The defendant, Juan McPhaul, had been indicted for attempted first-degree murder, assault, and robbery with a dangerous weapon, among other charges, for stealing “pizza, chicken wings, a cell phone and U.S. currency of the value of approximately $600.00” from a Domino’s Pizza delivery driver in Raeford, North Carolina in 2012.37  The victim, who was knocked unconscious, later told the police that two black men with dreadlocks had attacked him in front of a vacant house.  Police lifted fingerprints from the outside of the delivery driver’s car.38  Police later tracked the IP address used to order the pizzas to a house near the crime scene and, once they obtained a warrant and searched the house, they found two empty pizza boxes, an empty chicken box, and labels indicating that the orders were to be delivered to the vacant house where the attack occurred.39  Further latent prints were developed from those pizza and chicken boxes.40

The Fayetteville Police Department latent print examiner began by testifying about her experience: she had worked at the police department since 1990 and as a latent print examiner since 2007.41  She described having taken “several hundred hours” of classes and training seminars, as well as having trained new officers,42 being a member of the International Association for Identification (IAI),43 and having had the experience of comparing “thousands” of latent fingerprints and identifying them with known inked prints.44  The examiner had previously testified six or seven times in state court and three times in federal court,45 and had never been given “non-expert status.”46  The court found the expert qualified, without objection from the defense to her qualifications.47

Next, the expert described the process of latent fingerprint comparison.  She described different items, such as “bifurcations, ending ridges[,]” “enclosures, [and] dots” that examiners look for when they examine fingerprints.48  She then explained how the analysis proceeds: “The way an examination is rendered is you look at that latent print against the known impressions of an individual.”49  Then, “[w]hat you’re looking for are those same characteristics and sequence of the similarities.”50

The examiner concluded that the prints on the car and on the pizza and chicken boxes were all “identified” as coming from McPhaul.51  Going further still, the examiner stated that “[i]t was the left palm of Juan Foronte McPhaul that was found on the back fender portion of the vehicle.”52  Similarly, the examiner stated that the print on the Domino’s chicken wing box was “[t]he right middle finger of Juan Foronte McPhaul,” as were the prints on a bent Domino’s pizza box, while it was his “left middle finger” on a non-bent Domino’s pizza box.53

Those conclusions were remarkably unequivocal and went further than the guidance from leading forensic organizations: they stated categorically that the defendant left the prints in question.  By contrast, the national Scientific Working Group on Friction Ridge Analysis, Study and Technology (SWGFAST) stated that a latent fingerprint examiner should at most report an “individualization,” defined as “the decision that the likelihood the impression was made by another (different) source is so remote that it is considered as a practical impossibility.”54  That language is still exceptionally strong, and scientific groups have pointed out real concerns with it, questioning what is meant by “practical impossibility” and the potentially misleading nature of the term “individualization,” which might convey that one can match a print “to the exclusion of all others” in the population.55  Most recently, the AAAS report from 2017 stated that latent fingerprint examiners should not use terms that imply that a single person was the source of a latent fingerprint.56  This expert not only failed to use that accepted (if still unsatisfactory) SWGFAST language, but went further by categorically stating that it was McPhaul’s print, in an unqualified conclusion that admitted no possibility of error.  That was scientifically improper.  The expert also provided no error rate, nor any other information to qualify the conclusion.

The defense objected to this testimony as potentially unreliable and then argued that they “[did not] have any testimony thus far” as to how this examiner “examined and [came] to . . . conclusions.”57  The judge rejected those objections, but subsequent questioning explored those issues further.

When asked additional questions about how the work was conducted, the expert testified that it involved looking “back and forth,” agreeing that she proceeded by “going back and forth until satisfied” that the prints were a match,58 and that “[w]hat you’re looking for are those same characteristics and sequence of similarities.”59  The expert acknowledged that there is no “set point similarity” in the field, no set number of points that one must find in latent fingerprints, and no “set standard” for how much similarity an examiner must find.60  The examiner also acknowledged that the initial examination of the prints, conducted by another examiner, was not verified by a blind review; instead, she knew what the first examiner had already concluded when she made her review.61

The judge, recognizing that the expert had not “testified as to what she did and how she reached these conclusions,”62 probed further, but was only able to elicit that the expert followed “a comparison process” and conducted an “examination.”63  The expert was unable to say what features of the prints were compared, what process was followed, or what the duration of the examination was.64  The expert simply reiterated that “[m]y conclusions, your Honor, is that the impressions made belonged to Mr. McPhaul.”65  The judge again asked, “What did you do to analyze them?” and the examiner responded, “I did comparisons—side by side comparisons . . . ”66  She could not say what points were found on the prints.67  The defense moved again to strike the testimony, noting “[t]hey have not testified as to how they reached a conclusion” and that “[t]he testimony thus far has been entirely conclusionary.”68  The judge denied the motion to strike,69  and then allowed the state another chance to ask the examiner questions about the process.  The examiner explained that a conclusion on latent fingerprint evidence is reached when “I believe there’s enough sufficient characteristics and sequence of the similarities.”70  Following the testimony of the examiner, the state rested its case.71  The jury convicted McPhaul in October 2015.72

The defendant appealed on several grounds, including that:

The trial court erred when it admitted testimony from the latent fingerprint examiner without first determining that (1) the testimony was based upon sufficient facts or data, (2) the testimony was the product of reliable principles and methods, and (3) the examiner had applied the principles and methods reliably to the facts of the case.73

The defendant did not challenge the general reliability of fingerprinting evidence.  He argued that the expert “provided no testimony prior to offering her opinions that showed she used well established or widely accepted methods in her analysis.”74  Instead, the expert had testified by ipse dixit—that the prints matched because they were found to be a match.  As previously noted, what the expert said was quite abbreviated: “The way an examination is rendered is you look at that latent print against the known impressions of an individual.  What you’re looking for are those same characteristics and sequence of similarities.”75

In response, the government highlighted how this expert had testified as to having had substantial experience, having done latent fingerprinting work in thousands of cases since 2007, having testified many times as an expert, having attended hundreds of hours of training, and having served as a member of the International Association for Identification.76  The government also argued that scientific reports, like the PCAST report, were not embraced by “legal authority.”77  Finally, the government argued that defense counsel could and did provide a “vigorous cross-examination” of the expert at trial.78

The defense also highlighted how the limitations of fingerprint comparison and concerns about its reliability had been acknowledged both by a range of federal courts (although each ultimately admitted the evidence79) and by the National Academy of Sciences report, which itself had been cited by North Carolina courts.80  The PCAST report was a still greater focus of the defense briefing.  The defense noted that, according to the PCAST report, for a scientifically valid fingerprint analysis, an expert must:

(1) undergo relevant proficiency testing and report the results of the proficiency testing; (2) disclose whether she documented the features in the latent print in writing before comparing it to the known print; (3) provide a written analysis explaining the comparison; (4) disclose whether, when performing the examination, she was aware of any other facts of the case that might influence the conclusion; and (5) verify that the latent print is similar in quality to the range of latent prints considered in studies.81

The trial judge had asked a series of questions of the fingerprint examiner, even after the defense and prosecution had questioned the witness, perhaps because, as the defense suggested, the judge had “reservations” about reliability.82  Ultimately, as previously discussed, the expert, according to the defense, “did not testify as to the basis for her conclusion that the prints matched aside from saying she looked back and forth between the prints until she was satisfied.”  Further, the expert “did not document how she came to her conclusions and did not testify as to any similarities between the latent print and the known print.”83

The appellate court ruled for the defense.  Quoting the testimony of the latent fingerprint expert, the court scrutinized how the expert concluded that crime scene prints were “identified as” the same as those taken from the defendant.84  In ruling for the defense on this issue, the court also highlighted that in 2011, the North Carolina legislature had amended Rule 702 to adopt the “federal standard,” including language requiring that an expert have “applied” principles and methods “reliably” in a case.85  When the expert testified about how she reached conclusions in the case, however, she could only say that this was done based on “[m]y training and experience.”86  The appellate court concluded that the expert provided no “detail in testifying how she arrived at her actual conclusions in this case.”87  As a result, the panel held that it was error to admit the testimony, as there was no evidence that the methods and principles were reliably applied.88  The panel found any error to be harmless, however, in light of the other evidence in the case.  Though the victim could not identify McPhaul, and the forensic evidence was admitted in error, there was still McPhaul’s proximity to the wireless network used to order the pizzas, the circumstantial evidence that the stolen items were found in his home, and the similarity of the defendant’s appearance to the victim’s description.89

III. Implications for Rule 702(d) Analysis

The ruling in McPhaul could have been more detailed in its reasoning.  The court did not cite the PCAST report or discuss studies of error rates in latent fingerprinting.  Likewise, there was no discussion of how unqualified the examiner’s conclusions were; the examiner failed to acknowledge the possibility of an error or to comply with the current guidance in the field.  Moreover, the record in the case may limit the ruling’s relevance.  After all, most experts should be able to say something more about their methods and the duration of their evaluation.  This expert said almost nothing, except that a comparison was made based on patterns and minutiae points in fingerprints.

Despite the opinion’s brevity, it could bolster practices requiring careful documentation of forensic examinations.  Both the NAS report and the PCAST report advise careful documentation of all of the work done when conducting forensic analyses like latent fingerprint comparisons.  The PCAST report recommended that “examiners must complete and document their analysis of a latent fingerprint before looking at any known fingerprint, and should separately document any additional data used during comparison and evaluation.”90  While the North Carolina court did not spell out what specific items an expert should document, as the PCAST report did, the decision makes clear that more must be done.

At a more fundamental level, however, even if the expert had adequately described methods, perhaps that still would not be sufficient evidence of reliability.  An expert who relies on experience and training to make a visual comparison is a black box.  A bare conclusion is reached based on an internal subjective threshold, using criteria that cannot be fully explained.  There is no rule for how many similarities must be found between samples, or of what kind, in order to conclude that there is an identification.  For latent fingerprinting and a host of other forensic disciplines, the method is the expert, and the expert is the method.  The same concerns about how reliably an expert performed exist in any case in which the expert uses a black box method that is at least partly subjective.  What the McPhaul opinion could have discussed, moreover, was how the expert’s categorical conclusions—of a type that the AAAS report clearly stated was not appropriate—cannot be supported even by the existing principles and methods of latent fingerprinting.91  Those methods, while still lacking necessary safeguards, permit an examiner to observe similarities and to conclude that a donor cannot be excluded as a source, but they do not permit an examiner to conclude that an individual was in fact the source.92

Conclusion

The McPhaul opinion brings to the foreground the concern that forensic experts commonly testify as a black box, presenting conclusions based on subjective judgments without presenting any objective basis for those conclusions.  A judge may inquire into the basis for their opinions, and yet learn little from even a responsive expert, as in the McPhaul case itself.  If a technique is a black box, the expert may not be able to say much about how conclusions were reached, except to state that they were reached based on experience and judgment.  What should a judge do then?

The McPhaul opinion does not provide a roadmap for judges to elicit a record that allows them to conclude whether an expert reliably applied a method to the facts of the case.  Scientific sources, such as the PCAST report, do provide that guidance.  Judges should demand full records concerning an expert’s methods and what evidence they relied upon in their analysis, though that still would not entirely open up the black box.  After all, what entitles the expert to conclude, after having observed visual similarities, that two prints are identified as coming from the same source to some degree of likelihood?  Such a conclusion is difficult to verify, even where one knows in greater detail what the inputs were.  That is why, in order to truly address the concerns raised in both McPhaul and the PCAST report, a judge should require that an expert disclose full documentation of every step in their examination process and qualify their conclusions.

In addition, experts should routinely undergo and report the results of blind, rigorous proficiency testing that represents the “full range of latent fingerprints encountered in casework,” and that ensures that any examiner is in fact making accurate judgments.93  Further, when these types of forensic evidence are admitted, as the PCAST report noted, jurors should hear about error rates, which are “substantial” in the area of fingerprinting and “likely to be higher than expected by many jurors based on longstanding claims about the infallibility of fingerprint analysis.”94  Finally, the language used to express results should be appropriate to the principles and methods used.

The reliable application of an expert method to the facts in a particular case remains a neglected prong of Rule 702 and Daubert analysis.  Judges should carefully examine not just whether a method is generally reliable, but the reliability of a particular expert and the work done in a particular case.  The McPhaul decision represents a new judicial focus on what goes on inside the black box.  For that reason, the ruling should send an important signal to practicing lawyers, judges, and forensic practitioners that the reliable application of principles and methods to the facts matters.



[1].       State v. McPhaul, 808 S.E.2d 294 (N.C. Ct. App. 2017).

[2].       See, e.g., Brandon L. Garrett and Gregory Mitchell, How Jurors Evaluate Fingerprint Evidence: The Relative Importance of Match Language, Method Information and Error Acknowledgement, 10 J. Empirical Legal Stud. 484 (2013).

[3].       179 F. Supp. 2d 492 (E.D. Pa. 2001), vacated, 188 F. Supp. 2d 549 (E.D. Pa. 2002) (vacating prior order excluding fingerprint evidence under Daubert).  See also Jennifer L. Mnookin, Fingerprints: Not a Gold Standard, 20 Issues in Sci. & Tech. 47 (2003).

[4].       State v. Rose, No. K06-545, 2007 WL 4358047 (Cir. Ct. Md. 2007).

[5].       Press Release, U.S. Attorney’s Office Md., Brian Rose Pleads Guilty to January 2006 Murder of Carjacking Victim (Jan. 11, 2010) https://www.justice.gov/archive/usao/md/news/archive/BrianRosePleadsGuiltytoJanuary2006MurderOfCarjackingVictim.html [https://perma.cc/4KJX-SVPQ].

[6].       See Brandon L. Garrett and Gregory Mitchell, The Proficiency of Experts, 166 U. Pa. L. Rev. (forthcoming 2018).

[7].       Daubert v. Merrell Dow Pharm., Inc., 509 U.S. 579, 591 (1993).

[8].      Fed. R. Evid. 702(d) (requiring that “the expert has reliably applied the principles and methods to the facts of the case”); Comm. on Rules of Practice and Procedure, Report of the Advisory Committee on Evidence Rules (1999).

[9].       See, e.g., Peter J. Neufeld, The (Near) Irrelevance of Daubert to Criminal Justice: And Some Suggestions for Reform, 95 Am. J. Pub. Health S107 (2005).

[10].    Id.; see also Edward K. Cheng & Albert H. Yoon, Does Frye or Daubert Matter? A Study of Scientific Admissibility Standards, 91 Va. L. Rev. 471, 503 (2005).

[11].    Paul C. Giannelli, Forensic Science: Under the Microscope, 34 Ohio N.U. L. Rev. 315, 322 (2008).

[12].    For a detailed analysis of those rulings, see Garrett and Mitchell, supra note 6.

[13].    United States v. Mitchell, 365 F.3d 215, 240–41 (3d Cir. 2004).

[14].    United States v. Baines, 573 F.3d 979, 989–92 (10th Cir. 2009).

[15].    Simon A. Cole, Grandfathering Evidence: Fingerprint Admissibility Rulings From Jennings to Llera Plaza and Back Again, 41 Am. Crim. L. Rev. 1189, 1195 (2004).

[16].    State v. Favela, 323 P.3d 716, 718 (Ariz. Ct. App. 2014) (emphasis added).

[17].    See President’s Council of Advisors on Sci. & Tech., Exec. Office of the President, Forensic Science in Criminal Courts: Ensuring Scientific Validity of Feature-Comparison Methods 9–11 (2016) [hereinafter PCAST Report]; Am. Ass’n for the Advancement of Sci., Latent Fingerprint Examination: A Quality and Gap Analysis (2017) [hereinafter AAAS Report].

[18].   See, e.g., Joseph L. Peterson & Penelope N. Markham, Crime Laboratory Proficiency Testing Results, 1978–1991, II: Resolving Questions of Common Origin, 40 J. Forensic Sci. 1009 (1995).

[19].    See, e.g., Simon A. Cole, More Than Zero: Accounting for Error in Latent Fingerprint Identification, 95 J. Crim. L. & Criminology 985, 1043, 1048 (2005).

[20].    See id. at 1030, 1043, 1048; Jonathan J. Koehler, Fingerprint Error Rates and Proficiency Tests: What They Are and Why They Matter, 59 Hastings L.J. 1077, 1077 (2008); Garrett and Mitchell, supra note 6 (describing results of 1990s latent fingerprint proficiency tests); see also, e.g., United States v. Havvard, 117 F. Supp. 2d 848, 854 (S.D. Ind. 2000), aff’d, 260 F.3d 597 (7th Cir. 2001).

[21].    PCAST Report, supra note 17, at 28.

[22].    Office of the Inspector Gen., U.S. Dep’t of Justice, A Review of the FBI’s Handling of the Brandon Mayfield Case: Unclassified Executive Summary 9, 270–71 (2006).

[23].    See Comm. on Identifying the Needs of the Forensic Scis. Cmty., Nat’l Research Council, Strengthening Forensic Science in the United States: A Path Forward (2009) [hereinafter NAS Report].

[24].    Id. at 142.

[25].    Id.

[26].    Id. at 5–13.

[27].    Id.

[28].    Id. at 5–13, 14.

[29].    PCAST Report, supra note 17, at 6 (examining the adequacy of scientific standards for, and the validity and reliability of, forensic “feature-comparison” methods, specifically methods for comparing DNA samples, bitemarks, latent fingerprints, firearm marks, footwear, and hair).

[30].    Id. at 9–10.

[31].    See Brandon L. Garrett and M. Chris Fabricant, The Myth of the Reliability Test, 86 Fordham L. Rev. 101 (2018).

[32].    Garrett and Mitchell, supra note 6 (describing how courts rarely consider proficiency when qualifying experts or when examining the reliability of expert methods).

[33].    AAAS Report, supra note 17, at 11.

[34].    Id.

[35].    Id.

[36].    State v. McPhaul, 808 S.E.2d 294 (N.C. Ct. App. 2017) (No. COA 16-924).

[37].    Substitute Record on Appeal at 10, State v. McPhaul, 808 S.E.2d 294 (N.C. Ct. App. 2017) (No. COA 16-924).

[38].    Defendant-Appellant’s Brief at 4, McPhaul, 808 S.E.2d 294 (No. COA 16-924).

[39].    Id. at 7.

[40].    Id. at 8–9.

[41].    Trial Transcript at 597–98, McPhaul, 808 S.E.2d 294 (No. COA 16-924) (on file with author).

[42].    Id. at 598–99.

[43].    Id. at 599.

[44].    Id. at 601.

[45].    Id. at 602.

[46].    Id.

[47].    Id.

[48].    Id.

[49].    Id. at 604.

[50].    Id.

[51].    Defendant-Appellant’s Brief, supra note 38, at 8.

[52].    Trial Transcript, supra note 41, at 608.

[53].    Id. at 613–15.

[54].    Expert Working Grp. on Human Factors in Latent Print Analysis, Nat’l Inst. of Standards & Tech. & Nat’l Inst. of Justice, Latent Print Examination and Human Factors: Improving the Practice Through a Systems Approach 72 (2012).

[55].    See id.

[56].    Id.

[57].    Trial Transcript, supra note 41, at 608.

[58].    Defendant-Appellant’s Brief, supra note 38, at 8, 9.

[59].    Id. at 27.

[60].    Trial Transcript, supra note 41, at 624.

[61].    Id. at 625.

[62].    Id. at 631.

[63].    Defendant-Appellant’s Brief, supra note 38, at 9.

[64].    Id. at 9–10.

[65].    Trial Transcript, supra note 41, at 632.

[66].    Id. at 633.

[67].    Id. at 634.

[68].    Id.

[69].    Id.

[70].    Id. at 638.

[71].    Id. at 645.

[72].    Substitute Record on Appeal, supra note 37, at 62.

[73].    Id. at 96.

[74].    Defendant-Appellant’s Reply Brief at 3, State v. McPhaul, 808 S.E.2d 294 (N.C. Ct. App. 2017) (No. COA 16-924).

[75].    Id. at 5.

[76].    Brief for the State at 14, McPhaul, 808 S.E.2d 294 (No. COA 16-924).

[77].    Id. at 17.

[78].    Id. at 18.

[79].    Defendant-Appellant’s Brief, supra note 38, at 30–31 (citing United States v. Crisp, 324 F.3d 261, 269–70 (4th Cir. 2003)); see also United States v. Baines, 573 F.3d 979, 990 (10th Cir. 2009).

[80].    See State v. Ward, 694 S.E.2d 738, 743 (N.C. 2010) (noting “the field of forensic science had come under acute scrutiny on a nationwide basis”).

[81].    Defendant-Appellant’s Brief, supra note 38, at 31 (quoting [small-caps]PCAST Report[end-small-caps] at 10).

[82].    Defendant-Appellant’s Reply Brief, supra note 74, at 8.

[83].    Id. at 10.

[84].    State v. McPhaul, 808 S.E.2d 294, 305 (N.C. Ct. App. 2017) (quoting the expert testimony).

[85].    Id. at 303–304.  The North Carolina rule states: “(a) If scientific, technical or other specialized knowledge will assist the trier of fact to understand the evidence or to determine a fact in issue, a witness qualified as an expert by knowledge, skill, experience, training, or education, may testify thereto in the form of an opinion, or otherwise, if all of the following apply: (1) The testimony is based upon sufficient facts or data. (2) The testimony is the product of reliable principles and methods. (3) The witness has applied the principles and methods reliably to the facts of the case.”  N.C. Gen. Stat. § 8C-1, Rule 702.

[86].    McPhaul, 808 S.E.2d at 304.

[87].    Id. at 305.

[88].    Id. (finding error to be harmless, however, given other evidence in the case tying the defendant to the crime scene).

[89].    Id.

[90].    PCAST Report, supra note 17, at 135.

[91].    AAAS Report, supra note 17, at 11.

[92].    Id.

[93].    PCAST Report, supra note 17, at 149.

[94].    Id.

About the Author

White Burkett Miller Professor of Law and Public Affairs, Justice Thurgood Marshall Distinguished Professor of Law, University of Virginia School of Law
