Beyond Cross-Examination: A Response to Cheng and Nunn

Response - Online Edition - Volume 97

In their thoughtful and important article, Beyond the Witness: Bringing a Process Perspective to Modern Evidence Law, Ed Cheng and Alex Nunn make a convincing case that existing evidence law’s focus on witnesses is antiquated and wrongheaded.[1] As they explain, modern proof consists increasingly of evidence that involves standardized processes rather than what the authors call “ad hoc” human judgment. Whereas a trial in 1719 centered mainly on eyewitnesses, confessions, and the like, a trial in 2019 often turns just as much, if not more, on the reliability of forensic reports, chemical tests, and machine-generated records. In turn, the rules of evidence don’t quite know what to do with such “process-based” proof. The safeguards created for human witnesses—cross-examination, physical confrontation, and the oath—are poor fits for evidence that has a significant process component.

Notwithstanding Cheng and Nunn’s compelling case for better rules to govern process-based proof, their Article contains two misconceptions that, I think, hinder the Article’s chances of being as persuasive and impactful as it could be. Those two misconceptions are: (1) that each piece of evidence has one “actual source” that is either a person or a process, and that the designation of evidence as either “person-based” or “process-based” should thus determine which of two sets of evidentiary safeguards applies;[2] and (2) that cross-examination and other trial safeguards are the most appropriate means of testing and ensuring the reliability of “person-based” evidence, and thus that the “world of witnesses” should “remain[] intact” even after the “process-based revolution” envisioned by the authors.[3]

In the pages that follow, I explain why these are misconceptions and why they matter to the future of evidence law. In doing so, however, I also hope to convey to the reader why Cheng and Nunn’s contribution is nonetheless so welcome and critical. I ultimately offer an alternative way of thinking about all evidence, one that moves beyond an artificial dichotomy between “persons” and “processes” and instead focuses on ensuring that the jury has whatever contextual information it needs to meaningfully assess the probative value of the evidence. As it turns out, many of the wonderful, concrete suggestions offered by Cheng and Nunn for scrutinizing “process-based” evidence—beyond cross-examination, toward enhanced discovery and other tools—would be equally beneficial if applied to human testimony.

I. The False Dichotomy Between Person-Based and Process-Based Evidence

Cheng and Nunn’s Article divides the world into “person-based” and “process-based” evidence. For example, they state that a forensic hair analyst’s report “does not involve process-based evidence at all.”[4] Instead, it is “witness-based, plain and simple.”[5] As a result, in the authors’ view, only the “conventional” courtroom safeguards need apply to it.[6] On the other hand, the authors describe the DNA profile developed from a rape kit by Cellmark Laboratories in Williams v. Illinois[7] as entirely process-based evidence, not person-based.[8] Thus, in the authors’ view, such evidence need not be presented through the live testimony of a Cellmark analyst and need only be dealt with through safeguards for process-based proof.[9]

As it turns out, however, the universe of proof cannot be so neatly divided. Instead, the conveyance of information generally involves a complex mixture of sources—what some scholars have called “distributed cognition.”[10] For example, a forensic hair analyst’s report combines ad hoc judgment and observation; extensive training on a method that is subjective but involves standardized elements; studies performed by others; learned treatises; the advice of mentors; protocols of the office in which the expert works; and a cognitive process that might be tainted in predictable and well-studied ways if the analyst has looked at the reference sample before or while interpreting the characteristics of the questioned sample.[11] Likewise, the Cellmark analyst’s DNA report in Williams involved not only standardized machine processes but also standardized human-created protocols about which peaks to treat as computer noise and which to treat as true genetic markers, as well as human ad hoc judgments about when to override those standardized protocols based on special circumstances.[12] To take an example beyond men and machines, imagine an accelerant-detection dog that alerted on a particular substance found in a home after a fire. The alert may be a combination of the dog’s ad hoc judgment; the dog’s training and experiences; and the dog handler’s conscious or unconscious signaling. To call the Cellmark analyst’s report or the dog’s alert “process-based” is in one sense accurate but fails to capture the importance of the non-standardized human or animal inputs in each. On the other hand, to call the forensic hair analyst’s testimony “person-based” is in one sense accurate but fails to capture the standardized aspects of her testimony that might be more important to meaningfully scrutinizing her claims.

Even regular old lay testimony, unaided by instruments, often involves standardized processes or more than one person’s cognition. An eyewitness who identifies a suspect from a photographic array, for example, conveys information that is the product not only of the witness’s observations and judgment but also of the arrangement and operation of the photographic array (such as the use or nonuse of sequential unmasking) and the cognitive process of recalling a previous memory upon seeing a photograph. For precisely these reasons, the Innocence Project has successfully argued that eyewitness testimony should be treated more like “trace evidence,” subject to safeguards built for physical evidence, rather than simply a human assertion subject to courtroom safeguards.[13] Moreover, Federal Rule of Evidence 703, which allows experts to testify to opinions based on hearsay or other inadmissible evidence,[14] reflects an implicit understanding that all expert testimony involves distributed cognition. Without Rule 703, an opponent might be able to insist upon scrutinizing every person or thing the expert relied on in rendering an opinion, from the expert’s graduate advisor to the articles the expert has read to the assertions of informants whom the expert has relied on for background information.

To be sure, the fact that Cheng and Nunn’s person/process dichotomy might be a false one does not necessarily render their reform proposals any less compelling. After all, their central insight that courtroom safeguards are inadequate to scrutinize the process-based aspects of modern proof is spot on, and their suggested reforms would improve both accuracy and efficiency. For example, their call for a “process-based hearsay rule” that would allow opponents to “directly scrutinize” standardized processes is well taken.[15] And they are surely right that evidence law currently “fetishize[s] the witness” by its irrational insistence upon live human testimony as a prerequisite to admitting largely standardized and well-contextualized evidence such as business records and learned treatises.[16]

But the authors’ insistence upon this dichotomy does lead them to conclude, incorrectly in my view, that courtroom safeguards are a waste of time whenever evidence has a significant, standardized process-based element. For example, they argue that Melendez-Diaz v. Massachusetts,[17] which held that a forensic drug chemist’s affidavit certifying the reliability of a machine result was inadmissible under the Confrontation Clause absent the chemist’s live testimony, was wrongly decided because the evidence was simply process-based.[18] But the evidence was not simply process-based. Rather, it was a product of distributed cognition: a machine result, but also a human analyst’s personal assurance, under oath, that the machine was properly calibrated, cleaned, and maintained upon use.[19]

Of course, the Melendez-Diaz chemist’s personal assurance was itself somewhat standardized, given that the chemist performs and memorializes this ritual as a routine practice. The authors are surely right that the chemist has no memory, on the witness stand months later, of this case. But the fact that cross-examination is not likely to be fruitful does not mean that the analyst’s assurances (above and beyond the machine result) need not be scrutinized. Rather, it simply means that such assurances cannot be meaningfully scrutinized through cross-examination.[20] Assuming that is true, we should either be more creative in fashioning safeguards for such human assertions, such as mandatory preservation and discovery of maintenance and contamination logs, or satisfy ourselves that the jury has enough information to assess the probative value of the assertion without further safeguards. Put differently, the answer should be to think about what the jury needs to accurately analyze the claims made, rather than to artificially label the entire testimony as “process-based” and categorically immunize the ad hoc, non-standardized element from scrutiny.

II. The Unfounded Assumption That Courtroom Safeguards Are Still the Best Means of Scrutinizing Witness Credibility

The authors’ strict person/process dichotomy also leads them, ironically, to undersell their own reforms by declining to apply their proposals to process-based elements of otherwise “person-based” human testimony. Even after offering what they call a “somewhat radical” rethinking of evidence law, the authors end up being largely conventional with respect to human testimony.[21] Specifically, they assume that the “world of witnesses,” and the world of courtroom safeguards as the sole means of scrutinizing “person-based” evidence, should “remain[] intact.”[22]

But once one accepts that human testimony, like all evidence, is often a complex mixture of ad hoc personal judgment and standardized processes, it becomes clear that human testimony could benefit from many of the safeguards Cheng and Nunn suggest for process-based evidence. For example, a defendant facing an eyewitness identification from a photographic array should presumably be given more than simply the opportunity to cross-examine the witness in open court. Surely more important to the jury’s ability to assess the accuracy of the identification would be “enhanced discovery”[23]—the authors’ term—with respect to the photographic array procedure: how it was conducted (ideally memorialized by videotape or otherwise), a copy of the protocols used, and access to expert testimony on subjects like cross-racial identification to the extent relevant. These are the tools the jury likely needs to better assess the evidence, more so than a dramatic but ultimately fruitless cross-examination of a likely sincere eyewitness.

The black-box testing and discovery of error rates that the authors call for would also be critical not just for machine results but for subjective human determinations as well. Even the authors acknowledge that black-box testing “work[s] for a human-involved process as well,” such as a perfume tester who differentiates between fragrances using his nose.[24] If the perfume tester “simply smelled” two fragrances “and made a determination without any standards or testing, that would fail the testing factor.”[25] Curiously, however, the conclusion the authors draw from this example is that “[t]hat kind of subjective observation is more appropriately dealt with as witness-based evidence.”[26] But why? Wouldn’t the nose-tester’s error rate be precisely what the jury would need to better assess the probative value of his testimony, rather than a generic cross-examination about the lack of standards?

This concern is far from hypothetical; most forensic methods other than DNA typing are subjective, relying heavily on the examiner’s judgment and experience. The recent PCAST report noted how critical black-box testing and false positive rates are to assessing the validity of such methods.[27] The response of the National District Attorneys Association (NDAA) to the PCAST report was that these methods are not necessarily “scientific” enough to require a showing of “scientific validity” through black-box testing.[28] But even the NDAA did not suggest that cross-examination of the expert alone should be sufficient.[29]

It is true that the accuracy of some subjective conclusions is not easily testable through validation studies. The authors use the example of a perfume tester who “reports smelling apples with hints of raspberry,” concluding that because such a subjective pronouncement is difficult to test through black-box studies, it must be “witness-based, not process-based.”[30] But software also sometimes produces results not easily tested against a ground truth, such as credit scores or, in the forensic DNA context, likelihood ratios. While black-box testing can tell you how many times a DNA expert system incorrectly labels a non-contributor as a contributor, it cannot as easily tell you whether a reported likelihood ratio of 10 trillion is off by a factor of ten.[31] And yet, Cheng and Nunn would surely label such software-rendered opinions “process-based.”
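To make the contrast concrete, consider a minimal sketch, using entirely hypothetical data, of what a black-box study can and cannot measure. A study can count how often an examiner or expert system declares a match for samples known to come from different sources, yielding a false positive rate; but there is no analogous ground truth against which to check a reported likelihood ratio.

```python
# Minimal sketch of a black-box validation study (hypothetical data, not drawn
# from any real study). Each trial presents a comparison whose ground truth is
# known ("same source" or "different source") and records the declared result.

trials = [
    # (ground_truth_same_source, declared_match)
    (False, False), (False, False), (False, True),   # one false positive
    (True, True), (True, True), (True, False),       # one false negative
    # ... a real study would involve hundreds of known-source comparisons
]

false_positives = sum(1 for same, match in trials if not same and match)
known_different = sum(1 for same, _ in trials if not same)
print(f"False positive rate: {false_positives / known_different:.0%}")

# By contrast, a probabilistic genotyping program reports a likelihood ratio,
# say 1e13. Black-box testing can check categorical errors (labeling a known
# non-contributor as a contributor), but there is no "true" likelihood ratio
# to compare against, so it cannot tell us whether 1e13 should have been 1e12.
reported_likelihood_ratio = 1e13
```

The point is the asymmetry, not the arithmetic: a categorical error rate is measurable in principle for humans and machines alike, while the calibration of a reported likelihood ratio cannot easily be checked against any ground truth.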

The principle that emerges from these examples, with which I doubt Cheng and Nunn would ultimately disagree, is that the need for black-box testing and error rates should not turn on whether the evidence is predominantly “person-based” or “process-based.” Rather, the need for black-box testing should turn on whether the resulting false positive rates would give the jury contextual information critical to assessing the probative value of the evidence, however one might label it. There will be standardized elements of evidence—like machine-generated credit scores—that are not easily testable through black-box studies and that will require more creative safeguards, such as robust testing and disclosure of the key assumptions underlying the software. And there will be ad hoc elements of evidence—like the subjective component of forensic hair analysis—that are, in fact, conducive to being tested through black-box studies.

III. Beyond Cross-Examination: Toward a “Black Box” Approach to All Evidence

What would happen if we eschewed labels such as “person-based” and “process-based,” and the separate regimes of safeguards that flow from those labels, and instead simply thought of evidence as “evidence,” most of which is a complex mixture of ad hoc judgments and standardized processes? In turn, what if we determined what safeguards are necessary for various categories of evidence not based on whether we think of them as “person-based” or “process-based,” but simply on whether the jury needs further contextual information to open up the “black box” and fairly determine the evidence’s probative value?[32]

Such a “black box” approach to all evidence would lead to a more coherent statutory and constitutional evidence law. Take, for example, the Confrontation Clause. Cheng and Nunn argue, correctly in my view, that the right of confrontation should mean more than simply courtroom safeguards, because such safeguards are largely meaningless as a way of directly scrutinizing process-based proof.[33] In so arguing, Cheng and Nunn correctly recognize that the word “witness”—a term unquestionably associated in the Framers’ minds only with humans—might include non-persons.[34] But they fail to take their analysis to its logical conclusion: the right of confrontation in general, even when applied to human sources, should be broader than simply the courtroom safeguards of the oath, physical confrontation, and cross-examination. If one takes Cheng and Nunn’s arguments seriously, then one should agree that the Sixth Amendment should guard not only against wrongful convictions stemming from an inability to cross-examine a human witness, or from under-scrutinized machine results, but also against those stemming from any “black box” evidence, whether human or machine or animal or a combination of those, where the jury lacks contextual information necessary to meaningfully assess the evidence’s probative value.

Such a broader view of confrontation would not pose insurmountable barriers to the admission of evidence any more than would the authors’ already “somewhat radical” proposals. For example, the admission of hearsay mercantile records shown to be the result of a routine business practice would still pose no Confrontation Clause or hearsay problem, because the jury has sufficient context to assess such evidence without the live testimony of the declarant or further information about the process. On the other hand, when the probative value of human testimony rests largely on standardized processes that cannot easily be scrutinized in the courtroom, such as an eyewitness identification resulting from a photographic array procedure, then the right of confrontation might need to include the right of access to information about the procedure, or the right to consult an expert who can shed light on recurring cognitive limitations of eyewitnesses under particular circumstances. Or the right of confrontation might include a right of access to prior statements of government witnesses or declarants on the same subject matter, rather than simply the right of cross-examination.[35]

A constitutionally enshrined right to “enhanced discovery” with respect to human witnesses might seem radical at first, but perhaps less so if one recognizes cross-examination and physical confrontation as merely forms of discovery themselves, albeit discovery that occurs in real time in the courtroom rather than pretrial in an office or laboratory. If the ultimate goal of confrontation is to ensure that the jury has sufficient contextual information to assess the government’s evidence, then no particular form of discovery—pretrial, trial, or otherwise—would seem to hold any sacred status in that regard. The only way to justify a narrow conception of confrontation as requiring only courtroom safeguards is historical, focused on the Framers’ understanding of the common-law right of confrontation. But by recognizing that “witnesses” can include non-humans and that confrontation should include “enhanced discovery” about standardized processes, Cheng and Nunn have already justifiably moved beyond what confrontation looked like in 1791.[36]

For the record, though, fetishizing cross-examination would have been unjustified even in 1791, and perhaps the Framers of the Sixth Amendment were indeed thinking more broadly.[37] In any event, one might forgive the courts and commentators of yore for a myopic focus on cross-examination. After all, a defendant in 1791 did not have the chance to offer an expert witness on cross-racial identification or the suggestivity of photographic array procedures that fail to use sequential unmasking. As our understanding of human cognition grows, so does the contextual information that a jury needs to assess human testimony. Cheng and Nunn are absolutely right that “[t]he legal system ought not to fetishize the witness.”[38] But the legal system also ought not to fetishize cross-examination.

***

At the end of the Article, the authors wonder whether human witnesses will be phased out entirely in favor of “a systems approach,” a term they apparently use to mean a shift toward standardization.[39] But a “systems approach,” to a NASA engineer, means something very different. It means an interface—a cyborg, if you will—in which man and machine work together.[40] We do not need to reach into science fiction for this framing; as this response has attempted to argue, we have always had trial by cyborg. That is, most witnesses—human, machine, or animal—convey information based on distributed cognition, relying on a complex mixture of ad hoc judgments and more standardized cognitive or mechanical processes. Cheng and Nunn’s Article offers an important and exciting contribution to imagining how best to safeguard against inferential error from standardized processes. Taking their arguments seriously, the next step will be to eschew the person/process dichotomy entirely and focus on guarding against inferential errors from all types of evidence, whatever their combination of human, mechanical, animal, and natural sources. If—and only if—we do that will we be living up to the spirit underlying the Sixth Amendment and building a coherent evidence code for the modern age.

  1. Edward K. Cheng & G. Alexander Nunn, Beyond the Witness: Bringing a Process Perspective to Modern Evidence Law, 97 Texas L. Rev. 1077 (2019).
  2. See id. at 1089–91.
  3. See id. at 1091, 1122.
  4. Id. at 1111.
  5. Id.
  6. Id.
  7. 567 U.S. 50 (2012).
  8. Cheng & Nunn, supra note 1, at 1112.
  9. Id.
  10. See, e.g., Itiel E. Dror & Jennifer L. Mnookin, The Use of Technology in Human Expert Domains: Challenges and Risks Arising from the Use of Automated Fingerprint Identification Systems in Forensic Science, 9 Law, Probability & Risk 47, 48–49 (2010) (explaining “distributed cognition” in the context of human expert testimony). The term was coined by cognitive scientist Edwin Hutchins and colleagues at the University of California, San Diego in the early 1990s. See Yvonne Rogers & Judi Ellis, Distributed Cognition: An Alternative Framework for Analysing and Explaining Collaborative Working, 9 J. Info. Tech. 119, 119 (1994) (noting that Hutchins and his colleagues developed the concept and citing several of their articles).
  11. See generally Itiel Dror, Biases in Forensic Experts, 360 Sci. 243 (2018) (explaining the psychological mechanisms leading to contextual bias in forensic experts).
  12. See, e.g., Roberts v. United States, 916 A.2d 922, 933–34 (D.C. 2007) (discussing the FBI’s default interpretation settings and noting that the analyst’s decision to override the default “stutter” threshold was dispositive in Roberts’s case as to whether he was the source of the rape kit DNA); John M. Butler, Advanced Topics in Forensic DNA Typing: Interpretation 169–73 (2015) (demonstrating the need for analyst judgments on the presence or absence of allelic drop-in and drop-out). See generally Erin Murphy, The Art in the Science of DNA: A Layperson’s Guide to the Subjectivity Inherent in Forensic DNA Typing, 58 Emory L.J. 489 (2008) (discussing the role of subjective analyst judgment in determining forensic DNA profiles).
  13. See Brief for The Innocence Project as Amicus Curiae Supporting Respondent at 30–33, State v. Henderson, 27 A.3d 872 (N.J. 2011) (No. 62,218) (arguing that eyewitness testimony is analogous to physical “trace” evidence that requires a chain-of-custody approach).
  14. Fed. R. Evid. 703.
  15. Cheng & Nunn, supra note 1, at 1109.
  16. Id. at 1122.
  17. 557 U.S. 305 (2009).
  18. Cheng & Nunn, supra note 1, at 1094–95, 1110; see Melendez-Diaz, 557 U.S. at 309–11, 329. The authors incorrectly describe Melendez-Diaz as holding that “the prosecution [must] produce a live witness when presenting forensic evidence.” Cheng & Nunn, supra note 1, at 1110. On the contrary, the Court rejected the state’s attempt to portray the chemist’s affidavit as merely process-based. See Melendez-Diaz, 557 U.S. at 318–19 (noting the potential for forensic fraud and other motivations for the chemist to make false claims); see also Bullcoming v. New Mexico, 564 U.S. 647, 659–60 (2011) (rejecting the lower court’s conclusion that the blood analyst who authored the hearsay certification was a “mere scrivener” restating the machine result).
  19. See Melendez-Diaz, 557 U.S. at 320 (noting the only evidence from the analyst was an affidavit with “the bare-bones statement that ‘[t]he substance was found to be cocaine: Cocaine.’”); id. at 333 (Kennedy, J., dissenting) (highlighting all who contribute to such a result, including one who ensures the machine used was properly calibrated).
  20. David Sklansky notes that none of the wrongful convictions based on misleading forensic evidence involved hearsay evidence, and he argues that Crawford’s and Melendez-Diaz’s conception of confrontation is a cramped one that overly emphasizes live testimony rather than what would actually help defendants—greater access to laboratory testing and processes themselves. See David Sklansky, Hearsay’s Last Hurrah, 2009 Sup. Ct. Rev. 1, 18, 49–52, 55, 72–74.
  21. See Cheng & Nunn, supra note 1, at 1122.
  22. Id.
  23. See id. at 1080.
  24. Id. at 1115.
  25. Id.
  26. Id.
  27. President’s Council of Advisors on Science and Technology, Report to the President, Forensic Science in Criminal Courts: Ensuring Scientific Validity of Feature-Comparison Methods 48–49 (Sept. 2016) (observing the importance of black-box testing for subjective methods relying on “substantial human judgment”).
  28. Letter from National District Attorneys Association to President Barack Obama in Response to PCAST Report 2–3 (Nov. 16, 2016), www.ciclt.net/ul/ndaajustice/PCAST/NDAA%20PCAST%20Response%20FINAL.pdf [https://perma.cc/768Z-ET7F].
  29. To say that cross-examination is insufficient is not to say that it is unnecessary; cross-examination on subjectivity and bias can matter. See, e.g., William C. Thompson & Nicholas Scurich, How Cross-Examination on Subjectivity and Bias Affects Jurors’ Evaluations of Forensic Science Evidence, 64 J. Forensic Sci. (forthcoming 2019), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3320824 [https://perma.cc/A23T-PEAG].
  30. Cheng & Nunn, supra note 1, at 1116.
  31. See Christopher D. Steele & David J. Balding, Statistical Evaluation of Forensic DNA Profile Evidence, 1 Ann. Rev. Stat. & Its Application 361, 380 (2014) (noting that a black box study is “infeasible for software aimed at computing a[] [likelihood ratio] because it has no underlying true value (no equivalent to a true concentration exists)”).
  32. I am, of course, not the first to think of evidence law in this way. See, e.g., Eleanor Swift, A Foundation Fact Approach to Hearsay, 75 Cal. L. Rev. 1339, 1343 (1987) (arguing for an approach to hearsay that focuses on giving factfinders sufficient context about a statement’s meaning, rather than on excluding unreliable assertions).
  33. See Cheng & Nunn, supra note 1, at 1110–11.
  34. See id. at 1088–90 (noting that witness could include non-persons).
  35. See, e.g., Jencks v. United States, 353 U.S. 657, 668–69 (1957) (requiring, pursuant to the Court’s supervisory power, that the government produce prior statements of its witnesses); Palermo v. United States, 360 U.S. 362 (1959) (Brennan, J., concurring) (stating that Jencks had “constitutional overtones” grounded in the “common-law rights of confrontation”). But see Jencks Act, 18 U.S.C. § 3500 (2016) (superseding Jencks and placing temporal and subject matter limits on the right to discover prior statements of witnesses).
  36. See Cheng & Nunn, supra note 1, at 1080.
  37. See, e.g., Ronald J. Allen, From the Enlightenment to Crawford to Holmes, 39 Seton Hall L. Rev. 1, 12 (2009) (“There is no reason to think that the Sixth Amendment reflects a fetish for cross-examination rather than a concern about reliability during a time when unreliable outcomes were relatively easy to manufacture.”).
  38. Cheng & Nunn, supra note 1, at 1122.
  39. Id. at 1123–24.
  40. See, e.g., Andrea Roth, Trial by Machine, 104 Geo. L.J. 1245, 1252 (2016) (advocating a “trial by cyborg” or “systems” approach to criminal adjudication). See generally David A. Mindell, Digital Apollo: Human and Machine in Spaceflight (2008) (describing the history of human-machine interface in space travel).