Predatory Journals, Retractions, and Manipulation: Problems with and Consequences of Relying on Peer Review in Evaluating Expert Testimony

Article - Online Edition - Volume 103

 

Introduction

Judges serve as gatekeepers for scientific evidence. In 2022, the Advisory Committee on Evidence Rules proposed amendments to Rule 702 of the Federal Rules of Evidence (effective in late 2023) to emphasize that role.[1] To perform their gatekeeping function, judges consider a range of factors intended to guide them toward a useful evaluation of the reliability of a proposed expert witness’s methodology.

This task requires judges, who often have no scientific or mathematical background, to sift through competing views of largely ambiguous factors—except for, it seems, one factor: peer review, which seems more concrete. Courts are likely to continue to find peer-reviewed publication appealing as a partial proxy for legitimacy for a simple reason: Peer review (it seems) is either there or it isn’t, and it’s understandable for courts to grab onto that bit of certainty, that apparent binary.

Except peer review isn’t certain, and it isn’t binary, and the literature universe is a bit of a mess. Indeed, experts may be able to manipulate or outright fabricate the presence and extent of peer review. The term “peer review,” in fact, reflects a broad spectrum of activities. It can mean extensive and detailed review of the methodology (and its application) by actual peers for legitimate journals. But it also sometimes means review that isn’t just sparse but non-existent, where authors put money into a journal-shaped machine to get their article out in something that looks like a legitimate publication—maybe close enough to get credited as one if not scrutinized carefully. And there’s a lot in the middle, where there are some (kind of) peers providing some (sort of) review, but the depth, scope, and nature of the review can vary wildly, as can its ability to catch errors and misleading assertions, whether intentional or inadvertent. The dramatic increase in retractions (over 10,000 last year alone) is additional evidence that an article having the “Peer Review” stamp on it should properly be the start of the inquiry, not the end.[2]

Here, I address two consequences of this reliance by courts on peer review as a partial proxy for reliability. After a review of the state of how courts evaluate expert testimony, I discuss the overall state of the medical and scientific literature, including what peer review is and what it isn’t, and the pernicious growth of predatory journals and “paper mills.” Predatory “journals” (they don’t really deserve the moniker) developed to exploit scholars who need publications to advance (hence “predatory”), but they can also be exploited by would-be expert witnesses (and the lawyers who hire them) to advance a litigation interest. That exploitation, the first consequence, is good neither for the scientific literature nor (if not caught) for the goals of litigation. Second, I look at a consequence of peer review being an important part of the admissibility question: parties seeking discovery into the peer review process, including identifying those doing the peer review and accessing correspondence with reviewers. I conclude with some ideas to help give scholarly work weight appropriate to what undergirds it, while protecting journals’ legitimate interests.

I. Daubert, The Old Rule 702, and the New(‑ish) Rule 702

Courts have grappled with how to deal with expert witnesses for as long as there have been witnesses seeking to provide opinions, and certainly this isn’t the place to survey that history.[3] Briefly, though, Daubert v. Merrell Dow Pharmaceuticals[4] emphasized that trial courts were to serve as gatekeepers for scientific evidence,[5] and that courts should focus on experts’ methodology (rather than simply on their general area of purported expertise and its general acceptance).[6] The Court set forth non-exclusive criteria for consideration—testability,[7] peer review,[8] error rate,[9] control standards, and (echoing Frye) general acceptance.[10] The Court was trying to tell courts how to recognize “scientific” work in much the same way the Court believed scientists did.[11] Specifically as to peer review, the Court said that “[t]he fact of publication (or lack thereof) in a peer reviewed journal . . . will be a relevant, though not dispositive, consideration in assessing the scientific validity of a particular technique or methodology on which an opinion is premised.”[12]

Those criteria carried over to the 2000 amendment to Rule 702 of the Federal Rules of Evidence, along with several more: whether the expert’s work is litigation-specific; whether the expert is extrapolating improperly; whether the expert’s litigation work is similar to their usual work in terms of care; whether the expert has accounted for alternative explanations adequately; and whether the expert’s field in general is known for reaching reliable results.[13]

In the years since Daubert and Rule 702’s expansion on it, the law around experts has continued to evolve, and in late 2023, Rule 702 was amended again. The amendments addressed two issues: First, they emphasized that the standard for admissibility is a preponderance of the evidence and that the burden of meeting that standard rests on the opinion’s proponent.[14] This update rejects the notion that the sufficiency of the basis of the opinion is a matter of the weight of the evidence rather than admissibility.[15] The second aspect of the amendments emphasizes that the expert’s opinion must “stay within the bounds of what can be concluded from a reliable application of the expert’s basis and methodology,” noting that “[j]udicial gatekeeping is essential because just as jurors may be unable, due to lack of specialized knowledge, to evaluate meaningfully the reliability of scientific and other methods underlying expert opinion, jurors may also lack the specialized knowledge to determine whether the conclusions of an expert go beyond what the expert’s basis and methodology may reliably support.”[16] Put simply, this second part requires the opinion’s proponent to establish, and the court to conclude, that the application of the methodology is reliable (not just the underlying methodology itself).

So where does that leave us or, more to the point, where does that leave judges considering expert testimony? They are still gatekeepers (even more emphatically so), and they’ve been told specifically that they ought not just wave experts through, relying on juries to sort it out. They’re expressly directed not to say that it all goes to weight rather than admissibility. They also still have this list of factors to consider, none of which are controlling and all of which are at least a little wiggly—except, maybe, peer review, which, again, seems like it’s either there or it’s not.

II. An Incomplete Look at the State of Medical and Scientific Literature

A. Peer Review (The Mostly Good Version)

First, what do we mean when we talk about peer review?[17] The most common current image of peer review—reviewers, usually anonymous, evaluating the methodology and analysis of a study in greater or lesser depth—developed over perhaps the last 75 years.[18] The reviewers, even in the most idealized version of review, don’t re-do experiments and usually don’t dig into the underlying data (as opposed to the data post-analysis). The question presented for the reviewers is basically whether the methods were appropriate to the task and whether the conclusions are reasonable and interesting. Sometimes this review is not anonymous (indeed, in some cases the authors may be in an active discussion with the reviewers),[19] and the extent of time dedicated to the task can vary considerably, with the average being perhaps a couple of hours.[20] Nature, as just one example, asks its reviewers to provide feedback about (1) key results, (2) validity, (3) originality and significance, (4) data and methodology, (5) appropriate use of statistics and treatment of uncertainties, (6) conclusions, (7) suggested improvements, (8) references, and (9) clarity and context.[21] An increasing number of journals provide open review of various sorts, where review is opened to a broader range of people or even to the general public.[22]

Whichever the model, many researchers consider peer review, if done properly by appropriate reviewers, to be highly supportive, but not dispositive, of the quality of the work. Given that, one can see the appeal to judges—who really are answering largely the same questions as the reviewers and those in science who rely upon peer review. But the key point here is that, even among journals that genuinely do perform something called peer review, the term contains multitudes—a range of activities.

Further, the extent to which the review is being done as deeply and thoroughly as those factors might suggest, or the extent to which it prevents weaker articles from being published, is at least uncertain. In 2001, when, as described infra, the number of journals was considerably smaller than it is today and predatory journals were merely a hopeful glimmer in some creative charlatan’s eye, a commentator noted:

There are thousands of scientific and medical journals in the world . . . and many cannot fill their pages. The resulting seller’s market means that a researcher can publish even an inadequate article somewhere. Serious and adequate publication peer review remains relatively rare. Even adequate publication peer review is sometimes limited in that the review may involve only one or two peer reviewers, and even the best reviewers can only identify gross errors in methodology or conclusions.[23]

How much has the world of publishing grown in the nearly quarter century since that comment? First, consider the journals on the more legitimate end of the spectrum: When the predecessor to Medline (a database of medical and related articles maintained by the United States National Library of Medicine) first went online in 1971 for remote access, it covered 239 journals.[24] In 2000, that database had 3,419 journals.[25] In 2023, it covered over 5,200 journals.[26] As of this writing, the PubMed journal list (broader than Medline, incorporating Medline publications as well as life science journals and online books) has a staggering 53,750 publications listed.[27] It is, in short, even more of a seller’s market than it was in 2001.

And, to be clear, even that 53,000+ number does not include everything that gets published that calls itself a journal. As just one example, the publisher MedCrave’s Urology & Nephrology Open Access Journal does not appear in the list, despite being active (it published an article a couple of weeks ago as of this writing).[28] As will become clear below, that journal’s absence from at least some searches is (in my view) a plus for researchers.[29] While it’s impossible to quantify, there are unquestionably many more at-best-questionable journals out there that are not listed in either Medline or PubMed but that are still generating countless articles yearly. MedCrave alone claims to publish over 100 journals. A quick check suggests that most of MedCrave’s journals are not included in the PubMed list of journals.

Put briefly, the answer to the question of how many journals there are is: a lot! And it’s probably still growing.

B. Peer Review (The Non-Existent Version)

When Jeffrey Beall (a librarian) coined the term “predatory journal” in 2010,[30] he probably was not expressly thinking about the 1987 film Predator (nor its sequels, canonical or otherwise),[31] but there are some commonalities. As the hunters in the films camouflage themselves to blend with their backgrounds, the journals camouflage themselves as legitimate medical or scientific outlets, and they disguise their editorial voices to sound like real journals, too. But it’s just a façade, where if you put hundreds or thousands of dollars in, you’ll get a reprint that looks like a scholarly article and can be cited like a scholarly article. But in fact, the resulting article has gone through none of the peer review or editing that legitimate journals provide. Beall termed them “predatory” because they prey mostly on people whose careers depend on publishing—junior scholars in particular. These authors may not even be aware that they are submitting to a predatory journal, in the same way that Major Alan “Dutch” Schaefer (played by Arnold Schwarzenegger) was unaware that he and his fellow soldiers were being stalked by cloaked alien hunters.

Beall’s original definition of the term identifies organizations that “publish counterfeit journals to exploit the open-access model in which the author pays. These predatory publishers are dishonest and lack transparency. They aim to dupe researchers, especially those inexperienced in scholarly communication.”[32] “Some common forms of predatory publishing practices include falsely claiming to provide peer review, hiding information about Article Processing Charges (APCs), misrepresenting members of the journal’s editorial board, and other violations of copyright or scholarly ethics.”[33]

Every few months, it seems, another example pops up of an absurd article being accepted and published without question. In the early 2000s, two New York University computer science professors created an article entitled “Get me off Your Fucking Mailing List,”[34] which consisted entirely of that sentence repeated hundreds of times, including in flowchart and other graphical forms. It was accepted, with a purported peer review rating of “excellent,” by the International Journal of Advanced Computer Technology[35]—which published as recently as 2023.[36] Similarly, if less vulgarly, the Journal of Natural Pharmaceuticals accepted an article authored by the entirely fictional Ocorrafoo Cobange, describing a potential new cancer treatment based on a substance in a lichen. The problem, in addition to the author not existing, was that the paper’s supposed experiments were “so hopelessly flawed that the results are meaningless.”[37] And perhaps my favorite example involved a science writer, John McCool, publishing a case report about uromycitisis poisoning in the Urology & Nephrology Open Access Journal, published by the MedCrave Group. The main problems with the article were (a) McCool is not a doctor, much less a urologist, (b) uromycitisis poisoning isn’t a real thing, (c) in fact, uromycitisis poisoning was made up in an episode of Seinfeld, and (d) any credible urologist being used to provide peer review to such an article would have recognized it as fictional.[38]

It is impossible to know with precision how many predatory journals there are, not least because the definition is imprecise. Beall’s original list included over 1,100 publishers as of 2016, with an additional 163 added in 2021; presumably most of those publishers produce or produced more than one journal.[39] And some of them, to be sure, may be legitimate in some sense—depending on how you define the term.

Here we are, then: we have a lot of journals of varying legitimacy (including many with no legitimacy at all!) and those journals’ presumably gaping maws wish to be fed with articles, perhaps regardless of the quality.

C. Even More Supply

There’s ever more reason to be skeptical about the quality question: In 2023, a record number of research papers was retracted, per Nature—over 10,000 of them.[40] Retraction Watch, a project of the Center for Scientific Integrity, has gone from being something followed mostly by people like me[41] at its start in 2010 to being a genuine force in science and medicine today. Its database (which is extensive but not exhaustive) shows 3,624 retractions in the calendar year 2020; 5,091 in 2021; 6,397 in 2022; and, more or less consistent with Nature’s number, a whopping 10,154 in 2023.[42]

Probably the biggest contributor to the number of retractions (and, perhaps, to the growth of journals as a whole) is the rise of what are alternately called “paper mills” or “criminal science publishing gangs.”[43] These outfits write papers for customers, who are presumably scholars, or those who wish to be scholars, seeking to advance their careers without doing the work required. The mills/gangs apparently use professionals to draft the papers, and the resulting work product is “hard to identify [as fraudulent] because they are written by trained scientists [to look like real science]. Accordingly, their apparent quality is often high enough to end up being published in scientific peer-reviewed journals . . . and thus become part of the permanent scientific record.”[44]

In recent years, as documented by Retraction Watch, thousands of suspicious articles have been flagged by researchers.[45] Among other things, these sleuths can use an automated “Problematic Paper Screener”[46] that searches for various common problems, including “tortured phrases” that may evidence the use of AI generation (e.g., using the phrase “‘counterfeit consciousness’ instead of ‘artificial intelligence’”).[47] The full extent of these paper mills (or gangs) is beyond the scope of this paper, but the key thing to understand for now is that there are just so many more articles being submitted that it’s perhaps unsurprising that a larger number (or even a larger proportion) would turn out to be wrong or even fraudulent, and that there might just not be enough good peer review to go around. It’s also further evidence that even “good” peer review can’t catch—and is not designed to catch—all bad science.[48]
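For readers curious about how that kind of screening works mechanically, here is a minimal sketch, assuming nothing about the actual Problematic Paper Screener’s implementation: a small dictionary of known “tortured” phrases is matched against a paper’s text, and any hits are flagged for a human sleuth to examine. The phrase list, function name, and sample text below are illustrative only; the single “counterfeit consciousness” entry is the example given in the source cited above.

```python
# Minimal illustrative sketch of a tortured-phrase check; NOT the actual
# Problematic Paper Screener. The phrase dictionary is hypothetical except
# for the "counterfeit consciousness" example noted in the text.
TORTURED_PHRASES = {
    "counterfeit consciousness": "artificial intelligence",
    # additional (hypothetical) fingerprints of automated rewording would go here
}

def flag_tortured_phrases(text: str) -> list[tuple[str, str]]:
    """Return (suspect phrase, expected phrase) pairs found in the text."""
    lowered = text.lower()
    return [(bad, good) for bad, good in TORTURED_PHRASES.items() if bad in lowered]

if __name__ == "__main__":
    sample = "Our counterfeit consciousness model outperforms prior baselines."
    for bad, good in flag_tortured_phrases(sample):
        print(f"Suspect phrase: '{bad}' (expected: '{good}')")
```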

III. What It All Means for Litigation

A. Peer Review Makes a Difference, and Occasionally Courts Look at It Carefully

Given the range of activities that get labeled as “peer review,” it’s a fair question whether it should get much weight at all in evaluating the methodology underlying expert testimony or the conclusions drawn from it. But courts do continue to consider it as an aspect of their gatekeeping task.[49] Indeed, Texas’s landmark articulation of the standards for scientific evidence, Merrell Dow Pharmaceuticals, Inc. v. Havner,[50] made the point clear:

We do not hold that publication is a prerequisite for scientific reliability in every case, but courts must be ‘especially skeptical’ of scientific evidence that has not been published or subjected to peer review . . . . Publication and peer review allow an opportunity for the relevant scientific community to comment on findings and conclusions and to attempt to replicate the reported results using different populations and different study designs.[51]

While courts routinely note that peer review is not required, its absence continues to be found relevant.[52] Most courts at least nod at the limitations on peer review as described in the federal Reference Manual on Scientific Evidence:

Myth: The institution of peer review assures that all published papers are sound and dependable.

Fact: Peer review generally will catch something that is completely out of step with majority thinking at the time, but it is practically useless for catching outright fraud, and it is not very good at dealing with truly novel ideas. Peer review mostly assures that all papers follow the current paradigm . . . . It certainly does not ensure that the work has been fully vetted in terms of the data analysis and the proper application of research methods.[53]

But that nod is almost always just saying that peer review is not dispositive, rather than unpacking it further. Courts don’t, in other words, routinely open up the box labeled as “peer review” to evaluate how much weight—if any—peer review of a particular paper should receive.

The occasional exceptions provide some indication about what might be getting missed. In an appeal of a National Vaccine Injury Compensation Program claim, the Chief Special Master of the Court of Federal Claims took a deep dive into the extensive expert record in McDonald v. Secretary of Health & Human Services.[54] The sole question was one of general causation—that is, whether the Gardasil HPV vaccine was proven capable of causing chronic fatigue and some related symptoms.[55] A rheumatologist, Dr. Arthur Brawer, provided four expert reports after examining the claimant.[56] He cited to a number of articles, many of which he co-authored, to support the notion that the vaccine in question could cause the type of injury alleged.[57] “Dr. Brawer posited that his own peer-reviewed publications filed in this case establish his general contention of the toxicity of silicone-containing vaccine ingredients.”[58] At the time of that statement, he did not identify that literature, but he eventually did do so, citing a number of pieces.[59] As just one example, he cited a paper in the International Journal of Vaccines and Immunization solely authored by him.[60] That journal is published by Sci Forschen, long listed as a potentially predatory publisher on Beall’s List,[61] and the article itself is listed as having been received on April 14, 2020, accepted less than two weeks later, and published five days after acceptance.[62] It’s not impossible, of course, for an article to be given full peer review and editing in that time frame, but it’s unlikely. One of the experts disagreeing with Brawer, indeed, argued that seven of the articles provided by Brawer came from predatory journals, and that Brawer’s first expert report had, in fact, been converted into an article published in one of those journals—providing a purportedly peer-reviewed and circular piece of support for the argument.[63] Ultimately, the Special Master did not expressly conclude that Brawer’s articles were published in predatory journals, but he did reject their reliability, noting that they “had questionable legitimacy as independent, reliable items published in accordance with peer-reviewed practices.”[64]

B. Discovery Into the Peer Review Process

In McDonald, the quality of the journal was raised, as is typically the case, via an expert witness on the other side. Sometimes, parties seek discovery into the peer review process itself, for understandable reasons, at least in certain situations. In high-stakes matters—say, large personal injury MDLs involving pharmaceuticals—general causation (can this medicine cause the alleged adverse event?) will often be hotly disputed, and the admissibility of expert testimony supporting or disputing that causation will be similarly disputed. If testimony supporting the presence of general causation is excluded, that’s usually the end of the road for a personal injury claim. Sometimes—perhaps most of the time—the literature on which the experts rely to support their opinions won’t be particularly controversial. But sometimes there will be literature expressly supporting or disputing causation—or sometimes, as was alleged with respect to Dr. Brawer, the publications will be generated by one of the experts, maybe to improve the odds of admissibility.[65] If a litigant thinks that the peer review was either weaker than usual or entirely non-existent, the way to show that—to answer those factual questions—is via fact discovery. As suggested above, the world of scientific and medical literature today is different from what it was even just a few decades ago, and while peer-reviewed publication has never been a guarantee of anything, it’s a much fuzzier notion now, and courts might need education on that point.

To date, most discovery into peer review (at least most of it that has ended up in public rulings) has been focused on well-regarded journals; indeed, of the three cases that explore the question the most, two involve the Journal of the American Medical Association (JAMA) and one the New England Journal of Medicine (NEJM)—two of the best-known journals in the country, if not the world. And in general, that discovery has not been about the existence or the quality of peer review, but instead about the statements made by reviewers in that process.

In the first of two relevant opinions coming from the Bextra and Celebrex MDL, In re Bextra and Celebrex Marketing Sales Practices and Product Liability Litigation, the court (applying Illinois law) addressed the defendant Pfizer’s effort to compel the production of peer review materials (but not the reviewers’ identities) from JAMA and the Archives of Internal Medicine concerning submissions, whether published or not, related to its arthritis medicines.[66] The journals resisted Pfizer’s subpoenas, both contending that the discovery requests were overly burdensome.[67]

The Court concluded that the materials were insufficiently likely to lead to admissible evidence (the standard at issue). Pfizer, it seems, had not argued that the materials it sought would affect the Court’s decision about the admissibility of any expert testimony:

Pfizer has argued simply that the subpoenaed documents are reasonably likely to lead to the discovery of admissible evidence in the product liability cases because they relate to “study results, hypotheses and biological explanations and causal relationships regarding Celebrex and Bextra; Pfizer’s involvement in scientific publications; and Pfizer’s responses to scientific publications.”[68]

In short, the Court concluded Pfizer had insufficiently explained how the peer review comments would advance the effort.[69] And, given the journals involved, there was no question that some form of peer review took place.

But the Court didn’t end there; it also described as “the bigger issue” the fact that producing the materials at issue would “require the Journals to produce documents and information that has historically, deliberately and necessarily been kept confidential.”[70] JAMA’s editor-in-chief, Catherine D. DeAngelis, provided sworn testimony that producing the materials would result in a “severe decline in medical reviewers willing to accept additional requests to participate in peer review.”[71] The Court seemed to take her comments with a grain of salt (characterizing them as “quite dramatic”), but concluded that “compelling production of peer review documents would compromise the process.”[72] This conclusion is fine as far as it goes, though it’s at least questionable that reviewers would really start ignoring review requests from JAMA’s editor. But it also should be seen as specific to its context: among the most revered journals, and in a situation where the purpose of the discovery, at least as characterized by the Court, was not about whether the asserted peer review existed or was done well or poorly, but was instead about what those reviewers said on their way to a publication recommendation. One can certainly understand why Pfizer was interested in the latter, but, put on a scale with Dr. DeAngelis’s litany of concerns, one can also see how the journal’s concerns would prevail.

In the second Bextra case, involving the New England Journal of Medicine and taking place in Massachusetts, a different court rejected similar discovery seeking the substance of peer reviewers’ comments, though for slightly different reasons.[73] The Court in that case concluded that the materials were relevant, but that production would not be compelled, based both on a similar weighing exercise and, unlike in the prior case, on the First Circuit’s protection of journalists.[74] Echoing the other Bextra court’s balancing exercise, this court concluded that “[t]he batch or wholesale disclosure by the NEJM of the peer reviewer comments communicated to authors will be harmful to the NEJM’s ability to fulfill both its journalistic and scholarly missions, and by extension harmful to the medical and scientific communities, and to the public interest.”[75]

Thus, courts have shown at least a willingness to consider allowing discovery into the peer review process, even if they haven’t actually ordered it yet. They also, to emphasize, do not seem to be questioning the nature of peer review or—understandably given the journals involved here[76]—the quality of the review provided.

C. What to Do?

We are in a changed situation: There are articles and journals that are both greater in number and more varied in quality than before, and a range of behavior labeled as peer review that runs from traditional, rigorous review to literally nothing. There are also situations where litigation experts (or would-be litigation experts) publish research supportive of their testimony, whether as part of their usual academic work or otherwise. And it remains the case in litigation that relying on a putatively peer-reviewed article still means something to judges. Judges furthermore think it appropriate to give the peer review process some level of protection against discovery, at least when that discovery is related to the substance of the commentary rather than the process itself.

Despite the changes, what we haven’t seen, at least not regularly or in published cases, is litigants and courts grappling with that new world of medical and scientific literature.[77] This is a very different world from the one assumed when the Daubert court (and the Rules Committee and many other courts to follow) called out peer review as a persuasive, if not dispositive, factor in favor of admissibility. It’s also possible, given both the vastly larger number of places for a publication to land and the growth in certain categories of litigation (especially mass tort products liability cases), that more expert witnesses could be seeking to publish (whether in predatory journals or otherwise) articles supportive of their views to bootstrap their opinions’ admissibility.

The tools already exist to address those situations where a matter might involve shenanigans—a piece of reliance literature published in a predatory journal by a testifying expert; a similar piece of material published by someone else in a journal that had the lightest of touches in terms of peer review. On the former, interrogatories would likely demonstrate that the journal is simply pretending to be something it is not, assuming the entity can be served. (If not, the inability to actually make contact could itself provide considerable evidence about the journal’s quality.) On the latter, even if a court wished to protect the reviewers’ identities and the substance of their comments in the first instance, an initial log showing the number of reviewers, their general qualifications, and some objective information about the review—how long it took, whether the reviewers were in fact anonymous, how extensive the comments were, and the like—could provide a first round of information that would support additional discovery. Moreover, if the party seeking the discovery could show that its target was information about the review process, rather than the substance of the comments, offered to make the reliability of the methodology articulated in the literature more or less likely, that should mitigate the understandable concerns described by the courts that have faced the few prior attempts to take such discovery.

Of course, that requires someone to suspect shenanigans in the first place. Lawyers, especially in high-stakes matters where the incentives are substantial enough that publishing relevant literature (whether in legitimate or illegitimate journals) would make sense, should recognize the signs of iffy scholarship and iffy journals, and consider poking around at the question, whether when deposing experts or by written discovery. That’s the only way that judges will get educated about the full scope of what today’s scientific and medical literature entails.

Conclusion

What counts as “peer reviewed” is even more varied than it was twenty years ago, and the likelihood of it being a particularly strong proxy for reliability has dropped accordingly. Some of that is due to outright fraud (e.g., predatory journals and paper mills), and some of it is due to a general decline in the quality of review done by not-outright-fraudulent journals, for a variety of reasons, including the sheer volume of articles making genuine deep review impossible. Litigants should be more cognizant of the possibility of reliance on articles in predatory journals, and of what that looks like, and should pursue discovery when appropriate. Courts, in turn, should be open to that discovery—and should, at the least, even if the litigants don’t do a good job of exploring what’s behind that curtain, be less automatically deferential to peer-reviewed literature when offered in support of an expert’s opinion.

  1. See Memorandum from Patrick J. Schiltz, Chair, Advisory Comm. on Evidence Rules, to John D. Bates, Chair, Standing Comm. on Rules of Prac. & Proc. of the Jud. Conference of the U.S. (May 15, 2022), https://www.uscourts.gov/sites/default/files/evidence_rules_report_-_may_2022_0.pdf.
  2. Richard Van Noorden, More Than 10,000 Research Papers Were Retracted in 2023—A New Record, 624 Nature 479, 479 (2023).
  3. In the interest of plugging a piece I’d like more people to read, though, I’ll point out that you can see how a federal court in the first half of the prior century dealt with proffered opinion testimony from laypeople in the context of a medical quack who, among other things, implanted goat testicles into men as an impotence treatment. William Gordon Childs, Goat Testicles, Scientific Evidence, and Consequences: Stopping a Killing Spree with Nothing but Evidence Law, 42 U. Ark. Little Rock L. Rev. 147 (2019).
  4. 509 U.S. 579, 585–89 (1993).
  5. See id. at 589.
  6. See id. at 592–93.
  7. Id. at 593 (“Ordinarily, a key question to be answered in determining whether a theory or technique is scientific knowledge that will assist the trier of fact will be whether it can be (and has been) tested.”).
  8. Id. at 594 (“The fact of publication (or lack thereof) in a peer reviewed journal thus will be a relevant, though not dispositive, consideration in assessing the scientific validity of a particular technique or methodology on which an opinion is premised.”). Note that here, I’m neither discussing the peer review used in many medical institutions to review physicians’ work nor the peer review used to evaluate academics for tenure decisions. Many decisions address the potential privilege for those materials.
  9. Id. (“[I]n the case of a particular scientific technique, the court ordinarily should consider the known or potential rate of error . . . .”).
  10. Id. (“Widespread acceptance can be an important factor in ruling particular evidence admissible, and ‘a known technique which has been able to attract only minimal support within the community’ . . . may properly be viewed with skepticism.”) (quoting United States v. Downing, 753 F.2d 1224, 1238 (3d Cir. 1985)).
  11. See id. at 592–94.
  12. Id. at 594.
  13. Fed. R. Evid. 702 advisory committee’s note to 2000 amendment.
  14. Id. advisory committee’s note to 2023 amendment.
  15. Id.
  16. Id.
  17. An old but still mostly accurate (and more detailed) look at peer review can be found in William G. Childs, The Overlapping Magisteria of Law and Science: When Litigation and Science Collide, 85 Neb. L. Rev. 643 (2007) [hereinafter Magisteria].
  18. See Effie J. Chan, Note, The “Brave New World” of Daubert: True Peer Review, Editorial Peer Review, and Scientific Validity, 70 N.Y.U. L. Rev. 100, 116 (1995).
  19. See Childs, supra note 17, at 655.
  20. See Alfred Yankauer, Who Are the Peer Reviewers and How Much Do They Review?, 263 J. Am. Med. Assoc. 1338, 1339 (1990); Stephen Lock & Jane Smith, What do Peer Reviewers Do?, 263 J. Am. Med. Assoc. 1341, 1342 (1990).
  21. See How to Write a Report, Nature, https://www.nature.com/nature/for-referees/how-to-write-a-report (last visited Sept. 2, 2024).
  22. See Tony Ross-Hellauer, What Is Open Peer Review? A Systematic Review, F1000Research (Aug. 31, 2017), https://f1000research.com/articles/6-588 (proposing “a pragmatic definition of OPR [open peer review] as an umbrella term for a number of overlapping ways that peer review models can be adapted in line with the aims of Open Science, including making reviewer and author identities open, publishing review reports and enabling greater participation in the peer review process”).
  23. William L. Anderson et al., Daubert’s Backwash: Litigation-Generated Science, 34 U. Mich. J.L. Reform 619, 634 (2001) [hereinafter Daubert’s Backwash].
  24. Donald A.B. Lindberg, Internet Access to the National Library of Medicine, 3 Effective Clinical Prac. 256, 256 (2000).
  25. Esther Baldinger, List of Journals Indexed in Index Medicus and List of Serials Indexed for Online Users, NLM Technical Bulletin (Oct. 27, 2024, 1:04 AM), https://www.nlm.nih.gov/pubs/techbull/mj00/mj00_pubs2000.html.
  26. MEDLINE, PubMed, and PMC (PubMed Central): How Are They Different?, National Library of Medicine (last visited Oct. 27, 2024, 1:13 AM), https://www.nlm.nih.gov/bsd/difference.html.
  27. PubMed Journal List, National Institutes of Health (last visited Oct. 27, 2024, 1:34 AM), https://ftp.ncbi.nih.gov/pubmed/J_Medline.txt. This proliferation is probably multifaceted, but at least part of it is that the cost of entry has plummeted, since many journals are digital only. But, as discussed further below, the number of submissions seems to be keeping up, at least somewhat.
  28. Urology & Nephrology Open Access Journal, MedCrave (last visited Nov. 14, 2024), https://medcraveonline.com/UNOAJ/.
  29. See infra text accompanying note 38.
  30. Jeffrey Beall, Predatory Publishers Are Corrupting Open Access, 489 Nature 179, 179 (2012). Beall used to publish a list of journals he suspected of being predatory; after threats of litigation, he ceased maintaining it. It is worth noting that it is difficult to draw a precise line between a journal that is merely a “light touch” and a journal that is outright fraudulent, and indeed some publishers have become litigious about the question. See, e.g., Journal Retracts Letter to the Editor About Predatory Journals for “Legal Concerns”, Retraction Watch (last visited Nov. 2, 2024, 12:44 PM), https://retractionwatch.com/2024/07/12/journal-retracts-letter-to-the-editor-about-predatory-journals-for-legal-concerns/. The fact of predatory journals existing should not be taken as a criticism of all open access journals. My view, after spending much of my career involved in epidemiological matters, is that, like more traditional journals, some of them are terrific and some are weak—and then there’s the batch that borrows the trappings of open access journals to defraud. As discussed infra, the variability in the quality of peer review in all journals is why I think courts and litigants should be open to discovery into the peer review and publishing process when relevant.
  31. See Predator (film), Wikipedia (last visited Oct. 27, 2024, 2:14 AM), https://en.wikipedia.org/wiki/Predator_(film)#Sequels_and_franchise. There are three direct sequels and one prequel, plus two crossovers with the Alien franchise. Some of them are probably better than you remember.
  32. Beall, supra note 30, at 179.
  33. Susan A. Elmore & Eleanor H. Weston, Predatory Journals: What They Are and How to Avoid Them, 48 Toxicologic Pathology 607, 607 (2020).
  34. David Mazières, Get Me Off Your Fucking Mailing List, Stanford Computer Systems Group, http://www.scs.stanford.edu/~dm/home/papers/remove.pdf. As a word of warning, it contains the f-word 863 times, possibly not counting the times in the two graphs included. The capitalization in the text above is as in the original.
  35. See Joseph Stromberg, “Get Me Off Your Fucking Mailing List” is an Actual Science Paper Accepted by a Journal, Vox (Nov. 21, 2014), https://www.vox.com/2014/11/21/7259207/scientific-paper-scam. The actual creators of the paper were not the ones who submitted it for publication; instead, an Australian computer scientist did so in response to spam from the journal and was surprised when it was promptly accepted for publication. See id.
  36. See International Journal of Advanced Computer Science and Technology, https://www.ripublication.com/ijacst.htm (last visited Nov. 2, 2024).
  37. John Bohannon, Who’s Afraid of Peer Review?, 342 Science 60, 60 (2013). Bohannon authored the paper and submitted it to 304 open-access journals, over half of which accepted it. Id.
  38. John H. McCool, Opinion: Why I Published in a Predatory Journal, The Scientist (June 2017), available at https://www.the-scientist.com/opinion-why-i-published-in-a-predatory-journal-31697. More examples are probably not necessary, but I am also fond of the 2020 publication of an article based on Breaking Bad. See Bradley C. Allf, Experiential Learning in Secondary Education Chemistry Courses: A Significant Life Experiences Framework, 10 US-China Education Review A. 158, 159 (2020), https://predatory-publishing.com/wp-content/uploads/2023/02/144-Breaking-Bad.pdf. The Seinfeld-based article caught my attention when it came out and, a couple of months later, I was scheduled to take the deposition of an expert witness. This witness had wide-ranging testimony, but one aspect of it involved his evaluation of the severity of the plaintiff’s condition, and as support for his view of the matter, he cited an article that, surprise, he published himself in another MedCrave journal, leading to what is probably my favorite colloquy in any deposition I’ve taken:

    Q. Are you familiar with the television show Seinfeld?

    A. Vaguely. I’ve watched one or two episodes maybe.

    Q. You’re aware that it’s a fictional television show?

    A. Yeah.

    Transcript on file with author.

  39. See Potential Predatory Scholarly Open-Access Publishers, Beall’s List of Potential Predatory Journals and Publishers, https://beallslist.net.
  40. Richard Van Noorden, More Than 10,000 Research Papers Were Retracted in 2023—A New Record, 624 Nature 479, 479 (2023).
  41. That is, nerds.
  42. The Retraction Watch Database, http://retractiondatabase.org/RetractionSearch.aspx (follow “Retractions” hyperlink; then enter the desired date range in the retractions section of the home page, and hit search). So far, 2024 looks like it may be a smaller number than recent years. Id.
  43. B.A. Sabel & R. Seifert, How Criminal Science Publishing Gangs Damage the Genesis of Knowledge and Technology—A Call to Action to Restore Trust, 394 Naunyn-Schmiedeberg’s Archives of Pharmacology 2147, 2147–48 (2021).
  44. Id.
  45. See, e.g., Sleuths Spur Cleanup at Journal With Nearly 140 Retractions and Counting, Retraction Watch, https://retractionwatch.com/2024/08/22/sleuths-spur-cleanup-at-journal-with-nearly-140-retractions-and-counting/.
  46. See Problematic Paper Screener, https://www.irit.fr/~Guillaume.Cabanac/problematic-paper-screener.
  47. Guillaume Cabanac et al., Tortured Phrases: A Dubious Writing Style Emerging in Science—Evidence of Critical Issues Affecting Established Journals 2 (July 12, 2021) (working paper), https://arxiv.org/pdf/2107.06751.
  48. Recall that Andrew Wakefield’s since-retracted article in The Lancet asserting a connection between the MMR vaccine and autism was at least arguably litigation-driven (he was retained as an expert witness and did not disclose it to the publication) and peer-reviewed in a highly respected journal. Later work suggests that the work was not only flawed but fraudulent. Peer review is not designed to catch such conduct. See Brian Deer, How the Case Against the MMR Vaccine was Fixed, 342 BMJ 77, 77–78 (2011).
  49. See, e.g., Taber v. Roush, 316 S.W.3d 139, 156 (Tex. App.—Houston [14th Dist.] 2010, no pet.) (noting experts’ reliance upon peer-reviewed journals in reaching their conclusions); Hill v. Mills, 26 So.3d 322, 332 (Miss. 2010) (noting that, while there is “no per se requirement that an expert’s opinion be supported by peer-reviewed articles,” “peer-reviewed literature is helpful when presented”).
  50. 953 S.W.2d 706 (Tex. 1997).
  51. Id. at 727 (quoting Brock v. Merrell Dow Pharms. Inc., 874 F.2d 307, 313 (5th Cir. 1989), as modified on reh’g 884 F.2d 166 (5th Cir. 1989)).
  52. See, e.g., Dupuy v. Am. Ecology Env’t Servs. Corp., No. 12-01-00160-CV, 2002 WL 1021342, at *5 (Tex. App.—Tyler May 14, 2002, no pet.) (noting that “[o]ur review also reveals that Dr. Thomas does not state any facts in his affidavit that address whether his theory has been published or subjected to peer review” and that “[p]ublication and other peer review is a significant indicator of the reliability of scientific testimony and ‘increases the likelihood that substantive flaws in methodology will be detected.’”) (quoting Havner, 953 S.W.2d at 727); Nabors Well Services, Ltd. v. Romero, 508 S.W.3d 512, 537 (Tex. App.—El Paso 2016, pet. denied) (“Courts should be skeptical of scientific evidence which is neither published nor peer reviewed.”) (citing Havner, 953 S.W.2d at 727).
  53. David Goodstein, How Science Works, in Reference Manual on Scientific Evidence 48 (National Academy of Sciences & Federal Judicial Center eds., 3d ed. 2011).
  54. No. 15-612V, 2023 WL 2387844 (Fed. Cl. Mar. 7, 2023).
  55. Id. at *1. The National Vaccine Injury Compensation Program is a no-fault compensation program for injuries that are shown to be the result of a vaccine, and so causation is generally the central question. Id. at *17–18.
  56. Id. at *4.
  57. Id. at *7–8.
  58. Id. at *7.
  59. Id.
  60. Arthur E. Brawer, Vaccination Induced Diseases and Their Relationship to Neurologic Fatiguing Syndromes, Channelopathies, Breast Implant Illness, and Autoimmunity via Molecular Mimicry, 4 Int’l J. Vaccine & Immunization 1 (2020).
  61. See Jeffrey Beall, Beall’s List of Potential Predatory Journals and Publishers, Beall’s List, https://beallslist.net.
  62. Brawer, supra note 60, at 1.
  63. McDonald v. Sec’y of Health & Hum. Servs., No. 15-612V, 2023 WL 2387844, at *11 (Fed. Cl. Mar. 7, 2023).
  64. Id. at *23. A serial expert witness, the late Dr. David Egilman, once had his testimony alleging that bronchiolitis obliterans was caused by diacetyl, a food additive, excluded based in part on the lack of peer review of the causation hypothesis he presented. See Newkirk v. ConAgra Foods, Inc., 727 F. Supp. 2d 1006, 1019–20, 1028–29 (E.D. Wash. 2010). Dr. Egilman then published—in the International Journal of Occupational and Environmental Health, a journal at which he was the Editor in Chief—an article addressing the potential causal link (disclosing his role in the litigation). See David Egilman, A Proposal for Safe Exposure Level for Diacetyl, 17 Int. J. Occupational Env’t. Health 122 (2011). The article, unusually, expressly thanked the peer reviewers, perhaps wanting to emphasize the fact of that review. See id.
  65. I wrote at more length about this litigation-driven scholarship phenomenon in Magisteria. Childs, supra note 17.
  66. In re Bextra and Celebrex Marketing Sales Practices and Product Liability Litigation, No. 08-C-402, 2008 WL 4345158, at *2–3 (N.D. Ill. Mar. 14, 2008) [hereinafter Bextra 1].
  67. See id. at *2–3. The journals also contended that production was precluded by the Illinois Reporter’s Privilege and the Illinois Medical Studies Act privilege. Id. at *3. The court rejected the privileges, concluding that the materials sought, as to the former, would not reflect a source’s identity and, as to the latter, relates to peer review in the sense of professional self-evaluation. See id. at *4. Many hospitals have “peer review panels” that look at their professionals’ conduct, and Illinois’s statute (like others) is aimed at preventing that process from becoming part of litigation. See id.
  68. Id. at *2.
  69. See id.; see also Childs, supra note 17, at 662–63 (exploring litigants’ discovery effort into the peer review process involving an academic book).
  70. Bextra 1, 2008 WL 4345158, at *3.
  71. Id.
  72. Id.
  73. In re Bextra and Celebrex Marketing Sales Practices and Product Liability Litigation, 249 F.R.D. 8 (D. Mass. 2008) [hereinafter Bextra 2].
  74. See id. at 13–14.
  75. Id. at 14. In a recent personal injury case regarding the medicine Zantac, the plaintiff sought the peer review materials related to an article submitted to JAMA. The factual scenario is a bit complicated, but essentially, the article was submitted, the journal initially said it could become publishable with changes, and then, after changes were made, the journal decided it would not be published after all in that form, because it used an older testing method. A revision of the article, using an updated testing method, was published, but also did not reach as strong a conclusion about the medicine potentially being carcinogenic as the prior version. The plaintiff sought discovery into the peer review process to find out why JAMA changed its mind. The Appellate Court of Illinois concluded, arguably contrary to Bextra 1, that the Illinois statutory peer review privilege did in fact apply and that the plaintiff had not overcome that privilege. As with the Bextra cases, this discovery was not related to how good or bad (or existent!) the peer review was. See generally Gibbons v. GlaxoSmithKline, LLC, 239 N.E.3d 10 (Ill. App. 2023).
  76. Of course, no journal is immune from publishing erroneous articles, and I do not intend to suggest that any peer review process can prevent all of them—not every retraction means that peer review has failed, and no journal’s review is impervious to error (or a determined bad actor). Retraction Watch’s Database lists 25 retractions tied just to JAMA’s flagship journal and 39 tied to the New England Journal of Medicine. See The Retraction Watch Database, http://retractiondatabase.org/RetractionSearch.aspx (follow “Retractions” hyperlink; then enter “JAMA: Journal of the American Medical Association” or “NEJM: The New England Journal of Medicine” and hit search).
  77. Notably, Westlaw does not list a single reported opinion using the phrase “predatory journal.” Search performed Sept. 16, 2024. It lists only one brief doing so. That brief argues against the admissibility of an expert’s opinion based, in part, on the fact that it relied upon publications in a predatory journal. See Brief of Appellees, Wu v. Lumber Liquidators, Inc., No. 14-20-00765-CV, 2021 WL 3130962, at *58 (Tex. App.—Houston [14th Dist.] July 9, 2021). The opinion that followed (affirming the exclusion of the testimony) did not reference that topic specifically. See Wu v. Lumber Liquidators, Inc., 2024 WL 3160554 (Tex. App.—Houston [14th Dist.] 2024, no pet.).