Is peer review on its way out?

Andreas Ortmann
10 min read · Jan 26, 2023


I’m not convinced. Here’s why.

A few weeks ago, some guy — a post-doc in social psychology and aspiring comic — posted on his substack page a piece on the rise and fall of peer review. And why that is a great thing.

After a brief history of peer review, he concluded:

Now pretty much every journal uses outside experts to vet papers, and papers that don’t please reviewers get rejected. You can still write to your friends about your findings, but hiring committees and grant agencies act as if the only science that exists is the stuff published in peer-reviewed journals. This is the grand experiment we’ve been running for six decades.

The results are in. It failed.

He gave good reasons: the current system is costly; it can take months or years for a paper to get through; commercial publishers milk the system; the system is susceptible to fraud, and peer review does not manage to eradicate that fraud. All of these are well-known facts (although substack-dude would have benefitted from quantifying his claims rather than arguing by sensational anecdote, and from reading up on Retraction Watch).

In fact, we’ve got knock-down, real-world data that peer review doesn’t work: fraudulent papers get published all the time. If reviewers were doing their job, we’d hear lots of stories like “Professor Cornelius von Fraud was fired today after trying to submit a fake paper to a scientific journal.” But we never hear stories like that. Instead, pretty much every story about fraud begins with the paper passing review and being published.

The fact that some people manage to push the limits of questionable research practices is about as surprising as the fact that some people have stronger preferences for truth-telling than others, or that false negatives and false positives are things. These are probably concepts that our precious post-doc has yet to come across.

Not that I have much hope. At least not for substack-dude.

Recently, a friend asked me when I last read a paper from beginning to end; I couldn’t remember, and neither could he.

Peer review, IMHO, is similar to democracy: it is the worst of all academic quality-assurance mechanisms, except for all the others.

What’s substack-dude’s best alternative?

Well, last month I published a paper, by which I mean I uploaded a PDF to the internet. I wrote it in normal language so anyone could understand it. I held nothing back — I even admitted that I forgot why I ran one of the studies. I put jokes in it because nobody could tell me not to. I uploaded all the materials, data, and code where everybody could see them. I figured I’d look like a total dummy and nobody would pay any attention, but at least I was having fun and doing what I thought was right.

Then, before I even told anyone about the paper, thousands of people found it, commented on it, and retweeted it.

And it got better:

Total strangers emailed me thoughtful reviews. Tenured professors sent me ideas. NPR asked for an interview. The paper now has more views than the last peer-reviewed paper I published, which was in the prestigious Proceedings of the National Academy of Sciences. And I have a hunch far more people read this new paper all the way to the end, because the final few paragraphs got a lot of comments in particular. So I dunno, I guess that seems like a good way of doing it?

Well, maybe not. This kind of ex-post peer review has in fact been tried before. By Nature. In 2006. It didn’t last long, though. Despite an average of 5,600 page views a week, “this reader interest did not convert into significant numbers of comments”: just 92 comments were made over four months, roughly one comment per thousand page views. Then that experiment was over. Finito. Kaputt.

Most likely that was because reviewing a paper in most social sciences (except maybe social psychology) is not comparable to, let alone as effortless as, reviewing, say, a hotel room or a restaurant on TripAdvisor, or some product on Amazon or eBay. Or, for that matter, posting opinionated pieces on a substack page.

I do note parenthetically that posting PDF files of papers on the internet is something most academics already do prior to, during, and after the peer-review process, so the benefits of this strategy are already accruing. Admittedly, there are some important disciplinary differences in its use: while econs do it all the time, psychologists seem to do it less, although that, too, now seems to be changing, slowly.

In any case, our substack-dude’s little provocation did get some traction. (Not quite as much as Andrew Tate gets on his twitter account but that is a story for another day.)

In a follow-up substack piece a couple of weeks later, titled, in character, “The dance of the naked emperors”, he dismissed various objections to his earlier missive. Because, well, to our substack-dude his critics are all naked emperors.

One of his correspondents made the obvious point that if everyone were to follow suit, “there would be 100s of manuscripts uploaded weekly with zero quality control and zero discoverability unless like a self-publishing fiction author you work your ass off at social media to get noticed.”

To which our brilliant mind answered:

I understand this fear, but I think it’s got the wrong model of the world.

Imagine that the Nobel Committee decides to stop picking Nobel Prize winners. “Everybody can print out their own Nobel Prize certificate at home!” they announce.

It would be silly to worry that people would spend all day printing out Nobel Prizes — there’s no reason to, because they’re now worthless. …

I think it would work the same in publishing papers. Right now you get credit for each paper you publish in a journal (with more credit for more prestigious journals), so you want to publish as many as you can. But if “publishing” is just “uploading a PDF to the internet,” you get no credit for the act of publishing itself, so publishing lots of papers just for the sake of publishing them would only make you look dumb.

Never mind that a fear cannot have a wrong model. The problem, unfortunately, is not just a supply-side problem but also a demand-side one. Let’s assume, counterfactually (because our post-doc in social psychology apparently doesn’t), that he would actually try to stay on top of the literature. That turns out to be difficult even within a very narrow area of interest.

Let me illustrate my claim.

Since mid-last year I have been one of five co-editors of the leading field journal in experimental economics, brilliantly named Experimental Economics. In my first six months, I handled about 30 manuscripts, of which I ended up desk-rejecting close to 60 percent (a rate within the margin of error of my co-editors’ rates) after an initial reading, typically following consultation with another co-editor who had also read the manuscript. Is there a chance that a must-read was among those we desk-rejected? Maybe, but I am sure the odds are very, very low. The remaining manuscripts I sent out to, typically, two referees; since then I have invited a couple of the papers for (major) revisions, have rejected a couple based on referee reports, and am waiting for the referee reports on the remaining ones to come in.

Our current acceptance rate is close to 10 percent, so for every 30 submitted manuscripts we end up publishing, on average, about 3. In other words, we are the gate-keepers for about 90 out of every 100 manuscripts, really cutting down on what you ought to consider reading in our small corner of the internet. Readers of Experimental Economics are likely to know our acceptance rate, the editors who make the decisions, and our robust editorial procedures. That should give them some confidence that a published article is worth reading.
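For what it’s worth, here is a minimal back-of-the-envelope sketch of that editorial funnel, using the approximate rates reported above; the split of post-review outcomes is an illustration implied by those rates, not journal data:

```python
# Back-of-the-envelope editorial funnel for a hypothetical batch of
# 100 submissions, using the approximate rates from the text above.

submitted = 100            # manuscripts received
desk_reject_rate = 0.60    # rejected after an initial editorial reading
acceptance_rate = 0.10     # overall acceptance rate (close to 10 percent)

desk_rejected = round(submitted * desk_reject_rate)   # ~60 never go to referees
refereed = submitted - desk_rejected                  # ~40 sent to (typically) two referees
accepted = round(submitted * acceptance_rate)         # ~10 eventually published
rejected_after_review = refereed - accepted           # ~30 rejected on referee reports

print(f"desk-rejected:        {desk_rejected}")
print(f"sent to referees:     {refereed}")
print(f"rejected post-review: {rejected_after_review}")
print(f"published:            {accepted}")
print(f"gate-kept: {submitted - accepted} of {submitted} manuscripts")
```

However the rejections split between the desk and the referees, the headline number is the same: roughly nine in ten manuscripts never appear in the journal.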

I rarely read any of the many working papers that come my way unsolicited (here is an exception, which, however, came with a recommendation from someone I trust). The flood of such papers makes it impossible to separate the wheat from the chaff, and so I am as grateful to volunteer editors and referees for doing some pre-sorting for me as I am to the good folks who do the sorting on TripAdvisor and similar feedback mechanisms. (I almost never sample a new hotel or restaurant without having read the ratings summary and at least half a dozen comments.)

I had a discussion with some folks on my Facebook page about substack-dude’s piece. The following additional points were made.

First, the proposed solution of uploading PDFs to the internet (and abandoning peer review as we know it) would advantage established authors. Name recognition, or reputation, matters, and asserting yourself in a crowded field is harder for unknown authors than for those with a track record. There is a cool paper on this topic here.

Second, the proposed solution would lead to flashier, and less substantial, manuscripts: to distinguish yourself from the crowd, you would have to make ever more outrageous claims and statements. As Jaroslav Borovicka says:

The article is simply bu**shit. It does not build any counterfactual. It does not produce any systematic evidence. It uses selected examples to judge the population. It uses strawmen throughout. There is plenty to worry about in the peer review process but this article is just making bombastic statements to seek attention.

I cannot see how seeking attention could possibly lead to sounder science, or a better discourse. Check twitter for numerous illustrations.

Third, one of the authors commenting on substack-dude’s piece in the Facebook Psychological Methods discussion group wrote:

Catherine Caldwell-Harris

Why I appreciate peer review

In 30 years of being a psychology professor and researcher every one of my papers has benefited from peer review. Reviewers notice angles that I didn’t, catch mistakes, make me add citations, make me clarify material. In my first decades of being a scientist, reviewers taught me how to conduct science, taught me statistics, taught me new methods, taught me how to make better interpretations. You might say, why didn’t grad school teach me that — it did, but there is a lot to learn. Scientists need constant teaching and advice because our research worlds are constantly evolving.

But beyond suggesting different stats and methods, above all, reviewers exposed me to articles that I had not known about that were high quality and relevant. Reviewers listed articles that somehow, despite Google scholar and alerts and me constantly searching, reading widely, attending conferences — despite my constant scientific engagement, that somehow, I still missed and omitted key valuable papers pertinent to my own research.

To give back to the scientific community I review a lot. I review on a wide variety of topics, including language processing, bilingualism, emotion, cultural psychology, immigration, autism, OCD, religion, child development, parenting, human nature, well-being (you can check my publons record). I review for journals of vastly varying quality. I have reviewed for PNAS, Psychological Science, Psychology Review. I’ve reviewed for every top cognition journal, every top psycholinguistics journal, many linguistics journals, of course innumerable conferences, NSF, NIMH, diverse granting agencies around the world — and I’ve reviewed for Frontiers and the mdpi publishing platform. I even reviewed a couple times for Emerald.

I try to write the type of review I would want to read. Sometimes my reviews are hard-hitting and could be seen as unwelcome by the authors, but in those cases the work was too preliminary to submit to a journal. It needed more drafts; it needed to be circulated to the authors’ own network of colleagues for feedback. Much of the time I’m a cheerleader for the work and end up being the most positive of the three reviewers.

Would I want to simply upload my manuscript to a repository? Hm … I’m reluctant to say yes, from observing across many decades how much my own papers benefit from anonymous peer review. In one case I wrote an article which was somewhat outside my main areas of expertise. Before journal submission, I sent it to a half dozen or more authors on whose work my own heavily drew. Most of them got back to me. Some gave me detailed, useful comments that improved the paper prior to submission. But most of the authors mainly focused on how I described their own research (I’m still grateful for that). This “experiment” in trying to elicit feedback outside the official peer review system was helpful, but I still benefited from the journal’s peer review and the action editor’s insightful direction. You can see the published version of that article here: https://www.researchgate.net/.../356691691_Frequency...

Readers of Mastroianni’s blog post may read my comments and infer something is abnormal about me — that I am a poor scientist and thus I need anonymous peer review to improve my science. No. I’m probably typical. At least, in the diversity of articles I review, I see manuscripts that authors desire to publish, and I see how much improvement those articles need. They need a lot.

If, as I maintain, anonymous peer review helps improve the quality of manuscripts in the ways I’ve described from my own experience, how can this valuable service be maintained rather than discarded?

I agree with the substance of this comment.

The same author also noted:

Adam Mastroianni [substack-dude, AO] received bad mentoring as an undergraduate and is promulgating that now to others.

Regarding: “This was one of the first things I learned as a young psychologist, when my undergrad advisor explained there is a “big stochastic element” in publishing (translation: “it’s random, dude”). If the first journal didn’t work out, we’d try the next one. Publishing is like winning the lottery, she told me, and the way to win is to keep stuffing the box with tickets. When very serious and successful scientists proclaim that your supposed system of scientific fact-checking is no better than chance, that’s pretty dismal.” That advice is astonishing and wrong. That professor was wrong. I’m sorry you got exposed to that as a young scholar. The correct advice is: you have an obligation to attend to reviewers’ valuable criticisms and revise your ms before resubmission. It is ethically wrong to disregard a review and submit without revision. That is what I was taught: in recognition of reviewers’ unpaid and mostly unrecognized effort for the profession, we authors have the obligation of not treating the review system the way your professor said we should.

But what if the reviewers’ comments were all crap? Is a researcher then allowed to ignore the advice and resubmit? In principle, yes. But I have never had that experience even once. Is there something very weird about my experience, in that I have never had a review which contained no useful advice?

I agree with the substance of this comment, too.

Peer review is a noisy process, and I can think of examples of stupid reviewer (and editor) comments. But they are rare. Quite rare indeed. While peer review is in need of improvement along many dimensions, I see no viable alternative, and I therefore predict that it will stay with us for a while. Like it or not.

Consider making your opinion known by applauding or commenting here, or by following me here or on Facebook.
