In our most recent "how can we help you?" thread, a reader asks:

Do people think AI is going to undermine the peer-review process? It's only a matter of time, I think, before AI reliably recognizes people's writing. I suppose journals could always run reviews through an AI editor to anonymize them, but I don't like the idea of running manuscripts themselves through them.

Good question. I imagine there are multiple ways that AI could ruin peer review, this being just one of them.

What do readers think? And what, if anything, can be done to prevent/mitigate it?

8 responses to “Will AI ruin the peer-review process?”

  1. Tenured now

    I mean, it’s pretty easy to find out who wrote a lot of papers anyway, if you just google the title – it will likely show up on someone’s website or a conference program. I guess I assumed we were just all trying to operate in good faith already?

  2. The peer-review process is built on trust. Reviewers can already often find out who the author of a submission is if they are weak-willed or irresponsible (e.g. by using a search engine). I know some people have done this on occasion, but I take it that it is the exception rather than the rule. I trust that most reviewers never do this, and think they for similar reasons will never use AI to try to figure out who the author of a paper is (why would anyone bother?). Perhaps I am being naive though? (Also, knowing the identity of the author doesn’t necessarily undermine the peer review process. It depends.)

  3. Michel

    Not in that way, because, as the others said above, that still requires the reviewer to actively seek out that information, and I think we can broadly trust one another not to do that.
    I’m less sure we can broadly trust one another not to use AI to generate referee reports. And if many of us start doing that, then yes, that will ruin peer review.

  4. Ref

    I agree with the others … as long as we resist going to AI to generate our referee reports. Further, double- and triple-blind refereeing is a bit overrated. I publish in two other fields besides philosophy, and in one of them papers are just single-blind reviewed. I do not think people’s behavior is any worse there than in philosophy (perhaps better).

  5. philai dabbler

    While you could probably train a transformer model to recognise author fingerprints with the right training set (and this is something that classical NLP methods have often aimed at), it is not something that LLMs can do well out of the box for texts not in their training set (and even there, you can expect high rates of hallucination). Most likely, if you attempt this, you will get a big name in the field (a name that the model is likely to ‘remember’) who wrote on related topics, and some post hoc hallucinated reasons why the style matches their work (which may or may not correspond to actual stylistic features of their work, which are also things that models struggle to ‘remember’ given how little they’re talked about).

  6. Paul Allen

    An undergrad professor of mine working in ancient Greek Phil once told me they purposely refashioned their paper to put large amounts of the argument in multiple page-long footnotes, all in an attempt to trick reviewers into thinking the paper was written by Gregory Vlastos (and plausibly review it more favorably). The point of this anecdote is that if (some) people are already manipulating their style to trick reviewers into thinking the author is someone prestigious, the automaticity of AI will only exacerbate the amount, and perhaps degree, of style manipulation by authors. Though I’m skeptical (perhaps naively) that many people intentionally manipulate their style in this way. But, as “Tenured now” said above, there are usually much easier ways to find the author’s identity if the reviewer is motivated enough.

  7. Cap

    I understand the temptation and don’t judge people for having it, but I would never ever try to find out whose paper I’m reviewing and I am confident that my close colleagues wouldn’t either (confident since they’re better people than I am).

  8. sahpa

    To be honest, I find it a little adorably naive that someone’s first thought about how AI might ruin the peer-review process is by de-anonymizing papers/referee reports.
    It seems obvious to me that AI is going to ruin peer review by producing all the referee reports. Nobody wants to review, nobody has time to review, the number of reviews keeps growing. The solution is almost irresistible.
