A while back, I ran and reported the results of an informal survey on what readers think works well and not so well in peer review. Although the results should be taken with a grain of salt, a number of trends emerged. One was that a vast majority (93.67%) of respondents agreed that 'journal turnaround times are too long and inconsistent.' In a follow-up post, I presented a proposal for how to address that issue: a central editorial system (perhaps at PhilPapers) for journal editors to use that would give each reviewer a reviewer score, based at least in part on how quickly the reviewer responds to review requests and gets their reviews in. Whenever an editor receives a new paper, the system would present the editor with the best reviewer matches–the final wrinkle being that the system would be designed to match papers with reviewers whose overall reviewer score is similar to the author's own reviewer score. In other words, the system would match authors who are slow reviewers with other slow reviewers, and authors who are fast reviewers with other fast reviewers.
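To make the matching idea concrete, here is a minimal sketch of how such a system might score and rank reviewers. Everything in it is an assumption for illustration: the scoring formula, the 30-day target, and the function names are invented here, not features of any existing system.

```python
from dataclasses import dataclass

@dataclass
class Reviewer:
    name: str
    avg_turnaround_days: float  # average days this person takes to return a review

def reviewer_score(r: Reviewer, target_days: float = 30.0) -> float:
    """Hypothetical score in (0, 1]: approaches 1 for very fast reviewers
    and falls toward 0 as average turnaround grows."""
    return target_days / (target_days + r.avg_turnaround_days)

def best_matches(author: Reviewer, pool: list[Reviewer], k: int = 3) -> list[Reviewer]:
    """Rank candidate reviewers by how close their score is to the author's
    own reviewer score: fast authors get fast reviewers, slow get slow."""
    a = reviewer_score(author)
    return sorted(pool, key=lambda r: abs(reviewer_score(r) - a))[:k]

# Illustrative pool: the system would surface D and A for this fairly fast author.
pool = [Reviewer("A", 10), Reviewer("B", 45), Reviewer("C", 90), Reviewer("D", 20)]
author = Reviewer("Author", 15)
print([r.name for r in best_matches(author, pool, k=2)])  # → ['D', 'A']
```

The key design choice is that ranking is by *similarity* of scores rather than by best score overall, which is what creates the incentive to review quickly.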
Although this is obviously a fanciful system, at least at the present time (given that there is no such central system), it seems to me one that might be technologically feasible and work to everyone's advantage: authors, reviewers, and editors (particularly editors, as such a system would vastly simplify their task of finding reviewers!). By matching authors with 'reviewers like them', the system would plausibly incentivize many people to be better, faster reviewers–as there would be a real incentive to be a better reviewer (namely, the incentive of getting matched with better reviewers as an author!). I also wonder whether this system might address a second set of issues that respondents in our poll cared about: the quality of referee reports. As you can see below, many respondents in our poll thought that too many reviewer reports are incompetent, biased, or needlessly aggressive:
Similarly (although people were more split on this one), many respondents also felt that too many referee reports are perfunctory:
Finally, there was a pretty strong trend of people thinking that reviewers tend to be too conservative in what they judge to be worthy of publication:
How might peer review be reformed to improve referee behavior? One possibility is that, every time a reviewer submits a review, the editor and/or submitting author could fill out a brief e-survey rating whether the review was biased, hostile, or incompetent; whether it was too perfunctory; and whether the reviewer was too conservative in their standards for publication. Because there's a fairly clear conflict of interest in letting authors take part, perhaps this could be restricted to editors–so that in the aforementioned central editorial system, each reviewer would have a "quality of review" average that (in addition to quickness in turnaround) would contribute to how the system matches new papers with reviewers.
But again, this is (as of now) just a fanciful solution. Which is why I would like to pose the question to you all: how do you think peer review could best be reformed to ensure better-quality reviews? What could editors and journals do to help ensure less bias, incompetence, and hostility in review reports, fewer perfunctory reports, and reviewers using non-overly-conservative standards in evaluating what is worthy of publication? Any ideas?