In the comments section of her recent post at NewAPPS on data she collected on journal submissions (cross-posted here), Helen De Cruz writes:
Jason Stanley…wrote (in a comment published on this blog a while ago): "I'm reviewing Kieran Healy's citation data, and it reminds me again how weird journal acceptance is. My book *Knowledge and Practical Interests* is the fifth most cited work of philosophy since 2000 in Phil Review, Mind, Nous, and the Journal of Philosophy (book or article). Yet the book itself is the result of three revise and resubmits, and finally a rejection from Phil Review. One of those drafts was also rejected from Mind, and also from Nous. All of those journals have accepted papers discussing, in many cases very centrally, a work those very journals have deemed unpublishable."
I find this very disturbing. I would wager Jason's experience is not some weird outlier. I know several senior philosophers who don't publish in general philosophy journals (anymore) but mainly in their own monographs or invited publications in handbooks etc. The reason is that they find the peer review process is not productive for getting their best work out. The peer review process is geared towards finding mistakes rather than identifying bold new ideas (which invariably have some flaws), in this way encouraging work that extends existing debates and topics, and discouraging new ideas.
Barry Lam then added:
In my experience, which seems to be corroborated by others on various blogs, the reason the process is so geared toward finding mistakes is that peer reviewers feel that, given what we are told about the selectivity and acceptance rates at these journals, we are sticking our necks out quite a lot by recommending acceptance. Speaking for myself, many peer reviewers are also submitters who have had experiences like Jason's. These experiences make me think, "Wow, if the peer reviews for my rejected papers look like this, and I thought my paper was deserving of publication (that's why I submitted it!), then my standards for reviewing papers for this journal should be similarly severe, since that appears to be the level that is correct for this journal." It doesn't occur to me enough to think, "Wow, maybe I shouldn't be reviewing submissions in the same way, but reviewing by the standards that I think my papers deserve."
These remarks (and Jason Stanley's example) raise the following question: does our peer-review system focus too much on avoiding false positives (i.e. publishing bad papers), and not enough on avoiding false negatives (i.e. failing to publish good papers)? Stanley's example is, as De Cruz notes, particularly striking: his book is one of the most-cited works in the "Healy four" journals since 2000, and yet it was not considered good enough to publish in them. How many other false negatives are out there? How many of them have never seen the light of day because they ended up in "bad journals no one reads"?
Anyway, I'm curious to hear what everyone thinks. Does our peer-review process work as it should, at least on average? Are reviewers systematically too uncharitable, and unwilling to take bold, unconventional ideas seriously? I'd be particularly curious to see whether there are more examples like Stanley's — really influential papers that were serially rejected by the Healy four or other top journals. Does anyone have similar examples to share? Please do share… the more examples the better!