A few years ago, I floated the idea that perhaps we should judge philosophical research (and researchers) less on the basis of venue (i.e. journal prestige) and more on the basis of philosophical impact (i.e. citation-rates and discussion). These thoughts were inspired by my sense at the time that a significant number of published articles in highly ranked journals were not being cited very much, let alone actively discussed in the literature in an impactful way.

These thoughts were rekindled the past couple of days by two social-media posts by philosophers I know. First, one philosopher posted the following chart that Kieran Healy put together based on citation-rates of articles in four of our highest-ranked journals from 1993-2015:

  [Chart: Kieran Healy's citation data for articles in four top philosophy journals, 1993–2015]

Among Healy's more striking findings are that over this 22-year period:

  1. The modal number of citations for articles published in those four journals was zero.
  2. Most articles appearing in those journals were cited no more than a handful of times.

In other words, as highly ranked and prestigious as these journals may be, it appears that a majority of their published articles make little philosophical impact, as measured by citation and further discussion in the literature.

Notice, just to be clear, that this isn't to say that the journals in question shouldn't be so highly ranked. After all, the data is entirely consistent with the hypothesis that these journals tend to publish better and more "game-changing" work than other journals. What it does suggest, however, is that one "shouldn't judge a book by its cover": perhaps we should evaluate work less on where it appears, and more on whether it makes any discernible impact in the discipline. Alternatively, since there are plausible sociological worries about why a given author or work makes an impact (e.g. author prestige), perhaps it suggests that we should all read and cite more work, as there are serious epistemic issues with prevailing norms.

In any case, just today Brian Weatherson posted another intriguing set of findings, this time on citation trends for articles published in five highly ranked journals (Mind, Nous, JPhil, Phil Review, and Phil Studies) from 1976 to the present. Here are the trends:

Fig 1. Citation rates in 31 journals


Fig 2. Citation rates across all journals in the Web of Science Arts & Humanities Index


As Weatherson notes, the most striking thing in both data sets is how citation-rates of articles appearing in the four most prestigious journals (JPhil, Mind, Phil Review, and Nous) have dropped precipitously over time, while citation-rates of Phil Studies articles have climbed. Further, as Weatherson also notes, this cannot be attributed primarily to Phil Studies publishing substantially more articles than the other journals, as this feature of Phil Studies relative to the other journals has been constant over time. Rather, what appears to have happened is that the average Phil Studies article has increased its impact on philosophical discussion, while the opposite has occurred for the other four journals. As Weatherson writes, "A big part of the story is that Philosophical Studies went from having its not-so-huge articles get 5 citations a piece, to getting 10 citations a piece, and that adds up." 

What should we make of these findings? I don't know — which is why I thought I might simply open things up for discussion!


3 responses to “Journal prestige and philosophical impact”

  1. NK

    Here’s a hypothesis: Phil Studies publishes “better” papers (as measured by impact), on average, than the other four journals mentioned here precisely because it publishes more papers.
    Corollary: Trying to keep a journal’s acceptance rate low tends to lead to the publication of “worse” papers, on average.
    The moral: journals should publish more papers.

  2. Sam Duncan

    I wonder how publishing in open access journals like Ergo or Philosophers’ Imprint will affect citation rates? I know we don’t have data going back too far since such journals haven’t existed for too long, but I’d be very interested to see what the data looks like going forward. For a lot of us working at SLACs or community colleges we might not have access to that many journals and so if getting something involves an interlibrary loan or finding a way to jump a pay wall we might not read it unless we absolutely have to or it seems incredibly interesting (I’m speaking from personal experience here). People should remember that not every philosopher works at an R1 with enough money to get access to every journal. For that reason I think the field needs more open access journals.

  3. Pendaran Roberts

    The top 5 journals, to my knowledge, have excessive rejection rates (only a few percent of submissions accepted). What kinds of papers make it through that over-the-top process? I’d suspect papers that don’t do much that’s original, or that don’t do much anyone can complain about. In my experience referees are seldom charitable and only look for reasons to reject. They don’t want the hassle of dealing with an R&R.
    Philosophical Studies publishes far more and so must have looser standards. That very well may result in more interesting and contentious papers making it through. Any paper in philosophy worth reading, I suspect, must use contentious assumptions and set certain contentious aspects of what it is doing aside. The only way not to do this is to have a very narrow paper that few will care about and fewer will read.
    So, I guess I’m basically agreeing with NK.
    However, it’s just a guess!

