A few years ago, I floated the idea that perhaps we should judge philosophical research (and researchers) less on the basis of venue (i.e. journal prestige) and more on the basis of philosophical impact (i.e. citation-rates and discussion). These thoughts were inspired by my sense at the time that a significant number of published articles in highly ranked journals were not being cited very much, let alone actively discussed in the literature in an impactful way.
These thoughts were rekindled over the past couple of days by two social-media posts by philosophers I know. First, one philosopher posted the following chart that Kieran Healy put together based on citation-rates of articles in four of our highest-ranked journals from 1993 to 2015:
Among Healy's more striking findings are that over this 22-year period:
- The mode number of times that published articles in those four journals were cited was zero.
- Most articles appearing in those journals were cited no more than a handful of times.
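Findings like these reflect a heavily right-skewed distribution: a handful of much-cited outliers can pull the mean well above the typical article's citation count, even when the mode is zero. Here is a minimal sketch of that pattern using invented citation counts (these numbers are purely illustrative, not Healy's data):

```python
from statistics import mean, median, multimode

# Hypothetical citation counts for 12 articles (invented for illustration):
# most articles get few or no citations, while a couple of outliers
# collect many.
citations = [0, 0, 0, 0, 1, 1, 2, 3, 5, 8, 40, 120]

print("mode:", multimode(citations))   # most common value: [0]
print("median:", median(citations))    # half the articles at or below this: 1.5
print("mean:", mean(citations))        # pulled upward by the outliers: 15
```

The point of the sketch is just that "the mode is zero" and "the journal's average article is well cited" are compatible claims, which is why per-article distributions can be more revealing than journal-level averages.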
In other words, as highly ranked and prestigious as these journals may be, it appears that a majority of their published articles make little philosophical impact, at least as measured by citations and further discussion in the literature.
Notice, just to be clear, that this isn't to say that the journals in question shouldn't be so highly ranked. After all, the data are entirely consistent with the hypothesis that these journals tend to publish better and more "game-changing" work than other journals. What it does suggest, however, is that one "shouldn't judge a book by its cover": perhaps we should evaluate work less on where it appears, and more on whether it makes any discernible impact in the discipline. Alternatively, since there are plausible sociological worries about why a given author or work makes an impact (author prestige, for example), perhaps it suggests that we should all read and cite more work, as there are serious epistemic issues with prevailing norms.
In any case, just today Brian Weatherson posted another intriguing set of findings, this time on citation trends for articles published in five highly ranked journals (Mind, Nous, JPhil, Phil Review, and Phil Studies) from 1976 to the present. Here are the trends:
Fig 1. Citation rates in 31 journals
Fig 2. Citation rates across all journals in the Web of Science Arts & Humanities Index
As Weatherson notes, the most striking thing in both data sets is how citation-rates of articles appearing in the four most prestigious journals (JPhil, Mind, Phil Review, and Nous) have dropped precipitously over time, while citation-rates of Phil Studies articles have climbed. Further, as Weatherson also notes, this cannot be attributed primarily to Phil Studies publishing substantially more articles than the other journals, since that disparity has held roughly constant over time. Rather, what appears to have happened is that the average Phil Studies article has increased its impact on philosophical discussion, while the opposite has occurred for the other four journals. As Weatherson writes, "A big part of the story is that Philosophical Studies went from having its not-so-huge articles get 5 citations a piece, to getting 10 citations a piece, and that adds up."
What should we make of these findings? I don't know — which is why I thought I might simply open things up for discussion!


