Yesterday, this Google Scholar report on h5-index statistics of philosophy journals started going around my social media feed, and a few questions occurred to me: what, if anything, should we take statistics like these to indicate? Should they lead people who rank journals, or use journal rankings in hiring, tenure, and promotion, to revise how they think about these things? Although I tend not to care much about rankings in general, I still found myself curious about these questions.
In brief, my understanding is that in STEM fields, disciplinary journal rankings tend to track journal impact factors; that is, people in those fields tend to identify the 'best journals' with the ones that have the biggest impact on the literature. In philosophy, however, impact factors rarely seem to be mentioned, and many journals do not even appear to report them. I found the above Google Scholar report interesting: although many highly ranked philosophy journals appear in the list (with Nous and PPR in particular featuring at the top), the list as a whole diverges fairly significantly from common journal rankings in the discipline.
One reason I found myself curious about this is that it is not clear what kind of validity subjective polls on journal rankings have. If those common rankings merely represent people's beliefs about various journals' reputations, there is always the further question of whether a given journal's reputation is accurate. On the other hand, citation metrics (such as the h5-index) merely reflect how often a journal's articles are cited, which is at best a very imperfect measure of the quality of the work. Further, a journal's citation rank may partly reflect the sheer number of articles it publishes, since some journals publish far more articles than others.
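For context, Google Scholar's h5-index is the largest number h such that h articles published by the journal in the last five complete years have each been cited at least h times. Here is a minimal sketch of that calculation in Python, using made-up citation counts purely for illustration:

```python
def h_index(citation_counts):
    """Return the largest h such that h articles each have at least h citations."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical citation counts for one journal's articles over five years.
example_counts = [45, 30, 22, 18, 12, 9, 9, 7, 4, 2]
print(h_index(example_counts))  # 7: seven articles each have at least 7 citations
```

Note that, as the author's worry suggests, a journal that publishes many more articles has many more chances to accumulate highly cited papers, so the metric is not volume-neutral.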
Anyway, I don't have much of a well-developed view about any of this, aside, again, from being skeptical about rankings in general. Still, I'm curious what others think.