Yesterday, this Google Scholar report on h5-index statistics of philosophy journals started going around my social media feed, and a few questions occurred to me: what, if anything, should we take statistics like these to indicate? Should they lead people who rank journals, or use journal rankings in hiring, tenure, and promotion, to revise how they think about these things? Although I tend not to care much about rankings in general, I still found myself curious about these questions.

In brief, my understanding is that in STEM fields, disciplinary journal rankings tend to follow journal impact factors; that is, people in those fields tend to identify the 'best journals' with the ones that have the biggest impact on the literature. In philosophy, however, impact factors rarely seem to be mentioned, and many journals do not even appear to report them. I found the above Google Scholar report interesting: although many highly ranked philosophy journals appear on the list (with Nous and PPR in particular featuring at the top), the list as a whole diverges fairly significantly from common journal rankings in the discipline.

One reason I found myself curious about this is that it is not clear what kind of validity subjective polls on journal rankings have. If those common rankings merely represent people's beliefs about various journals' reputations, there is always the further question of whether a given journal's reputation is accurate. On the other hand, citation metrics (such as the h-index) merely reflect how often a journal's work is cited, which is at best a very imperfect measure of the quality of that work. Further, a journal's citation rank may in part reflect the sheer number of articles it publishes (as some journals publish far more articles than others).
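For readers unfamiliar with the metric: a journal's h5-index is the largest number h such that h of its articles published in the last five complete years have at least h citations each (the h5-median is then the median citation count of those h articles). A minimal sketch of the computation, with made-up citation counts for illustration:

```python
def h_index(citations):
    """Return the largest h such that at least h of the given
    papers have at least h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # the paper at this rank still has enough citations
        else:
            break
    return h

# Five papers with these citation counts yield an h-index of 4:
# four papers have at least 4 citations each, but not five with 5.
print(h_index([10, 8, 5, 4, 3]))  # 4
```

Note that, as the post observes, this number can only grow (or hold steady) as a journal publishes more articles in the window, which is one reason high-volume journals can look strong on it.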

Anyway, I don't have much of a well-developed view about any of this, aside, again, from being skeptical about rankings in general. Still, I'm curious what others think.


10 responses to “Citation rates and journal rankings”

  1. Anon

    I have the impression that the statistics per se do not mean much. There are many ways to game the h-index system (see, for instance, AJOB). However, I have a bit of experience in publishing, and I have to say that getting a paper accepted in ‘top’ journals (in my AOS) was generally very difficult, while getting one accepted in ‘low-ranked’ journals was pretty easy.
    I know it is a flawed generalization, but my experience so far is that the editors of top journals are really demanding. I don’t know if this implies better quality (I’m tempted to say no), but my best papers are the ones published in top journals, and they are the best especially because of the work of the editors and referees.

  2. philosophy of cog sci postdoc

    Several of the journals that rank highly on Google Scholar metrics but lower in survey-based rankings publish a lot of interdisciplinary philosophy of science. A cynical way of interpreting this would be to attribute the higher impact of these journals to the more exhaustive citation practices of the sciences (e.g. it’s easier to get cited if you’re writing about cog sci, even if your work isn’t that great, because psychologists cite everything). A more positive take on it might be that these journals are genuinely more impactful, in the sense that their articles are read by a much broader audience. In contrast, a journal like Phil Review (which is usually in the top 3 or 5 in survey rankings) probably ranks lower on this metric because not that many non-philosophers care to read it. Whether or not having non-philosophers read and cite philosophy articles matters to you probably depends on what you think philosophy should be about, and the relationship that philosophy should have with other disciplines (especially the sciences). I personally think it’s awesome that philosophers publishing in journals like Phil Psych are able to reach such a large audience, but I do interdisciplinary work, so I’m probably biased.
    I can see how someone working on a less interdisciplinary topic might be justifiably opposed to having the value of their work measured according to such a standard, however. Just because a journal focuses on topics that are only of interest to philosophers (and thus gets cited and read less often) doesn’t mean the work it’s producing is less important or valuable, and it would be really unfair to base hiring, tenure, or promotion decisions on such a comparison.
    So maybe the take-away from this should be that the wide range of subject matter covered by philosophy journals makes ranking them in this coarse-grained way pretty uninformative?
    Also, I think it’s a good thing that some of these journals (e.g. Synthese and Phil Studies) publish relatively high numbers of articles. Clearly this strategy yields a pretty high overall number of impactful papers. Maybe that’s counterbalanced by a larger number of articles that are less impactful, but so what? If all these journals instead adopted the overly selective publication practices of JPHIL or PhilRev, we’d probably have far fewer good articles to read, and philosophy would be much worse off.

  3. philosophy of cog sci postdoc

    *That should read “relatively HIGH number of articles”.

  4. Michel

    With respect to the list itself, it’s worth noting that Hypatia is classed under ‘Feminism & Women’s Studies,’ and would easily make the philosophy list, with an H5 of 23 and an H5 median of 36.
    Looking around, I found other philosophy journals being weirdly classified. The Journal of Aesthetics and Art Criticism (which is a top venue for the philosophy of art, and publishes zero art criticism), for example, is classed under ‘visual arts’ (although its H-indices are too low for it to make the philosophy list anyway); the British Journal of Aesthetics is nowhere to be found.
    Quite a few other philosophy journals show up in the ‘social sciences’ category (including Ethics).

  5. JA

    Anon: how does AJOB game the system?

  6. Anon

    JA: I didn’t mean this in a bad way; sorry for the confusion. AJOB has a lot of target articles, which means that a few articles get cited a lot simply because they become the targets of commentaries. There are other journals that have target articles, but AJOB does this quite often. Does this make sense?

  7. Joshua Mugg

    I was surprised that Phil Psych was here and not other interdisciplinary cog sci journals where philosophers publish. So I took a look at the ‘cognitive science’ journal metrics, and sure enough, Cognition and Trends in Cognitive Sciences have higher h5-indices than Phil Psych. This is likely because Google doesn’t see these journals as philosophy journals, even though philosophers do publish in them.
    Compare here: https://scholar.google.com/citations?view_op=top_venues&hl=en&vq=soc_cognitivescience

  8. Anon

    AJOB’s journal structure is a main-article-and-reply structure: there are one or two main (“target”) articles, and the other articles are replies to them. I actually think this is a cool format for a journal, as it motivates engagement. I am not sure I would call it “gaming the system,” as that seems to imply the whole point of having this structure was to improve their impact numbers (I guess this could be true); perhaps the editors just thought the structure was valuable. That said, you could argue the impact numbers are in some sense artificially inflated, since the reply pieces will obviously cite the main article. On the other hand, there are other journals that publish a decent number of reply pieces, which would have the same kind of consequence.

  9. Chris

    The British Journal of Aesthetics has an H5-index of 8 (and an H5-median of 11), the British Journal for the Philosophy of Science has an H5-index of 31, and Philosophy of Science has one of 27. So the latter two would be around the “top 5” if they were included in the philosophy journal list.
    There are likely a variety of reasons for this, including different citation practices, but also that these articles are sometimes of interest to non-philosophers (and also, perhaps, differences in numbers of people in sub specialties around the world). There are, after all, a lot more scientists than philosophers – so if you can do work that is of interest to even a small percentage of them, you’re likely to get much higher citation rates.

  10. Michel

    Chris: Right. Given that the BJA gets an H5-index of 8 and an H5-median of 11, that should put it in 9th place on the Google list for ‘Visual Arts’. But it’s not there, which is weird since the JAAC is (even though it’s not an appropriate classification), and I don’t see it elsewhere in the Google rankings.
    Shrug. It’s just a weird list. I only meant to point out that the list of philosophy journals is likewise affected by these classification decisions, so I wouldn’t read much into its divergence from our more usual poll-based journal rankings.
