In our most recent "how can we help you?" thread, a reader asks:

I noticed that the Leiter poll journal rankings are pretty consistent throughout the years in how certain journals are ranked. I was curious if there's anyone out there who'd care to share whether they have a strong disagreement about the rankings of some of the journals there, or even about some journals left unranked.

Here are Leiter's 2022 and 2018 poll results for "general" philosophy journals, and his 2022 and 2018 polls for journals specializing in moral and political philosophy.

Does anyone have strong disagreements?

30 responses to “Thoughts on Leiter-poll journal rankings?”

  1. grad student

    Not a strong disagreement, but my own sense is that Synthese has been a bit overrated and Phil Imprint has been a bit underrated. I would at least rank Phil Imprint over Phil Studies. But these differences are still pretty minor I guess.

  2. North American Political Philosopher

    I have only one really strong disagreement: how low European Journal of Political Theory is ranked in the 2022 moral/political philosophy ranking.
    For people working on political philosophy, it’s a solid second-tier journal. Professors and graduates from top political science/politics departments regularly publish in it (and of course, there’s good stuff in there).
    [disclosure: I have not published in EJPT]

  3. Journal snob

    I’d say that PPR and Phil Studies are both overrated. Anecdotally, I have heard that they’ve both taken to relying on reports from a single reviewer. That might be a product of the referee crisis editors all talk about, and maybe it allows them to give authors feedback in something resembling a timely fashion (though I have heard lots of Phil Studies horror stories), but it’s not a recipe for quality. I’d also say that Ergo is underrated and that it will probably rise to the top 10 next time we see a poll.

  4. Assc prof

    I agree that Imprint seems underrated (I’d rather publish there than even AJP). Ergo is also underrated, it seems. Synthese and APQ seem overrated to me, as does Phil Studies (which I see as a place primarily for clever grad students to cut their teeth).

  5. Michel

    When I reflect on how I go about ranking the generalist journals, it becomes clear that my judgements are based mostly on my sense of received wisdom in the field, rather than any careful assessment of each journal’s output. I almost never read anything in PhilReview, Nous, or Mind, for example, because they publish almost nothing in the areas I work on. As a result, I also seldom referee for them, so I don’t have much to go on except received wisdom. And I almost never send them any of my work, because they’re just not good venues for that work, so there’s no point waiting six months or more for nothing. And while I have no problem deferring to received wisdom, I wonder to what extent my situation is true of everyone else ranking, too.
    By contrast, I’ve refereed a lot for Synthese, AJP, and PhilStudies, have published in them, and regularly see stuff that interests me in their pages. I have a much better grasp of how good they are. But again, it’s not a very fine-grained grasp, and it probably owes a lot to the pre-existing ranking, because they’re still just regular but occasional publishers in my area, and I don’t spend time reading every new issue cover to cover.
    The only ranking judgements I’m confident I can make quite accurately are for the specialist venues in my areas. There, I can even speak to shifts in quality in particular journals over the last year or two, which I can’t do for any generalist journal.

  6. Imo

My favourite general journals are PPR and Nous, which regularly publish the papers I’m most interested in. Phil Review rarely publishes things I’m interested in (though it has started to shake its “Philosophical Review of Language” label).
    Phil Studies is certainly overrated and Phil Imprint is certainly underrated: I agree with Assc Prof that I’d place it above AJP.
    I wouldn’t judge a journal by how many reviewers it requires.

  7. Assistant Prof

    I think the Leiter general journal rankings are fairly good. It can be a bit hard to rank journals like Phil Issues, Phil Topics, Phil Perspectives and Proceedings of the Aristotelian Society against the others, since we’re often comparing invited publications against refereed publications. Other than that, I don’t have many disagreements with the rankings.
    I don’t think it should be surprising that the rankings are fairly static. Better journals get more and better submissions, so they have to be badly mismanaged to drop much in the rankings.

  8. Amma

    Maybe I’m cynical but I tend to assume that people commenting, and for that matter voting, anonymously will try to boost the reputation of the places they publish. So unfortunately I don’t think you’ll get reliable answers like this.

  9. Not impressed

In my area, all the papers published in Mind, PPR, and Phil Review are well-argued but boring. All the exciting and inspiring stuff appears to be happening at lower-ranked generalist journals or specialist journals. On the basis of this impression, I can’t imagine thinking these are our top journals. But perhaps if I worked in a different area of philosophy I’d have a different impression.

  10. Assc prof

    Amma,
Of the journals I mentioned, I’ve only published in AJP, which I said should be below Phil Imprint (where I’m 0/4). I’ve never submitted to any of the others I mentioned, except Ergo.

  11. Canadian Postdoc

    I probably read, think about, and cite more work in Synthese than in any other journal – though it mostly just seems to work out that way – it’s not as if I’m flipping through the pages of any journal looking for new papers to read. So, I’m curious as to why people here seem to think that it is overrated or slipping?

  12. Mahmoud Jalloh

    Assuming that the rankings track some proxy for quality, I would put forward that generally we can expect diamond open access journals like Phil Imprint and Ergo to climb the ranks quite regularly. There are many reasons one does and should prefer to publish in such a journal, so we can expect people to increasingly submit their work first to DOA journals (it’s certainly a consideration when I am submitting papers).
    There seem to be two barriers for such journals though:
(1) Insufficient prestige momentum: Journals like Organon F, the European Journal of Analytic Philosophy, and Critica (for example) haven’t yet had a “breakthrough”, though special issues are giving some of them more prominence, in my eyes at least. Places like Ergo are located within prestige centers and so have an easier time usurping the old generalist journals.
    (2) Questionable management. This is a pet curiosity of mine and not really meant as a call-out, but I have no idea what has been happening with Dialectica the past couple of years. I think their DOA move would have quickly made them a top journal given their historical stature if it seemed like they were even publishing things anymore…

  13. For social and political philosophy, two of my favorites (just as good as anything else) are Journal of Ethics and Social Philosophy (and I think a lot of people agree with me) and Critical Review of International Social and Political Philosophy (here I think my judgment is more idiosyncratic). The two new journals replacing JPhil and P&PA are, I think, the heirs to the reputations of the journals they are replacing. The zombie P&PA is an unknown entity at this point, for me at least.

  14. Niles Crane

I think Imprint is more consistently good than most journals, but that consistency is achieved by a bias toward more conservative papers. Something similar seems true of Quarterly and Phil Studies. Synthese is inconsistent, which is why people say it is slipping. They publish some papers that shouldn’t be published. But they also publish some really good and really ambitious papers that might not have been conservative enough for prickly referees at other journals. Some referees don’t like papers with premises that a reasonable person could reject, so they reject innovative but controversial stuff.
    I’m a bit surprised to see people say PPR is slipping or underwhelming. At least in my areas, they are consistently good and they publish bold papers. They seem similar to Nous so far as I can tell. J Phil is the top 5 journal that’s slipping. Too much of the review process is done in house.
    I agree with everyone who says Ergo is ascending. They’re consistent, they publish some really innovative work, and every referee report I’ve received from them has been prompt and fair (all rejections, in my case).

  15. Andrew Allison

    I wonder if it has something of a self-reinforcing power. If a journal is ranked highly on Leiter’s blog, then it gains a reputation as being a good journal which, in turn, gets that journal more (and better?) submissions producing lower acceptance rates and better published papers, resulting in it being ranked highly whenever the next poll comes out.
    In part, I think that the poll has this power because there are just very limited resources for determining what “good” journals are. As a graduate student, I’ve even been given the advice of “Whatever Leiter has up there is basically right”. Though, I’ve also heard remarks about how bad the ranking is from others. Frankly, I take the 2022 rankings pretty seriously because I don’t have much of a better way to determine where to send papers.
    All this to say that if the polls continue to be consistent, I would guess that, in part, it is because of the power that the ranking itself has in shaping both the actual and perceived qualities of journals.

  16. Douglas Wadle

    Editorial boards being what they are, we should expect any given journal to be stronger in some areas than others – even core areas. And that’s exactly what I see in the areas in which I work. I always thought that a ranking of journals (general and specialist together) by specialty would be far more instructive, if the goal is to help people identify where the best work in their field is being published.

  17. Mickey Mouse

It is worth remembering that the so-called general philosophy journals are epistemology and metaphysics journals. They do not really publish history of philosophy, philosophy of race, applied ethics, feminist philosophy, aesthetics, etc.

  18. Cap

I don’t even know how people are ranking these journals. Have you read more than 10% of what they publish annually? If not, then what are you basing this on? It simply can’t be anything other than an impression of the prestige accorded to them by others. So I have no idea why people here are pretending they have independent opinions on how e.g. AJP compares to PhilImprint or Mind compares to JPhil.
    I’m sorry but much of this seems like nonsense. Except between echelons, there is no good answer to how individuals compare and, in my view, the sooner people recognise this the better.

  19. Cap

    *for how individual journals compare

  20. Rep Sample?

    Interesting point, Cap! Like, who has a representative sample? Anyone? (If not, is there a reliable epistemic shortcut?)

  21. ChastenedAuthor

    I will say that the reports I’ve gotten from Ergo have been good–very good, in fact. One review was so good that the reviewer actually took the method one step too far, to where the next paper will go. The reviewer R’d me on account of that.
    So, while it’s frustrating that I got R’d based on an argument that’s actually a different paper, the reviewer obviously read the paper and engaged with the argument and methods thoroughly enough that s/he extended the argument.
    On the flip side, it grew my confidence in the paper’s methods, because right until the reviewer took the argument one more step, their calculations agreed with mine.

  22. Journal snob

    Cap: you’re right that few people have read enough from these journals to have an independent sense of the quality of the papers they publish. Nor should anyone want to! But you can know something about their editorial policies and practices, which do differ across journals. Does the expertise of their Associate Editors cover a broad or narrow range of topics? Are submissions single, double, or triple blind? Do they do a lot of desk rejections? How many referees do they require? What percentage of submitted papers do they ultimately accept? Do papers go through an additional round of review through the editorial board before final acceptance? Do editors give their own comments or just defer to referees? I don’t deny that prestige plays a huge role in these rankings, but there are meaningful procedural differences in the editorial process that should make you more or less confident in a journal’s products.

  23. Cap

    @ Journal snob
I agree with you. However, I don’t think that is what people are talking about when they rank journals. Going by procedural criteria would also produce a very different ranking from the received one, as you imply above. But you make an important point that there are measures we can use to predict the quality of a venue, apart from evaluating its outputs.

  24. assc prof

One indirect measure: where do the younger faculty at the best departments publish? Since their tenure requirements are quite strict, you can assume that they’ll always try for (and often succeed at) the journals that will best help their tenure cases. Since there is likely some connection between what helps their tenure cases and what publishes good philosophy (by some widespread if not ubiquitous standard), we can use CVs to get a rough guess at the quality of a journal without necessarily reading much from the journals in question.

  25. Michel

    I just don’t think anyone doing the ranking is putting that much detailed thought into the comparisons. I suspect we’re mostly operating by appeal to convention.
    Depending on how the conventional ranking was established, and how it maintains itself, that might be good enough. But it should perhaps give us some pause.
    As I said above, I have much more faith in rankings of specialist venues.

  26. Cap

Younger (pre-tenure) faculty submit to wherever they are led to believe is the best spot, as a function of likelihood of acceptance, review time, and prestige. There is good reason to think prestige correlates very roughly with quality, but only because of selection effects: to an extent, people send their best work to prestigious places.
So “ranking” precedes quality, not the other way around.

  27. Mike Titelbaum

    One of the controversies that’s arising here is what we mean by “quality”. Another is how ranking interacts with quality (whatever it is). Just to add an aspect of the latter that I don’t think has been mentioned yet: Referees apply different acceptance standards to submissions depending on how prestigious the journal is. I’ve had referee reports saying, “This is a good paper, but its level of contribution isn’t quite high enough for [this prestigious journal]”, and I’ve written similar reports as well. So even setting aside how many submissions a journal gets, difficulty of acceptance and (perceived) ranking are causally related in both directions.

  28. academic migrant

    I really like PPR. It publishes some excellent stuff related to my own research. If it is, as someone mentioned above, single-refereed, then it seems to give credence to the idea that one referee is a feasible model.

  29. Niles Crane

    @ academic migrant: PPR is not generally single-refereed. They leave it to the discretion of the AE how many reviewers to assign. Sometimes it is 1, but it can be as many as 3.
