In our new “how can we help you?” thread, a reader asks:

I wanted to get some thoughts on something I’m feeling a bit anxious about regarding a recent journal submission.

During the submission process, the editorial system had a notice stipulating that any use of AI tools, for any purpose, must be declared in the manuscript. As a non-native speaker, I used an AI tool strictly for grammar checking and language polishing. To be clear: I wrote the entire manuscript myself, but I used the tool to help improve clarity and smooth out awkward phrasing that I might have missed. I followed the rules and declared this use in the manuscript. However, I can’t help but wonder how this declaration will be perceived by editors and reviewers.

For those of you who edit or review: When you see such a declaration (specifying it’s for polishing, not content generation), does it unconsciously incline you to see the author as less competent? Does it raise any flags or make you question the originality of the work, even if the ideas are 100% the author’s? Or am I just overthinking this?

What do readers/reviewers think?

25 responses to “How reviewers think about (journal-permitted and) declared AI usage”

  1. Anonymous

    I am just one data point, but seeing a disclosure like this would make me slightly negatively disposed towards the manuscript. For better or worse, I think that would be a lot of people’s reactions. Declaring your use of AI puts the idea in your reader’s head that what they’re reading might not be entirely original, and given the extent to which philosophers are worried about AI right now, I think they might not react entirely rationally. This is maybe unfortunate, but I would also submit that I would personally much rather review a manuscript with some awkward phrasing or unusual word choices than a manuscript that has been extensively ‘smoothed’ by AI.

  2. Anonymous

    I think the question is a bit misstated. None of us are in a position to say anything about our unconscious inclinations.
    Other things to consider: I am not sure how widely this information is shared. I referee a lot, and I WISH that journals would share information about the author’s use of AI. So far no journal has shared such information with me (but perhaps no author has made such a declaration). I am beginning to see some papers that may have used AI … and I do not like it. I will probably turn down more requests to referee now. What I don’t like is that the papers are shitty … but shitty in a new way. I do not have time to work with such authors.
    Further, I think people should resist using AI for writing academic work. They should resist because it does not train you to write better. In fact, as others have mentioned in similar discussions, it works against learning how to write better.

  3. Anonymous

    I confess that as a non-native speaker, I have mixed feelings about this. On the one hand, I think “linguistic injustice” is a real issue, and when the most influential journals in analytic philosophy are in English, non-native speakers have an unfortunate disadvantage. AI can play a very helpful role here.
    Meanwhile, I also feel that a significant part of analytic philosophy is not about the ideas themselves, but about how you phrase those ideas and how you develop them in well-structured arguments. If one uses AI *only* for grammar, that is fine. But we all know that when AI starts to polish and edit, we get into a relatively gray area. AI might rewrite sentences, reorganize the structure, etc., all of which overlaps with what we think of as “philosophical skills”. Consequently, the draft might end up reflecting the author’s ideas + some part of the author’s philosophical skills + some part of the AI’s philosophical skills. I am hesitant to think that this is acceptable.
    Anecdotally, I know that in my home country, learning how to use AI to polish your papers has become a part of philosophical training in some graduate programs.

  4. Anonymous

    I am one of the people who thinks it is ridiculous to require disclosing anything that AI does that a PhD Supervisor would also do, since we aren’t expected to disclose the help we got from them. I still haven’t been given a convincing argument that assistance from AI warrants disclosure while assistance from supervisors doesn’t. Perhaps AI is worse at such advising skills, but then this would be reflected in the quality of the writing. However, many anti-AI people seem to imply that it is somehow *more* dishonest to adopt a word choice suggestion from AI than to accept the *exact same* suggestion from one’s PhD supervisor. This seems patently unjustifiable to me. How is the former plagiarism/deceptive/dishonest while the latter isn’t?

    By demanding transparency on AI usage that (i) will obviously bias some editors/reviewers negatively and (ii) cannot be reliably detected (IF AI is indeed ONLY doing polishing), I think we are simply encouraging people to be dishonest by lying on their disclosures, in a way that is not good for academia.

    For an analogy, consider requiring students to do an online quiz “closed-note” without any form of accountability. Students know that (i) it is extremely easy to simply look at their notes/google, (ii) there is low likelihood they will get caught (provided that they do extensive research into each answer instead of just copy-pasting google answers) and (iii) many of their peers will cheat. Given all three, many (perhaps most) good students will probably cheat in this way, because they will view the policy as unfair and unreasonable. I think most teachers would view this as poor pedagogy.

    Requiring AI disclosures for small things like polishing is analogous. In practice, then, I anticipate that many people will simply continue to use AI and not report it, and I do not blame them. Given how difficult it is to get published in analytic philosophy journals, and the plausible unreasonableness of the policy, I would not fault anyone for shirking the requirement.

    A more reasonable requirement is one like the journal Philosophy and Technology currently employs. Here is their policy:

    “Large Language Models (LLMs), such as ChatGPT, do not currently satisfy our authorship criteria. Notably an attribution of authorship carries with it accountability for the work, which cannot be effectively applied to LLMs. Use of an LLM should be properly documented in the Methods section (and if a Methods section is not available, in a suitable alternative part) of the manuscript. The use of an LLM (or other AI-tool) for “AI assisted copy editing” purposes does not need to be declared. In this context, we define the term “AI assisted copy editing” as AI-assisted improvements to human-generated texts for readability and style, and to ensure that the texts are free of errors in grammar, spelling, punctuation and tone. These AI-assisted improvements may include wording and formatting changes to the texts, but do not include generative editorial work and autonomous content creation. In all cases, there must be human accountability for the final version of the text and agreement from the authors that the edits reflect their original work.”

    P&T is ranked very highly in terms of its impact factor, and clearly the editors have reflected carefully on this issue. This policy seems very reasonable to me.

    There is a further question, of course, as to whether one should simply choose journals that have better policies like the one above, instead of lying about one’s AI usage at another journal. I personally do not think it is morally wrong to submit and lie about the disclosure (or perhaps it is wrong but excusable), at least for early-career philosophers who are already at a disadvantage and really did only use AI for copy-editing. But based on my conversations with others, I do not suspect that my opinion is the majority one!

    1. Anonymous

      About disclosing what your supervisor does: I’ve never heard of a PhD supervisor going through and rewording/correcting an entire paper. If this did happen I would think something had gone wrong pedagogically, unless the supervisor was also a coauthor of the paper. Also, it’s usual to disclose any feedback you’ve received from people in the acknowledgments.

      On the other hand, using AI to fix prose errors might be more like a beefed-up grammar check than feedback from a human, and nobody thinks you need to report using spellcheck. I haven’t used AI myself for this purpose, and I doubt there is a sharp line for when such use changes from spellcheck to quasi-authorship. So overall I’m not sure what to think about how disclosure policies should be phrased.

      I am sympathetic to the difficulty of working in a non-native language. I don’t care when I get papers to review that have some ‘unusual phrasing’ (as distinct from typos), but I’ve heard of cases where reviewers get pissy about this. I also really dislike the style of prose normally generated by LLMs, but maybe this isn’t so bad when they merely correct human-produced text. While I’d likely have some negative disposition towards seeing an AI declaration, I would more strongly judge a paper that I could tell used AI but didn’t disclose it. A major risk of not declaring AI use is that if a reviewer thinks the paper may have been AI-generated, the paper will definitely get rejected.

      If possible, I’d suggest that the OP explain that they used AI for text correction because they are not a native speaker – this would assure me that there was a reason for doing so and that the author was not merely trying to take shortcuts.

  5. Anonymous

    I consider that you should acknowledge a PhD supervisor’s contribution to developing your ideas. And if you use something they’ve said word-for-word, this should be specified. Likewise for AI.

  6. Anonymous

    Interesting discussion! I think I’m convinced by the above long argument that disclosure of AI usage shouldn’t be mandatory, even though, as someone else mentions, we do acknowledge when other people help with the manuscript. I think it’s perhaps not clear what such a disclosure does. If parts of the paper are written by AI, then it seems to me that the paper simply shouldn’t be considered for publication. If the paper is edited by AI but poorly, then that will be reflected in people’s judgment of the overall quality of the paper. If the paper is edited by AI but well, then we as friends of the author may still think this practice is counterproductive in the long run, etc., but we as readers of the manuscript have nothing to complain about. I think we shouldn’t use AI at all in writing, but I also think that nothing is accomplished by having an AI declaration on something that the reviewers nevertheless think is publishable. It’s like saying “these papers have all passed the bar … but some of them are second-class citizens”.

  7. Anonymous

    To add to your data, I agree with the first and third Anonymous (above). I’d be negatively disposed toward a submission I was reviewing were I to be told AI was used for word choice or “polish.” Of course, I’d be much more negatively disposed if I detected it myself and it was not disclosed, in which case I’d report my suspicion to the editors and recommend that some action be taken.

    You should definitely disclose it, even if we do not have a norm of disclosing absolutely every outside source of improvement in your ideas or word choices, such as those of your advisors (who are presumed to be your teachers and collaborators at the dissertation stage; in most disciplines they’re automatically co-authors). If the reasons for AI disclosure are sound — and they’ve been well presented here and elsewhere — then the fact that another special case of free help might be justifiably kept hidden is just noise. All else equal, the permissibility of one outside source of help counts against the permissibility of another (because the more there are, the less we can reasonably credit the author with what we’re reading). So, again, it is wrong not to disclose use of AI, even just for polish.

    I do worry this is unfair to non-native speakers, but I don’t see a justifiable alternative at this point. Maybe Grammarly, although I hear it’s starting to get more LLM-like.

  8. Anonymous

    Here is a relevant discussion paper. https://www.tandfonline.com/doi/full/10.1080/08989621.2025.2542197

  9. Avoid

    I agree with some of the comments, and would suggest avoiding submitting to journals with a mandatory disclosure policy if this is at all feasible for the OP. It’s beyond the anecdotal—there are ample results on all sorts of bias against AI-assisted/generated texts, including judgments about the competence of the human authors and about the quality of the texts. I do worry about AI slop bringing all sorts of problems, but I don’t agree with mandatory disclosure in the review process (or even afterwards) of responsible AI use at a time of such ample bias.

    AI use is an umbrella term containing a world of difference. Imagine someone who dopes in order to train more, rather than someone who slacks off and then cheats in the competition. I have tried to use LLMs to edit my finished drafts rather liberally, and it has taken me a great deal of time to engage with their edits (without a clear result so far), far more time than if I edited by myself. Maybe this is time wasted, or maybe it’s a learning experience—in any case, it’s my choice to try, and I can’t see anything wrong with it. Why, on earth, should “friends” or “readers” care?

    That being said, bias against AI may still bring a minor advantage for non-native speakers: the imperfections and quirkiness may be valued more than in the past. To non-native students: I encourage you to try AI editing, and I also encourage you to try AI-free writing and to embrace your own quirkiness.

    1. Anonymous

      I agree totally! It is sometimes said that artists are a bad judge of their own merits, and I think the same is true of philosophers. For many of us, reading our writing is painful because what stands out are the awkward idiosyncrasies and weird choices. But what strikes *the writer* as awkward and ungraceful may strike *others* as the mark of personal style and unique habits of thought. What is so lacking in much AI writing is precisely this: the writing sanded down to an unobjectionable but unexceptional mush of fluent-seeming phrases. In reading the writing of my friends and colleagues, I am often struck by the fact that the things they don’t like about their writing are deeply connected to what I admire about their thinking. I think writers are not well-positioned to assess what makes their writing good and AI writing tools can encourage them to get rid of those aspects altogether.

  10. Anonymous

    As a native speaker of a small language and a non-native speaker of English, but with 15 single-authored full-length articles plus my PhD, MA, and two BA dissertations in English, I have never used AI, and I hope to avoid it for as long as I can. I would also strongly discourage using it, even if only to smooth out infelicities in your writing.

    Here are a few arguments:

    First, if you don’t know the language you are trying to write in well enough to write clearly in it yourself in the first place, how will you know whether the AI gives you the right result? You won’t. Instead you are outsourcing your judgement.

    Second, it’s not just that you won’t know; we also know that AI often gets things wrong – in particular when you are using it for sophisticated writing, such as writing a philosophy paper. Personal evidence: I have sometimes had to translate part of my writing into third languages (beyond my native one and English) that I don’t command as well, and then I always double-check with native speakers of that language. It has turned out that automatic translations often get things weirdly wrong. I also see this sometimes when people who are used to larger languages like English or German translate things into my smaller native language: it typically turns out clunky and uncanny-valley-level ‘off’.

    Third, much bigger and more general: I think AI heralds a massive disaster for humanity on several levels, including morally, epistemically, aesthetically, and prudentially. It undermines authentic and autonomous human decision-making; it risks decreasing general intelligence, understanding, and competence levels on the part of people in general, as we fail to learn skills on our own and instead outsource them to the tech; it undermines art creation by generating nonsense material; and it risks putting people out of their jobs… And this is still only where we are at with contemporary “narrow” AI, while a lot of competent people simultaneously warn that “general” AI could wipe us all out. I am increasingly of the opinion that LLMs should never have been invented.

    With this in mind, I think OP’s original questions of: “When you see such a declaration (specifying it’s for polishing, not content generation), does it unconsciously incline you to see the author as less competent? Does it raise any flags or make you question the originality of the work, even if the ideas are 100% the author’s? (…)” can be answered pretty straightforwardly.

    First, I think an author who uses AI for their academic work contributes to a general trend with tremendous negative effects for humanity (point 3). Second, it is likely they won’t know the language they are writing in well enough to do so reliably (points 1 and 2). So, yes, AI declarations raise red flags. Massive red flags. Don’t take shortcuts. Please.

  11. Anonymous

    I think the defenders of AI are missing a key point. Disclosure of AI use is functioning like disclosures of financial support. I think all of you would like it if those doing research on obesity, for example, disclosed that their research was funded by Coca-Cola (this is not a contrived case, at all … I attended a conference with obesity researchers funded by Coke). The fact that Coca-Cola funds the research does not mean the findings reported are false, or that there are other fundamental problems with the research. But, readers do have the right to know this fact.

  12. Anonymous

    I know what copy editing is, but I don’t have a clear idea what “polishing” means, because people use it to mean both copy editing and more substantive interventions. (Though to be fair, that can happen with copy editing too.)

    If you disclose that you used AI to copy edit, compile your references, and that kind of thing, I will be fine with that. If you disclose that you used it to actually “polish” up your paper, however, then I will hold that against you, though you will also get some props for honesty. The trouble is that I don’t know how to weigh the honesty against the polishing, because I don’t know what you think you mean by “polishing”.

    The problem, to my mind, is that it’s clear, from the discussions of the topic I’ve seen online, that philosophers do not have a good, clear idea of which kinds of uses are problematic or downright unacceptable. Indeed, the conversation has revealed, to my horror, that philosophers do not have a very good grasp of good citation practices in the first place (e.g. I was astonished to see someone on DN say that we don’t and shouldn’t acknowledge the RAs who compile our indexes). So I don’t actually trust that when I’m told it was “just polishing” it really was “just” or, indeed, “polishing” of an unproblematic variety. My (undergraduate) students have this problem too: while many knowingly use LLMs to cheat, many just don’t realize that the uses to which they put them are not acceptable.

    And yes, I think that we should acknowledge our supervisors when we publish material to which they made significant contributions. And I’m not sure that supervisors should be regularly rewriting sentences for their students, either, or that their students should then just accept the rewritten sentence.

    Incidentally, I referee work by people for whom English is a non-native language all the time (or, at least, it seems like it). It is generally of a high calibre, even when features of the prose make it clear that English is not the writer’s native language. Sometimes these papers contain quite a few errors (I have done some copy editing on the side for a very long time, so I catch quite a lot). I highlight them myself, within reason, and trust that the author and copy editor will sort out most of the rest. That’s not to say that I have no bias, because maybe I’m blind to it, but I can guarantee that I have a greater bias against LLM use, just because it has significantly degraded my daily working conditions, and because so many professionals just don’t seem to get it.

    1. Michel

      Hmm. I didn’t intend to be anonymous. This is the commenter who normally signs off as ‘Michel’.

  13. Anonymous

    Fwiw, I think you probably should have just lied. I think that non-native speakers are discriminated against in publication in virtue of not having “idiomatic English” (whatever exactly we want to say that is). I think it’s perfectly permissible to use AI tools to try to overcome that discrimination. Since there’s nothing wrong with using AI to overcome that discrimination, I see no reason why you would have to report it, especially when doing so could produce a negative impression of your manuscript.

  14. Anonymous

    An observation based on the preceding:

    1) “You are wrong if you do not disclose.”

    2) “I would hold an AI disclosure against you, depending on what you disclose.”

  15. OP

    OP here. Thanks, everyone, for all the helpful discussion and suggestions. I’d like to offer a few clarifications regarding my original question.

    To be clear, when I say “polishing,” I am referring specifically to using AI to detect inappropriate word choices, refine awkward sentence structures, and eliminate unnecessary wordiness. As a professional philosopher, I am perfectly capable of evaluating the AI’s output and can easily spot when its suggestions misinterpret my original meaning.

    My competence in academic English writing isn’t really the issue. I have several publications in reputable English-language philosophy journals that were written long before AI tools became a widespread concern in academia, all without such assistance.

    I recently chose to “polish” my work with AI tools primarily because I share the worry, echoed by some commenters, that editors and reviewers may hold a conscious or unconscious negative bias against imperfect English.

    To be honest, I’ve noticed this tendency in myself; I am sometimes personally more inclined to view a student’s coursework negatively when the writing is flawed to some degree. I simply assume that editors and reviewers are not immune to this sort of bias either.

    1. Anonymous

      I am also a non-native English-speaking philosopher, and even when I use AI for language, it’s almost always to check whether there is any awkwardness I haven’t noticed. I deeply sympathize with the OP. I am saddened by the strongly negative reactions to AI use for language in this thread, which indicate that non-native English speakers should continue to pay for expensive copy-editing services (which are often worse than AI) if they ever want to check for possible awkwardness without inviting enormous negative reactions (reactions not based on the quality of the paper).

    2. avoid

      “My competence in academic English writing isn’t really the issue.” OP, I get you. I have found that when it comes to AI-related online debates, awkward arguments, misinterpretations, and odd assumptions (and oddly strong emotions) are commonplace. It’s one of the topics on which I haven’t found online discussions particularly helpful, and I’ve started to disengage. Glad that you still found some helpful suggestions, though.

  16. Anonymous

    A few things:
    I am surprised by how many admit that they would be negatively disposed towards papers that use AI for copyediting. I have used it to grind down some baroque formulations (non-native speaker, you guessed it), and I am appalled that people would judge my work based on this, rather than on the actual thinking. Of course, we should be responsible authors and disclose this use when asked to, but reviewers should be responsible too and judge arguments and papers on their philosophical content.

    Also, many seem to seriously underestimate the disadvantage of being a non-native speaker. I am all in favor of linguistic quirks and whatnot, and against the leveling down that comes with excessive use of LLM copyediting. But the idea that non-native speakers should now have a small advantage is ridiculous. When I review and am allowed to see the other reviews, I often see people being scolded for non-idiomatic formulations, even in papers that struck me as perfectly clear and well formulated.

    Here’s a thought experiment: Suppose your second language was hegemonic and that you had to publish in highly selective journals in that language for tenure and promotion. Now imagine that a widely available tool could easily help you iron out some of your grammatical mistakes and weird formulations. Finally, imagine that gatekeepers to these journals (all native speakers, of course) would perceive any usage of these tools very negatively. Does that strike you as fair?

    1. Anonymous

      I completely agree with this. I hope more people can accurately represent and appreciate the situation of non-native English speakers.

    2. Michel

      Unless I missed it, nobody has said they oppose it for copy editing. It’s “polishing” and more that are causing trouble.

    3. Anonymous

      Non-native English-speaking philosopher working at an R1 here. I absolutely agree with what is being said. The reviews of my papers always include something along the lines of “this is an awkward phrasing” or “an infelicitous expression.” Thus, I have worked hard to memorize the “American way” of arguing.

      My last rejection said that my paper was “readable, but quite schematic.” One can never win.

  17. Anonymous

    (A different anonymous.) I am a hardliner against AI use in writing and have defended this position in print. I would prefer that journals treat disclosures of using AI to produce content as they would disclosures of having plagiarized. However, I would make one exception: (a) if AI was being used for linguistic help, (b) by a non-native speaker, and (c) the disclosure makes crystal clear it has not been used to generate content/words/ideas. I think this use is like getting somebody else to help you edit, not getting somebody else to write for you.

    Nevertheless, I don’t consider my position on such statements relevant to my job as a reviewer. I would work not to take them into consideration, since they are a matter for the journal’s policies. In fact, I would hope such disclosures would not be passed on to me by the editor.
