In our new "how can we help you?" thread, a reader asks:
Lately, I’ve been trying out ChatGPT-4o on some of my own manuscripts—just asking it to summarize the papers and seeing what it picks up. To my surprise, it actually does a pretty good job. It seems to understand what I’m trying to say, and in some cases it even puts things more clearly and accessibly than I did in the original.
What really stood out to me is that it doesn't make the kinds of major interpretive mistakes I sometimes see in human referee reports. That got me thinking: has anyone else had a similar experience using LLMs like this? Is this kind of thing common, or am I just being overly impressed because it's "getting me"?
Also, I wonder what people think about using LLMs to help with referee work. I’m definitely not saying referees shouldn’t read the paper themselves—but given how easily LLMs can spot structure and summarize arguments, I’m curious whether others see a role for them in helping us avoid misreadings or blind spots.
Would love to hear people’s thoughts or experiences.
I haven't used AI for these purposes. I'm curious to hear from readers, and I wonder what people think about the ethics of using LLMs to help avoid misreadings when refereeing (provided one doesn't defer to them or use them to write the referee report). One serious concern I have here is that uploading an unpublished paper to an AI in effect shares that content (an author's intellectual property) with AI companies without the author's consent.
What do readers think? Have any of you used AI to summarize or "referee" your own papers to help you refine them in your research process? Do you find, like the OP, that they tend to avoid misreadings? Etc.