In our newest "how can we help you?" thread, a reader asks:

I'd like to hear people's thoughts about AI researchers. I know of a peer-review-accepted paper written by an AI researcher in computer science, and it seems to suggest that automated science is becoming a reality. Could the same thing happen in philosophy? Could AI researchers automate philosophy and replace us? I'd be interested to hear people's thoughts on this.

(Just to clarify, I'm not talking about students cheating, but about professional philosophy.)

Anyone have any thoughts on this?

7 responses to “Thoughts on “AI Researchers”?”

  1. hmm let me try

    Okay, I guess I can throw out some questions first, as someone who has been quite neutral between AI-love and AI-hate.
    The paper did not actually reach a publishable standard, as the experimenters noted. So one question is whether the ability to produce publishable papers will plateau or continue to improve quickly.
    Current AI ability at writing philosophy papers lags well behind science, so another question I have is whether there is any real difference between these areas in how automatable they are, or whether it is just a matter of vastly imbalanced amounts of training data.
    questions aside, I am neither optimistic nor pessimistic about the future of human research, although this may not sound helpful…

  2. PhD Candidate

    There is a certain kind of philosophy paper that makes the same predictable moves about the same sorts of arguments: very formalized and not particularly groundbreaking. I think AI will probably be able to write these at some point, especially reply pieces that focus on one individual argument.
    But so many philosophy papers are not like this, and I am not sure (perhaps even doubtful) whether AI will ever be able to convincingly produce essays like the kind Cheshire Calhoun or Susan Wolf might write (and that is the kind of philosophy I prefer to read and emulate!).

  3. e

    If it’s not worth writing, it’s not worth reading

  4. Ben

    We should ask ourselves about the purpose of publication. Philosophy as a discipline already suffers from major publication delays, so what do we get if in the future we have a bunch of AI-researchers also submitting to congested journals? The AIs will no doubt produce more papers more quickly than humans can. But aside from publication delays, we are already swamped with more literature than we can read and absorb.
    If future AI can do research of a nature or quality that exceeds what humans are typically able to do, and if that work moves philosophical inquiry or pedagogy forward in meaningful ways, then that might be valuable. On the other hand, if it’s just about producing AIs that can mimic a human philosopher and produce exactly the kinds of papers we ordinarily publish, what’s the point? That would be just like producing more human philosophers. And we already have way more human philosophers than we can employ.
    I think AI tools have some good uses (though, full disclosure, I refuse to use them in my daily life), but I don’t think “imitate human research” is a particularly helpful goal. AI would be better used to address actual disciplinary needs, such as decreasing editorial burden by partially automating referee finding, providing live captioning or translation during talks, or organizing 300 conference submissions into topically coherent sessions.
    Some people seem to think that it’s inevitable for AI to take over lots of current human work. But we are in charge of making the rules and regulations. Journals are free, for instance, to have a policy that they do not accept papers written by AIs. So the answer to the poster’s question is not about predicting the future, it’s about our own collective decisions.

  5. ATG

    I think no one knows the answer to this question. What seems clearer, I think, is that certain fields, including philosophy, might be better protected from the AI ‘menace’ than others. As far as I know, some AI systems are now able to write math proofs, for example.
    Ultimately, of course, as in many other areas of life, it may come down to us humans to decide how much we want to rely on AI. It seems weird, as an individual choice, to allow oneself to be replaced by AI. But then, as we know, what is individually irrational might be collectively rational (or vice versa). (Dune comes to mind…)

  6. MFS

    I was recently introduced to a new crop of AI research tools (see list below) that can generate surprisingly competent literature reviews. In just a few minutes, and with relatively little expertise, I was able to put together plausible-seeming surveys of literatures I knew very little about. The papers were certainly bland (“insights from literature A applied to problem B”), but writing them myself would have taken months of research and reading. Most importantly, because these tools are directly integrated into existing research databases, they direct the reader to real sources that are actually relevant to the paper (whether the sources actually say what the research tools claim they do was not always clear, however).
    I expect that young scholars (e.g., current grad students and post-docs) will learn to use these tools to massively increase their research output: lots of papers exhibiting a mastery of lots of different literatures. Such use won’t constitute “AI Researchers,” but questions about authorship in such cases seem pretty ambiguous to me. In any case, I worry about whether journals are prepared for the increase in submissions that is likely to result from the deployment of these tools.
    Ithaka S+R’s Generative AI Product Tracker: Discovery Tools
    https://docs.google.com/document/d/1yg7KJmMl7d_xZAGgHiXc-9iSNT5vmmp1iyK5zYcS2IE/edit?tab=t.0#heading=h.3kg5cx3cjplv

  7. ChastenedAuthor

    I think, for the foreseeable future, we’re most likely to see AI do things like lit reviews that MFS discusses above.
    Also, I think AI can be used as a robustness tool for different methods, so a researcher may not necessarily have to be an expert in a particular methodology in the future.
    I think there’s a use for AI as a baseline reviewer with hopes of doing away with wildly inaccurate or mean reviews, but AI can’t be trusted to judge the quality of philosophy yet.
    As an example, you can get AI to support logical contradictions, because AI’s job is to defer to human inputs and help the user, not impose its own “judgment”.

Discover more from The Philosophers' Cocoon
