The Philosophers' Cocoon
A safe and supportive forum for philosophers.
Owned & Moderated by Marcus Arvan (University of Tampa)
Contact: marvan@ut.edu
Blog mission & moderation policy
recent posts
- Leaving an M.A. off your CV due to potential bias against the institution?
- An opinion on APA’s decision to terminate its 2+1 virtual pilot early – by Kino Zhao
- Tips for students and/or early career folk who face a mental block in writing papers?
- Does time under review predict particular editorial decisions?
- Does it matter how recent a job applicant’s teaching experience is?
about
Category: Artificial Intelligence
- In our new "how can we help you?" thread, a reader asks: What AI tools have you found most useful for doing philosophy? What philosophy-related tasks have you found them most useful for? (I am asking because I use very few AI tools but want to start using more.) Do any readers have any helpful…
- This is just a quick note that I have a new op-ed out at Scientific American, "AI Is Too Unpredictable to Behave According to Human Goals." I hope some of you find it of interest!
- In our most recent "how can we help you?" thread, a reader asks: I've received a referee report from a journal that looks very much like it was written using AI. (Five short paragraphs structured like an undergrad essay, vague and general, slightly contradictory—it says my arguments are compelling but there's room for rebuttal.) Should…
- Ever wonder why large language models (LLMs) keep threatening people or otherwise violating their 'guardrails' despite the vast resources being spent on AI safety research? My new paper, "‘Interpretability’ and ‘Alignment’ are Fool’s Errands: A Proof that Controlling Misaligned Large Language Models is the Best Anyone Can Hope For," shows that it is because AI…