Conspicuous Cognition Podcast

Should We Care About AI Welfare? (with Robert Long)

April 18, 2026·1h 28m
Episode Description from the Publisher

Almost all of the discussion about the risks associated with AI focuses on the dangers that increasingly advanced AI systems pose to us — to humanity. But what about the dangers that we might pose to them? As these systems become increasingly intelligent and agentic, AI companies, policymakers, and ordinary citizens need to start taking the possibility of AI consciousness and welfare seriously. If we are in the process of bringing complex and sophisticated minds into existence, how should we understand and treat such minds?

In this episode, Henry and I discuss these issues with Robert Long, founder and executive director of Eleos AI, a research nonprofit dedicated to understanding and addressing the potential wellbeing and "moral patienthood" of AI systems. Rob did his PhD in philosophy at NYU under David Chalmers, and is the co-author of two of the most important papers in the emerging field of AI welfare: "Consciousness in Artificial Intelligence" and "Taking AI Welfare Seriously".

This was a really fun, informative, and wide-ranging conversation. Among other topics, we discussed:

* Why Rob disagrees with previous guest Anil Seth in taking the possibility of AI consciousness very seriously.
* Why "fancy autocomplete" dismissals of large language models miss the point, and what, if anything, we can learn about an AI model's experiences by talking to it.
* The difference between consciousness and the kinds of motivations and interests that might actually ground moral status, and whether AI systems could have one without the other.
* What Rob found when he conducted the first externally commissioned welfare evaluation of a frontier AI model, Claude, and why Claude appears to have an inflated self-conception of what it wants.
* Rob's experiments with Claude Mythos, an AI model so advanced it hasn't been released to the public yet.
* Why the fact that Anthropic writes Claude's character arguably doesn't settle whether Claude has genuine preferences and values — and the difficult philosophical questions this throws up.
* The "willing servitude" problem: if we succeed in building AI systems that genuinely love being helpful, is that a good outcome or a horrifying one?
* How AI welfare connects to AI safety, and why caring about model wellbeing may turn out to be pragmatically important for alignment even if you're skeptical about AI consciousness.
* Why AI welfare is already becoming a political and legal battleground.
* Practical advice for users: whether it's worth being polite to your chatbot, and what low-cost things you can do if you want to hedge against the possibility that these systems might matter morally.
* Whether discourse about AI consciousness functions as hype or propaganda for AI companies, and why Rob thinks AI companies actually have an incentive to downplay AI consciousness.

Links and further reading

* Eleos AI Research — Rob's nonprofit. Home to their research agenda, team page, and blog. If you want to follow the institutional effort on AI welfare, start here. They're also, as Rob mentioned in the episode, actively fundraising and hiring.
* "Taking AI Welfare Seriously" (Long, Sebo, Butlin et al., 2024) — the flagship report, co-authored with Jeff Sebo, David Chalmers, Jonathan Birch, and others. Argues that there's a realistic near-future possibility of conscious or robustly agentic AI systems, and lays out concrete steps AI companies should be taking now.
* "Consciousness in Artificial Intelligence: Insights from the Science of Consciousness" (Butlin, Long et al., 2023) — the "indicators" paper referenced several times in the episode. Surveys leading neuroscientific theories of consciousness and derives computational properties you'd look for in an AI system.
* Rob's Substack, Experience Machines — where Rob writes more informally. The piece we discussed in the episode, "Language models are different from humans, and that's okay," is a good entry point.
