
(This was originally going to be a "quick take" but then it got a bit long. Just FYI.) There's this weird trend I perceive with the personas of LLM assistants over time. It feels like they're getting less "coherent" in a certain sense, even as the models get more capable. When I read samples from older chat-tuned models, it's striking how "mode-collapsed" they feel relative to recent models like Claude Opus 4.6 or GPT-5.4.[1] This is most straightforwardly obvious when it comes to text...
