
In this episode, Sebastian Raschka, independent LLM researcher and author, joins us to break down how the LLM landscape has changed over the past year and what is likely to matter most in 2026. We discuss the shift from raw model scaling to reasoning-focused post-training, inference-time techniques, and better tool integration. Sebastian explains why methods like self-consistency, self-refinement, and verifiable-reward reinforcement learning have become central to progress in domains like math and coding, and where those approaches still fall short. We also explore agentic workflows in practice, including where multi-agent systems add real value and where reliability constraints still dominate system design. The conversation covers architecture trends such as mixture-of-experts, attention efficiency strategies, and the practical impact of long-context models, alongside persistent challenges like continual learning. We close with Sebastian’s perspective on maintaining strong coding fundamentals in the age of AI assistants and a preview of his new book, Build A Reasoning Model (From Scratch). The complete show notes for this episode can be found at https://twimlai.com/go/762.
