Beyond The Prompt - How to use AI in your company

Nobody Is Getting New Manager Training for Their AI Team - with Dan Klein, UC Berkeley

April 15, 2026 · 1h 3m
Episode Description from the Publisher

Dan Klein, professor at UC Berkeley and CTO at Scaled Cognition, explains that AI systems generate answers based on patterns in language rather than verified knowledge. This makes them highly capable across many tasks, but also means they can produce confident answers even when they are not fully accurate. He introduces the “jagged frontier,” where AI performs very well in some areas and less reliably in others. Because responses are fluent and convincing, it is often hard to see where those limits are, which makes it important to stay engaged when using these systems. The conversation also explores hallucinations as a natural part of generative systems. In some cases, this is what makes them valuable, especially for creative or open-ended tasks, while in other cases reliability becomes more important. Finally, Dan highlights that working effectively with AI is a skill. As more people start using these systems in their daily work, knowing how to guide them, evaluate outputs, and apply them in the right contexts becomes increasingly important. He also shares how his team at Scaled Cognition is tackling this challenge by building AI systems with fundamentally different architectures, focused on determinism and reliability — aiming to ensure systems follow rules, reflect underlying data accurately, and behave predictably in high-stakes, policy-driven use cases. 
Key Takeaways:
- AI is designed to sound right, not to know it's right: models generate fluent answers without knowing whether they are correct, which means users need to actively evaluate outputs.
- You have to learn where AI works and where it doesn't: capabilities are uneven, and understanding those limits is key to using AI effectively.
- Working with AI shifts your role from creator to editor: instead of starting from scratch, you are reviewing, refining, and validating what the model produces.
- Most people are using AI without knowing how to manage it: skills like delegation, verification, and judgment are becoming essential, but are not widely taught.

Links:
Dan's LinkedIn: linkedin/dan-klein/
Scaled Cognition Website: scaledcognition.com
Scaled Cognition LinkedIn: linkedin/company/scaledcognition/
Scaled Cognition X: x.com/ScaledCognition

Timestamps:
00:00 Intro: Fluency vs Truth
00:34 Meet Dan Klein
02:53 Why Fluency Misleads
05:11 How LLMs Guess
07:30 What Is Hallucination
08:54 Deception and Alignment
11:22 Why Agents Break
12:48 Chaining and Determinism
16:01 When Hallucination Helps
22:33 Beyond Scale for Reliability
30:40 Synthetic Data Training
31:10 Enterprise Agent Use Cases
33:44 Healthcare Risks
39:13 Enterprise Literacy Gap
41:27 Delegation and AI Management
54:37 The Debrief

📜 Read the transcript for this episode: nobody-is-getting-new-manager-training-for-their-ai-team-with-dan-klein-uc-berkeley/transcript

For more prompts, tips, and AI tools, check out our website: https://www.beyondtheprompt.ai/ or follow Henrik or Jeremy on LinkedIn:
Henrik: https://www.linkedin.com/in/werdelin
Jeremy: https://www.linkedin.com/in/jeremyutley

Show edited by Emma Cecilie Jensen.
