Agents don’t need bigger models. They need better tools.

Morph trains coding subagents. Not for humans. For frontier models.

Fast Apply edits at 10,000 tokens/sec. WarpGrep handles code and log search. Both keep the main model’s context clean, because when context gets too large, performance drops.

Now Morph is pushing coding subagents even faster. One newer model runs at 33,000 tokens/sec: https://docs.morphllm.com/sdk/components/compact

🎙️ Tejas Bhakta, Founder & CEO, Morph

01:30 Fast Apply + WarpGrep
02:26 Context fills up around 100k
02:38 Keep the main model context clean
03:31 “You can’t scale human attention 100x”
07:28 Founders are missing how to value human attention
08:30 New model at 33,000 tokens/sec
10:08 Better, faster, and cheaper than the frontier
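The "clean context" idea above can be illustrated with a minimal sketch: a subagent does the heavy search work and hands back only a compact digest, so raw files and logs never enter the main model's context window. All names here are hypothetical illustrations of the pattern, not Morph's actual API.

```python
# Hypothetical sketch of the "clean context" pattern: delegate search
# to a subagent and pass only a small digest to the main model,
# instead of streaming whole files into its context.
# (Illustrative names only -- not Morph's SDK.)

def search_subagent(query: str, corpus: dict[str, str]) -> str:
    """Scan the corpus and return only matching lines (a compact
    summary), never whole files."""
    hits = []
    for path, text in corpus.items():
        for lineno, line in enumerate(text.splitlines(), 1):
            if query in line:
                hits.append(f"{path}:{lineno}: {line.strip()}")
    return "\n".join(hits[:20])  # cap what reaches the main model

def main_agent_step(task: str, corpus: dict[str, str]) -> str:
    # The main model's prompt gets the task plus a small digest,
    # keeping its context far below the point where quality drops.
    digest = search_subagent("TODO", corpus)
    return f"task: {task}\nrelevant lines:\n{digest}"

corpus = {"app.py": "x = 1\n# TODO: handle errors\nprint(x)\n"}
print(main_agent_step("fix error handling", corpus))
```

The point of the pattern is the cap on returned tokens: the subagent may read megabytes, but the main model only ever sees the digest.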
🎧 START pod: Natalie Aresta-Katz, Cofounder & CEO, Regbase: “Tracking global laws, grants and consultations”
🎧 START pod: Adil Mania, Founder & Host, Silicon Mania: “Make Tech Fun Again”
🎧 START pod: Liam Karlsson & William Gyltman, Co-Founders, Rankad.ai: “Turn AI visibility into revenue. On autopilot.”
🎧 START pod: Matthew Chen, Founder & CEO, Laurence: “Autonomous performance marketing”