For Humanity: An AI Risk Podcast

We’re Racing Toward AI We Can’t Control | For Humanity #79

February 14, 2026 · 1h 9m
Episode Description from the Publisher

In this episode of For Humanity, John sits down with AI professor and safety advocate David Krueger to discuss his new nonprofit Evitable, the race toward superintelligence, AI alignment, job loss, geopolitics, and why he believes we have less than five years to change course.

David shares his journey from deep learning researcher to public advocate, his role in the 2023 Center for AI Safety extinction risk statement, and why he believes AI is not just a technical problem but a governance and public awareness crisis.

Together, they explore:

* Why AI extinction risk is real
* Why research alone won't save us
* The dangers of the AI chip supply chain race
* Job displacement and political blind spots
* Alignment skepticism
* Whether treaties can work
* What gives David hope in 2026

If you've ever wondered whether AI risk is overblown, or not taken seriously enough, this is a conversation you don't want to miss.

🔗 Follow David Krueger
Learn more about Evitable
David's Substack
Follow David on Twitter

📺 Subscribe to The AI Risk Network for weekly conversations on how we can confront the AI extinction threat. Get full access to The AI Risk Network at theairisknetwork.substack.com/subscribe
