
In this episode of For Humanity, John sits down with AI professor and safety advocate David Krueger to discuss his new nonprofit Evitable, the race toward superintelligence, AI alignment, job loss, geopolitics, and why he believes we have less than five years to change course.

David shares his journey from deep learning researcher to public advocate, his role in the 2023 Center for AI Safety extinction risk statement, and why he believes AI is not just a technical problem but a governance and public awareness crisis.

Together, they explore:

* Why AI extinction risk is real
* Why research alone won't save us
* The dangers of the AI chip supply chain race
* Job displacement and political blind spots
* Alignment skepticism
* Whether treaties can work
* What gives David hope in 2026

If you've ever wondered whether AI risk is overblown, or not taken seriously enough, this is a conversation you don't want to miss.

🔗 Follow David Krueger

* Learn more about Evitable
* David's Substack
* Follow David on Twitter

📺 Subscribe to The AI Risk Network for weekly conversations on how we can confront the AI extinction threat. Get full access to The AI Risk Network at theairisknetwork.substack.com/subscribe
