In this episode, Thomas Larsen of the AI Futures Project joins us to dissect the public's reaction to the widely influential paper "AI 2027," which he co-authored, and to make the case that superintelligent AI is highly likely within our lifetimes, and plausibly imminent within the next few years. Thomas also lays out why he's pessimistic that risks from misaligned and misused AI will be handled in time. This was a fascinating and thought-provoking discussion on the challenges ahead in AI security.

Check out "AI 2027" here: https://ai-2027.com
Learn more about the AI Futures Project here: https://ai-futures.org

Follow the rest of The Intelligence Horizon!
Instagram: @theintelligencehorizon
TikTok: @theintelligencehorizon
Spotify: The Intelligence Horizon
LinkedIn: The Intelligence Horizon

Feel free to also reach out at theintelligencehorizon@gmail.com

Co-hosts: Owen Zhang and Will Sanok Dufallo
Video Producer: Kaitlyn Smith
Social Media Manager: Nancy Javkhlan