The Intelligence Horizon

Thomas Larsen (AI 2027): We have to start preparing for AGI

November 16, 2025·1h 37m
Episode Description from the Publisher

In this episode, Thomas Larsen of the AI Futures Project joins us to dissect the public's reaction to the widely influential paper "AI 2027," which he co-authored, and to make the case that superintelligent AI is highly likely within our lifetimes, and plausibly imminent in the next few years. Thomas also lays out why he's pessimistic that risks from misaligned and misused AI will be handled in time. This was a fascinating and thought-provoking discussion on the challenges ahead in AI security.

Check out "AI 2027" here: https://ai-2027.com
Learn more about the AI Futures Project here: https://ai-futures.org

Follow the rest of The Intelligence Horizon!
Instagram: @theintelligencehorizon
TikTok: @theintelligencehorizon
Spotify: The Intelligence Horizon
LinkedIn: The Intelligence Horizon

Feel free to also reach out at theintelligencehorizon@gmail.com

Co-hosts: Owen Zhang and Will Sanok Dufallo
Video Producer: Kaitlyn Smith
Social Media Manager: Nancy Javkhlan
