In January 2025, Chinese AI lab DeepSeek released DeepSeek R1, a model matching GPT-4-class performance at a fraction of the training cost. The release wiped roughly $600 billion off NVIDIA's market cap in a single day. Twelve months later, the ripple effects are still reshaping the AI industry. This episode cuts through the "China beats America" headlines to explain the actual technical and economic implications.

DeepSeek R1 benchmarked comparably to OpenAI's o1 on reasoning tasks. The shock wasn't performance; it was cost. DeepSeek claimed under $6 million in training costs, versus the hundreds of millions spent on comparable Western models.

What changed: the assumption that massive compute spending creates an insurmountable moat for frontier AI models was proven wrong. Smaller labs with less funding can now compete effectively, and the result has turbocharged efficiency research across AI labs worldwide.

The DeepSeek moment was a genuine inflection point, not because China won an AI race, but because it proved the rules of competition differ from industry assumptions: efficiency matters as much as scale, open weights change deployment strategies, and the global AI ecosystem is multipolar in ways it wasn't two years ago.

Essential listening for data scientists tracking model economics, ML engineers exploring efficiency techniques, and tech leaders navigating AI geopolitics and competitive strategy.