In the 30th episode we review the bootstrap method, introduced by Bradley Efron in 1979. The bootstrap is a non-parametric resampling technique that approximates a statistic's sampling distribution by repeatedly drawing with replacement from the observed data, allowing estimation of standard errors, confidence intervals, and bias without relying on strong distributional assumptions. Its ability to quantify uncertainty cheaply and flexibly underlies many staples of modern data science and AI: it powers model evaluation and feature-stability analysis, inspired ensemble methods such as bagging and random forests, and informs uncertainty calibration for deep-learning predictions, making contemporary models more reliable and robust.

Efron, B. "Bootstrap methods: Another look at the jackknife." The Annals of Statistics 7.1 (1979): 1-26.
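The resampling procedure described above can be sketched in a few lines. Below is a minimal illustration (the sample values are hypothetical, and the helper name `bootstrap` is our own): resample the data with replacement many times, recompute the statistic on each resample, and read off a standard error and a 95% percentile confidence interval from the resulting distribution.

```python
import random
import statistics

def bootstrap(data, stat, n_resamples=2000, seed=0):
    """Approximate the sampling distribution of `stat` by drawing
    samples of the same size with replacement from the observed data."""
    rng = random.Random(seed)
    n = len(data)
    return [stat([rng.choice(data) for _ in range(n)])
            for _ in range(n_resamples)]

# Hypothetical observed sample, for illustration only.
sample = [2.1, 3.4, 1.9, 4.2, 2.8, 3.1, 2.5, 3.9, 2.2, 3.6]

reps = bootstrap(sample, statistics.mean)
se = statistics.stdev(reps)  # bootstrap estimate of the standard error
reps.sort()
ci = (reps[int(0.025 * len(reps))],  # 95% percentile confidence interval
      reps[int(0.975 * len(reps))])
print(f"mean={statistics.mean(sample):.2f}  SE~{se:.2f}  "
      f"95% CI~({ci[0]:.2f}, {ci[1]:.2f})")
```

The same recipe works for any plug-in statistic (median, correlation, a model's accuracy score) by swapping the `stat` argument, which is what makes the method so flexible.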
Data Science #34 - The deep learning original paper review, Hinton, Rumelhart & Williams (1985)
Data Science #33 - The Backpropagation method, Paul Werbos (1980)
Data Science #32 - A Markovian Decision Process, Richard Bellman (1957)
Data Science #31 - Correlation and causation (1921), Sewall Wright