Data Science Decoded

Mike E

We discuss seminal mathematical papers (sometimes really old 😎) that have shaped and established the fields of machine learning and data science as we know them today. The goal of the podcast is to introduce you to the evolution of these fields from a mathematical and slightly philosophical perspective. We discuss the contribution of these papers not just from a pure math aspect but also how they influenced the discourse in the field, which areas were opened up as a result, and so on. Our podcast episodes are also available on YouTube: https://youtu.be/wThcXx_vXjQ?si=vnMfs

  1. MAY 23

    Data Science #29 - The Chi-square automatic interaction detection (CHAID) algorithm (1979)

    In the 29th episode, we go over the 1979 paper by Gordon Vivian Kass that introduced CHAID (Chi-squared Automatic Interaction Detection), a tree-based partitioning method for exploring large categorical data sets. It iteratively splits records into mutually exclusive, exhaustive subsets based on the most statistically significant predictors rather than on maximal explanatory power. Unlike its predecessor, AID, CHAID embeds each split in a chi-squared significance test (with Bonferroni-corrected thresholds), allows multi-way divisions, and handles missing or "floating" categories gracefully.

    In practice, CHAID proceeds by merging the predictor categories that are least distinguishable (stepwise grouping) and then testing whether any compound categories merit a further split, ensuring parsimonious, stable groupings without overfitting. Through its significance-driven, multi-way splitting and built-in bias correction against predictors with many levels, CHAID yields intuitive decision trees that highlight the strongest associations in high-dimensional categorical data. A minimal sketch of the split-selection step appears after the episode list.

    In modern data science, CHAID's core ideas underpin contemporary decision-tree algorithms (e.g., CART, C4.5) and ensemble methods like random forests, where statistical rigor in splitting criteria and robust handling of missing data remain critical. Its emphasis on automated, hypothesis-driven partitioning has influenced automated feature selection, interpretable machine learning, and scalable analytics workflows that transform raw categorical variables into actionable insights.

    41 min
  2. FEB 4

    Data Science #24 - The Expectation-Maximization (EM) algorithm paper review (1977)

    In the 24th episode we go over the paper: Dempster, Arthur P., Nan M. Laird, and Donald B. Rubin. "Maximum likelihood from incomplete data via the EM algorithm." Journal of the Royal Statistical Society: Series B (Methodological) 39.1 (1977): 1-22.

    The Expectation-Maximization (EM) algorithm is an iterative method for finding maximum likelihood estimates (MLEs) when data is incomplete or contains latent variables. It alternates between the E-step, which computes the expected value of the missing data given the current parameter estimates, and the M-step, which maximizes the expected complete-data log-likelihood to update the parameters. This process repeats until convergence and guarantees a monotonic increase in the likelihood function, though it may reach only a local maximum, depending on initialization. A compact EM-for-Gaussian-mixtures sketch appears after the episode list.

    EM is widely used in statistics and machine learning, especially in Gaussian mixture models (GMMs), hidden Markov models (HMMs), and missing-data imputation. Its ability to handle incomplete data makes it invaluable for clustering, anomaly detection, and probabilistic modeling.

    In modern data science and AI, EM has had a profound impact, enabling unsupervised learning in natural language processing (NLP), computer vision, and speech recognition. It underlies probabilistic graphical models such as Bayesian networks and variational inference, which power applications like chatbots, recommendation systems, and deep generative models. Its iterative structure has also inspired optimization techniques in deep learning, such as EM-inspired variational autoencoders (VAEs), demonstrating its ongoing influence on AI advances.

    33 min
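
Code Sketches

To make the CHAID discussion concrete, here is a minimal Python sketch of the split-selection step: pick the predictor whose contingency table against the target is most significant under a Bonferroni-adjusted chi-squared test. This is an illustrative simplification, not Kass's full procedure: the function name best_chaid_split and the alpha threshold are our own, the correction here is over the number of candidate predictors (the paper corrects for the number of possible category groupings), and the stepwise merging of indistinguishable categories is omitted.

    import numpy as np
    from scipy.stats import chi2_contingency

    def best_chaid_split(predictors, y, alpha=0.05):
        """Return (name, adjusted p-value) of the most significant categorical
        predictor, or None if no split passes the significance threshold."""
        best = None
        for name, x in predictors.items():
            # Contingency table: predictor categories (rows) x target classes (columns)
            cats, classes = np.unique(x), np.unique(y)
            table = np.array([[np.sum((x == c) & (y == k)) for k in classes]
                              for c in cats])
            _, p, _, _ = chi2_contingency(table)
            # Simplified Bonferroni correction over the number of candidate predictors
            p_adj = min(1.0, p * len(predictors))
            if p_adj < alpha and (best is None or p_adj < best[1]):
                best = (name, p_adj)
        return best  # None means: stop growing the tree at this node

    # Toy usage: which predictor should split the root node?
    color = np.array(["red", "red", "blue", "blue", "green", "green", "red", "blue"] * 10)
    size = np.array(["S", "M", "S", "M", "S", "M", "M", "S"] * 10)
    churn = np.array([1, 1, 0, 0, 0, 1, 1, 0] * 10)
    print(best_chaid_split({"color": color, "size": size}, churn))

A multi-way split would then create one child node per (possibly merged) category of the winning predictor, and the procedure recurses until no predictor is significant.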
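For the EM episode, here is a compact sketch of the algorithm on a two-component 1-D Gaussian mixture, implementing the E-step/M-step alternation described above. The initialization and fixed iteration count are arbitrary choices for illustration; as the episode notes, EM may converge to a local maximum depending on where it starts.

    import numpy as np

    def em_gmm_1d(x, n_iter=50):
        """EM for a two-component 1-D Gaussian mixture model."""
        # Crude initialization; results depend on it (local maxima)
        mu = np.array([x.min(), x.max()], dtype=float)
        sigma = np.array([x.std(), x.std()]) + 1e-6
        pi = np.array([0.5, 0.5])  # mixing weights
        for _ in range(n_iter):
            # E-step: responsibility of each component for each point
            dens = (pi * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2)
                    / (sigma * np.sqrt(2 * np.pi)))
            resp = dens / dens.sum(axis=1, keepdims=True)
            # M-step: weighted MLE updates that maximize the expected
            # complete-data log-likelihood
            nk = resp.sum(axis=0)
            pi = nk / len(x)
            mu = (resp * x[:, None]).sum(axis=0) / nk
            sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk) + 1e-6
        return pi, mu, sigma

    # Recover two clusters from mixed samples
    rng = np.random.default_rng(0)
    x = np.concatenate([rng.normal(-2.0, 1.0, 300), rng.normal(3.0, 0.5, 200)])
    print(em_gmm_1d(x))

Each iteration provably does not decrease the observed-data likelihood, which is the monotonicity property highlighted in the 1977 paper.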

Ratings & Reviews

3.5 out of 5 (4 Ratings)
