Data Science Decoded

Mike E

We discuss seminal mathematical papers (sometimes really old 😎) that have shaped and established the fields of machine learning and data science as we know them today. The goal of the podcast is to introduce you to the evolution of these fields from a mathematical and slightly philosophical perspective. We will discuss the contribution of these papers not just from a pure math aspect but also how they influenced the discourse in the field, which areas were opened up as a result, and so on. Our podcast episodes are also available on our YouTube channel: https://youtu.be/wThcXx_vXjQ?si=vnMfs

  1. 6 HOURS AGO

    Data Science #34 - The original deep learning paper review, Rumelhart, Hinton & Williams (1986)

    On the 34th episode, we review the 1986 paper "Learning representations by back-propagating errors", which was pivotal because it provided a clear, generalized framework for training neural networks with internal 'hidden' units. The core of the procedure, back-propagation, repeatedly adjusts the weights of connections in the network to minimize the error between the actual and desired output vectors. Crucially, this process forces the hidden units, whose desired states are not specified, to develop distributed internal representations of the important features of the task domain. This capability to construct useful new features distinguishes back-propagation from earlier, simpler methods like the perceptron convergence procedure.

    The authors demonstrate its power on non-trivial problems, such as detecting mirror symmetry in an input vector and storing information about isomorphic family trees. By showing how the network generalizes correctly from one family tree to its Italian equivalent, the paper illustrated the algorithm's ability to capture the underlying structure of the task domain.

    Despite recognizing that the procedure was not guaranteed to find a global minimum due to local minima in the error surface, the paper's clear formulation (equations 1-9) and its successful demonstration of learning complex, non-linear representations served as a powerful catalyst. It fundamentally advanced the field of connectionism and became the standard, foundational algorithm used today to train multi-layered networks, or deep learning models, despite the earlier, lesser-known work by Werbos. A minimal back-propagation sketch in Python follows the episode list below.

    47 min
  2. MAY 23

    Data Science #29 - The Chi-squared Automatic Interaction Detection (CHAID) algorithm (1979)

    In the 29th episode, we go over the 1979 paper by Gordon Vivian Kass that introduced the CHAID algorithm. CHAID (Chi-squared Automatic Interaction Detection) is a tree-based partitioning method for exploring large categorical data sets by iteratively splitting records into mutually exclusive, exhaustive subsets based on the most statistically significant predictors rather than maximal explanatory power. Unlike its predecessor, AID, CHAID embeds each split in a chi-squared significance test (with Bonferroni-corrected thresholds), allows multi-way divisions, and handles missing or "floating" categories gracefully.

    In practice, CHAID proceeds by merging predictor categories that are least distinguishable (stepwise grouping) and then testing whether any compound categories merit a further split, ensuring parsimonious, stable groupings without overfitting. Through its significance-driven, multi-way splitting and built-in bias correction against predictors with many levels, CHAID yields intuitive decision trees that highlight the strongest associations in high-dimensional categorical data. A toy sketch of the category-merging step follows the episode list below.

    In modern data science, CHAID's core ideas recur in contemporary decision-tree algorithms (e.g., CART, C4.5) and ensemble methods like random forests, where statistical rigor in splitting criteria and robust handling of missing data remain critical. Its emphasis on automated, hypothesis-driven partitioning has influenced automated feature selection, interpretable machine learning, and scalable analytics workflows that transform raw categorical variables into actionable insights.

    41 min
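
To make the back-propagation procedure from episode #34 concrete, here is a minimal sketch in Python/NumPy of a network with one hidden layer trained on the mirror-symmetry task. The architecture, learning rate, and iteration count are illustrative assumptions rather than the paper's exact configuration, and, as the episode notes, plain gradient descent can stall in a local minimum rather than reach the global one.

```python
# A minimal back-propagation sketch on a mirror-symmetry task.
# Hyperparameters are illustrative, not the paper's exact setup.
import numpy as np

rng = np.random.default_rng(0)

# All 64 six-bit input vectors; the target is 1 iff a vector reads the same
# forwards and backwards (mirror symmetry about its centre).
X = np.array([[(i >> b) & 1 for b in range(6)] for i in range(64)], dtype=float)
y = np.array([[float(np.array_equal(row, row[::-1]))] for row in X])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer of two units, as in the paper's symmetry network.
W1 = rng.normal(0.0, 0.5, (6, 2)); b1 = np.zeros(2)
W2 = rng.normal(0.0, 0.5, (2, 1)); b2 = np.zeros(1)

lr = 0.5
for _ in range(20000):
    # Forward pass: compute hidden and output activations.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: propagate the error derivative layer by layer
    # (the repeated weight adjustment the episode describes).
    d_out = (out - y) * out * (1 - out)   # dE/d(net input) at the output
    d_h = (d_out @ W2.T) * h * (1 - h)    # dE/d(net input) at the hidden layer

    # Gradient-descent updates on weights and biases.
    W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h);   b1 -= lr * d_h.sum(axis=0)

print("final mean squared error:", float(np.mean((out - y) ** 2)))
```

The two deltas are the chain rule that the paper's equations formalize: the output error is scaled by the sigmoid derivative and passed backwards through the weights, assigning credit to hidden units whose desired states were never specified.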
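
Likewise, to make CHAID's stepwise grouping from episode #29 concrete, here is a toy Python sketch of the category-merging step: the pair of predictor categories whose response distributions are least distinguishable (highest chi-squared p-value) is merged, until every remaining pair differs significantly. The function name and example data are hypothetical, and this simplification omits parts of Kass's full procedure, notably the test for re-splitting compound categories and the Bonferroni-corrected comparison across competing predictors.

```python
# A toy sketch of CHAID-style stepwise grouping of predictor categories.
from itertools import combinations

import pandas as pd
from scipy.stats import chi2_contingency

def merge_categories(df, predictor, target, alpha=0.05):
    """Greedily merge categories of `predictor` that are statistically
    indistinguishable with respect to `target`."""
    groups = {c: [c] for c in df[predictor].unique()}  # each category alone
    while len(groups) > 1:
        best_p, best_pair = -1.0, None
        for a, b in combinations(groups, 2):
            sub = df[df[predictor].isin(groups[a] + groups[b])]
            labels = sub[predictor].map(lambda c: "a" if c in groups[a] else "b")
            table = pd.crosstab(labels, sub[target])
            # A single observed target value makes the pair indistinguishable.
            p = 1.0 if table.shape[1] < 2 else chi2_contingency(table)[1]
            if p > best_p:
                best_p, best_pair = p, (a, b)
        # Stop once even the most similar pair of groups differs significantly.
        if best_p <= alpha:
            break
        a, b = best_pair
        groups[a] = groups[a] + groups.pop(b)
    return list(groups.values())

# Hypothetical data: which regions behave alike with respect to `bought`?
df = pd.DataFrame({
    "region": ["N", "N", "S", "S", "E", "E", "W", "W"] * 10,
    "bought": [1, 0, 1, 0, 1, 1, 0, 0] * 10,
})
print(merge_categories(df, "region", "bought"))
```

Here "N" and "S" have identical response distributions and get merged, while "E" and "W" stay separate, which is the significance-driven parsimony the episode highlights.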

Ratings & Reviews

3.8 / 5 (5 Ratings)
