300 episodes

The Data Skeptic Podcast features interviews and discussion of topics related to data science, statistics, machine learning, artificial intelligence and the like, all from the perspective of applying critical thinking and the scientific method to evaluate the veracity of claims and efficacy of approaches.

Data Skeptic Kyle Polich

    • Science
    • 4.5, 428 Ratings

    Black Boxes Are Not Required

    Deep neural networks are undeniably effective. They rely on such a large number of parameters that they are appropriately described as “black boxes”.
    While black boxes lack desirable properties like interpretability and explainability, in some cases their accuracy makes them incredibly useful.
    But does achieving “usefulness” require a black box? Can we be sure an equally valid but simpler solution does not exist?
    Cynthia Rudin helps us answer that question. We discuss her recent paper with co-author Joanna Radin titled (spoiler warning)…
    Why Are We Using Black Box Models in AI When We Don’t Need To? A Lesson From An Explainable AI Competition

    • 32 min
    Robustness to Unforeseen Adversarial Attacks

    Daniel Kang joins us to discuss the paper Testing Robustness Against Unforeseen Adversaries.

    • 21 min
    Estimating the Size of Language Acquisition

    Frank Mollica joins us to discuss the paper Humans store about 1.5 megabytes of information during language acquisition

    • 25 min
    Interpretable AI in Healthcare

    Jayaraman Thiagarajan joins us to discuss the recent paper Calibrating Healthcare AI: Towards Reliable and Interpretable Deep Predictive Models.

    • 35 min
    Understanding Neural Networks

    What does it mean to understand a neural network? That’s the question posed in this arXiv paper. Kyle speaks with Tim Lillicrap about this and several other big questions.

    • 34 min
    Self-Explaining AI

    Dan Elton joins us to discuss self-explaining AI. What could be better than an interpretable model? How about a model which explains itself in a conversational way, engaging in a back-and-forth with the user?
    We discuss the paper Self-explaining AI as an alternative to interpretable AI, which presents a framework for self-explaining AI.

    • 32 min

Customer Reviews

4.5 out of 5
428 Ratings

Peaceful bird,

Good. Could be great.

This show is very good, but would be great if the co-host was more science-minded. As it stands, the mini episodes consist of Kyle explaining technical concepts to Linh Da, who is intended to be the layperson and prevent Kyle from getting too jargon-y. She is effective in that capacity.

However, quite a bit of time gets wasted with arguments that would mostly not occur if Kyle were speaking to, say, a trained biologist, or even an attorney. Because it gets pretty annoying, I have to keep my listening of the show fairly sparse.

Great show overall, and the deeper dives with guests are killer.

prime_player,

Very Informative Enjoyable To Listen To

I enjoy the topic and discussion Kyle generates. Very interesting material and interviews. Look forward to each episode.

Artemis_2,

great podcast

great podcast for introducing recent important works in machine learning
