47 min

Serg Masis on interpretable machine learning, process fairness vs. statistical fairness, how to measure interpretability, how to interpret neural networks, and how to increase the interpretability of a model
Infinite Machine Learning: Artificial Intelligence | Startups | Technology

Serg Masis is a Data Scientist in agriculture with a background in entrepreneurship and web/app development. He's the author of the book "Interpretable Machine Learning with Python". In addition to ML interpretability, he's passionate about explainable AI, behavioral economics, and ethical AI.

In this episode, we cover a range of topics including:
- How did he get into machine learning?
- What is interpretable ML?
- What is post hoc interpretability?
- Process fairness vs. statistical fairness
- How does an algorithm create a model?
- How does a model make predictions?
- What makes an ML model interpretable?
- How do you measure the interpretability of a model?
- How do parts of the model affect predictions?
- Does the method of interpretation depend on the model? Or can we apply a given method to a number of models?
- Can you explain a specific prediction from a model?
- What techniques can we use to interpret neural networks?
- What techniques are available to increase the interpretability of a model? 

