57 episodes

Gradient Dissent is a machine learning podcast hosted by Lukas Biewald that takes you behind-the-scenes to learn how industry leaders are putting deep learning models in production at Facebook, Google, Lyft, OpenAI, and more.

Gradient Dissent Lukas Biewald

    • Technology
    • 4.6 • 28 Ratings

    Pete Warden — Practical Applications of TinyML

    Pete is the Technical Lead of the TensorFlow Micro team, which works on deep learning for mobile and embedded devices.
    Lukas and Pete talk about hacking a Raspberry Pi to run AlexNet, the power and size constraints of embedded devices, and techniques to reduce model size. Pete also explains real world applications of TensorFlow Lite Micro and shares what it's been like to work on TensorFlow from the beginning.
    The complete show notes (transcript and links) can be found here: http://wandb.me/gd-pete-warden
    ---
    Connect with Pete:
    📍 Twitter: https://twitter.com/petewarden
    📍 Website: https://petewarden.com/
    ---
    Timestamps:
    0:00 Intro
    1:23 Hacking a Raspberry Pi to run neural nets
    13:50 Model and hardware architectures
    18:56 Training a magic wand
    21:47 Raspberry Pi vs Arduino
    27:51 Reducing model size
    33:29 Training on the edge
    39:47 What it's like to work on TensorFlow
    47:45 Improving datasets and model deployment
    53:05 Outro
    ---
    Subscribe and listen to our podcast today!
    👉 Apple Podcasts: http://wandb.me/apple-podcasts
    👉 Google Podcasts: http://wandb.me/google-podcasts
    👉 Spotify: http://wandb.me/spotify

    • 53 min
    Pieter Abbeel — Robotics, Startups, and Robotics Startups

    Pieter is the Chief Scientist and Co-founder at Covariant, where his team is building universal AI for robotic manipulation. Pieter also hosts The Robot Brains Podcast, in which he explores how far humanity has come in its mission to create conscious computers, mindful machines, and rational robots.
    Lukas and Pieter explore the state of affairs of robotics in 2021, the challenges of achieving consistency and reliability, and what it'll take to make robotics more ubiquitous. Pieter also shares some perspective on entrepreneurship, from how he knew it was time to commercialize Gradescope to what he looks for in co-founders to why he started Covariant.
    Show notes: http://wandb.me/gd-pieter-abbeel
    ---
    Connect with Pieter:
    📍 Twitter: https://twitter.com/pabbeel
    📍 Website: https://people.eecs.berkeley.edu/~pabbeel/
    📍 The Robot Brains Podcast: https://www.therobotbrains.ai/
    ---
    Timestamps:
    0:00 Intro
    1:15 The challenges of robotics
    8:10 Progress in robotics
    13:34 Imitation learning and reinforcement learning
    21:37 Simulated data, real data, and reliability
    27:53 The increasing capabilities of robotics
    36:23 Entrepreneurship and co-founding Gradescope
    44:35 The story behind Covariant
    47:50 Pieter's communication tips
    52:13 What Pieter's currently excited about
    55:08 Focusing on good UI and high reliability
    57:01 Outro

    • 57 min
    Chris Albon — ML Models and Infrastructure at Wikimedia

    In this episode we're joined by Chris Albon, Director of Machine Learning at the Wikimedia Foundation.
    Lukas and Chris talk about Wikimedia's approach to content moderation, what it's like to work in a place so transparent that even internal chats are public, how Wikimedia uses machine learning (spoiler: they use a lot of models to help editors), and why they're switching to Kubeflow and Docker. Chris also shares how his focus on outcomes has shaped his career and his approach to technical interviews.
    Show notes: http://wandb.me/gd-chris-albon
    ---
    Connect with Chris:
    - Twitter: https://twitter.com/chrisalbon
    - Website: https://chrisalbon.com/
    ---
    Timestamps:
    0:00 Intro
    1:08 How Wikimedia approaches moderation
    9:55 Working in the open and embracing humility
    16:08 Going down Wikipedia rabbit holes
    20:03 How Wikimedia uses machine learning
    27:38 Wikimedia's ML infrastructure
    42:56 How Chris got into machine learning
    46:43 Machine Learning Flashcards and technical interviews
    52:10 Low-power models and MLOps
    55:58 Outro

    • 56 min
    Emily M. Bender — Language Models and Linguistics

    In this episode, Emily and Lukas dive into the problems with bigger and bigger language models, the difference between form and meaning, the limits of benchmarks, and why it's important to name the languages we study.
    Show notes (links to papers and transcript): http://wandb.me/gd-emily-m-bender
    ---
    Emily M. Bender is a Professor of Linguistics and Faculty Director of the Master's Program in Computational Linguistics at the University of Washington. Her research areas include multilingual grammar engineering, variation (within and across languages), the relationship between linguistics and computational linguistics, and societal issues in NLP.
    ---
    Timestamps:
    0:00 Sneak peek, intro
    1:03 Stochastic Parrots
    9:57 The societal impact of big language models
    16:49 How language models can be harmful
    26:00 The important difference between linguistic form and meaning
    34:40 The octopus thought experiment
    42:11 Language acquisition and the future of language models
    49:47 Why benchmarks are limited
    54:38 Ways of complementing benchmarks
    1:01:20 The #BenderRule
    1:03:50 Language diversity and linguistics
    1:12:49 Outro

    • 1 hr 12 min
    Jeff Hammerbacher — From data science to biomedicine

    Jeff talks about building Facebook's early data team, founding Cloudera, and transitioning into biomedicine with Hammer Lab and Related Sciences.

    • 56 min
    Josh Bloom, Chair of Astronomy at UC Berkeley — The Link Between Astronomy and ML

    Josh explains how astronomy and machine learning have informed each other, their current limitations, and where their intersection goes from here.

    • 1 hr 8 min

Customer Reviews

4.6 out of 5
28 Ratings

bryanbischof,

Refreshingly realistic conversation about AI

Every episode is clear, honest conversation about ML and DL. No weird hyping or posturing; just strong content in understandable language.

humblylearning,

Always learning new things

Where rubber meets the road. That is what this podcast is all about. It shares very notable and relevant ML/AI real life situations with many lessons learned.

bulbul ahmmed,

Great host and guests

I like the way Lukas hosts his podcasts. Also, he invites guests who are the best in the industry.
