Vanishing Gradients
Hugo Bowne-Anderson

    • Technology
    • 26 episodes

A podcast about all things data, brought to you by data scientist Hugo Bowne-Anderson.
It's time for more critical conversations about the challenges in our industry in order to build better compasses for the solution space! To this end, this podcast will consist of long-format conversations between Hugo and other people who work broadly in the data science, machine learning, and AI spaces. We'll dive deep into all the moving parts of the data world, so if you're new to the space, you'll have an opportunity to learn from the experts. And if you've been around for a while, you'll find out what's happening in many other parts of the data world.

    Episode 26: Developing and Training LLMs From Scratch

    Hugo speaks with Sebastian Raschka, a machine learning & AI researcher, programmer, and author. As Staff Research Engineer at Lightning AI, he focuses on the intersection of AI research, software development, and large language models (LLMs).


    How do you build LLMs? How can you use them, both in prototype and production settings? What are the building blocks you need to know about?


    In this episode, we’ll tell you everything you need to know about LLMs but were too afraid to ask. We cover the entire LLM lifecycle: the skills you need to work with LLMs, the resources and hardware required, prompt engineering vs. fine-tuning vs. RAG, how to build an LLM from scratch, and much more.


    The idea here is not that you’ll need to use an LLM you’ve built from scratch, but that we’ll learn a lot about LLMs and how to use them in the process.


    Near the end we also did some live coding to fine-tune GPT-2 in order to create a spam classifier!
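
    For readers who want a feel for what that fine-tuning step involves, here is a minimal sketch (not Sebastian's notebook, which is linked below) that adapts GPT-2 into a binary spam classifier using the Hugging Face transformers and datasets libraries; the dataset name and hyperparameters are illustrative assumptions.

        # Minimal sketch: fine-tune GPT-2 as a spam classifier with Hugging Face.
        # Not the episode's from-scratch code; dataset and settings are assumptions.
        from datasets import load_dataset
        from transformers import (GPT2ForSequenceClassification, GPT2Tokenizer,
                                  Trainer, TrainingArguments)

        # SMS spam dataset: "sms" text column, "label" column (0 = ham, 1 = spam).
        data = load_dataset("sms_spam", split="train").train_test_split(test_size=0.2)

        tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
        tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default

        def tokenize(batch):
            return tokenizer(batch["sms"], truncation=True, padding="max_length",
                             max_length=128)

        tokenized = data.map(tokenize, batched=True)

        # Attach a 2-label classification head on top of the pretrained GPT-2 backbone.
        model = GPT2ForSequenceClassification.from_pretrained("gpt2", num_labels=2)
        model.config.pad_token_id = tokenizer.pad_token_id

        trainer = Trainer(
            model=model,
            args=TrainingArguments(output_dir="gpt2-spam", num_train_epochs=1,
                                   per_device_train_batch_size=8),
            train_dataset=tokenized["train"],
            eval_dataset=tokenized["test"],
        )
        trainer.train()
        print(trainer.evaluate())

    Sebastian's book and livestream build the model and training loop from scratch; the sketch above only shows the shape of the task (pretrained backbone plus a small classification head).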


    LINKS



    The livestream on YouTube
    Sebastian's website
    Machine Learning Q and AI: 30 Essential Questions and Answers on Machine Learning and AI by Sebastian
    Build a Large Language Model (From Scratch) by Sebastian
    PyTorch Lightning
    Lightning Fabric
    LitGPT
    Sebastian's notebook for finetuning GPT-2 for spam classification!
    The end of fine-tuning: Jeremy Howard on the Latent Space Podcast
    Our next livestream: How to Build Terrible AI Systems with Jason Liu
    Vanishing Gradients on Twitter
    Hugo on Twitter

    • 1 hr 51 min
    Episode 25: Fully Reproducible ML & AI Workflows

    Hugo speaks with Omoju Miller, a machine learning guru and founder and CEO of Fimio, where she is building 21st century dev tooling. In the past, she was Technical Advisor to the CEO at GitHub, spent time co-leading non-profit investment in Computer Science Education for Google, and served as a volunteer advisor to the Obama administration’s White House Presidential Innovation Fellows.


    We need open tools, open data, provenance, and the ability to build fully reproducible, transparent machine learning workflows. With the advent of closed-source, vendor-based APIs, and with compute becoming a form of gate-keeping, developer tools risk becoming commoditized and developers risk becoming mere consumers.


    We’ll talk about ideas for escaping these burgeoning walled gardens. We’ll dive into



    What fully reproducible ML workflows would look like, including git for the workflow build process,
    The need for loosely coupled and composable tools that embrace a UNIX-like philosophy,
    What a much more scientific toolchain would look like,
    What a future open-source commons for Generative AI could look like,
    What an open compute ecosystem could look like,
    How to create LLMs and tooling so everyone can use them to build production-ready apps,


    And much more!


    LINKS



    The livestream on YouTube
    Omoju on Twitter
    Hugo on Twitter
    Vanishing Gradients on Twitter
    Lu.ma Calendar that includes details of Hugo's European Tour for Outerbounds
    Blog post that includes details of Hugo's European Tour for Outerbounds

    • 1 hr 20 min
    Episode 24: LLM and GenAI Accessibility

    Hugo speaks with Johno Whitaker, a Data Scientist/AI Researcher doing R&D with answer.ai. His current focus is on generative AI, flitting between different modalities. He also likes teaching and making courses, having worked with both Hugging Face and fast.ai in these capacities.


    Johno recently reminded Hugo how hard everything was 10 years ago: “Want to install TensorFlow? Good luck. Need data? Perhaps try ImageNet. But now you can use big models from Hugging Face with hi-res satellite data and do all of this in a Colab notebook. Or think ecology and vision models… or medicine and multimodal models!”
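
    As a concrete (and hedged) illustration of that accessibility point, not code from the episode: a pretrained model from the Hugging Face Hub can now be applied in a few lines, for example in a Colab notebook. The model name and example text below are illustrative.

        # Illustrative only: run a pretrained Hub model in a few lines of Python.
        from transformers import pipeline

        # Zero-shot text classification with a pretrained NLI model from the Hub.
        classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")
        result = classifier(
            "Deforestation detected in the northern region of the reserve.",
            candidate_labels=["ecology", "medicine", "finance"],
        )
        print(result["labels"][0], round(result["scores"][0], 3))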


    We talk about where we’ve come from regarding tooling and accessibility for foundation models, ML, and AI, where we are, and where we’re going. We’ll delve into



    What the Generative AI mindset is, in terms of using atomic building blocks, and how it evolved from both the data science and ML mindsets;
    How fast.ai democratized access to deep learning, what successes they had, and what was learned;
    The moving parts now required to make GenAI and ML as accessible as possible;
    The importance of focusing on UX and the application in the world of generative AI and foundation models;
    The skillset and toolkit needed to be an LLM and AI guru;
    What they’re up to at answer.ai to democratize LLMs and foundation models.


    LINKS



    The livestream on YouTube
    Zindi, the largest professional network for data scientists in Africa
    A new old kind of R&D lab: Announcing Answer.AI
    Why and how I’m shifting focus to LLMs by Johno Whitaker
    Applying AI to Immune Cell Networks by Rachel Thomas
    Replicate -- a cool place to explore GenAI models, among other things
    Hands-On Generative AI with Transformers and Diffusion Models
    Johno on Twitter
    Hugo on Twitter
    Vanishing Gradients on Twitter
    SciPy 2024 CFP
    Escaping Generative AI Walled Gardens with Omoju Miller, a Vanishing Gradients Livestream

    • 1 hr 30 min
    Episode 23: Statistical and Algorithmic Thinking in the AI Age

    Hugo speaks with Allen Downey, a curriculum designer at Brilliant, Professor Emeritus at Olin College, and the author of Think Python, Think Bayes, Think Stats, and other computer science and data science books. In 2019-20 he was a Visiting Professor at Harvard University. He previously taught at Wellesley College and Colby College and was a Visiting Scientist at Google. He is also the author of the upcoming book Probably Overthinking It!


    They discuss Allen's new book and the key statistical and data skills we all need to navigate an increasingly data-driven and algorithmic world. The goal was to dive deep into the statistical paradoxes and fallacies that get in the way of using data to make informed decisions.


    For example, when it was reported in 2021 that “in the United Kingdom, 70-plus percent of the people who die now from COVID are fully vaccinated,” this was correct but the implication was entirely wrong. Their conversation jumps into many such concrete examples to get to the bottom of using data for more than “lies, damned lies, and statistics.” They cover



    Information and misinformation around pandemics and the base rate fallacy (a back-of-the-envelope sketch of the vaccination example follows this list);
    The tools we need to comprehend the small probabilities of high-risk events such as stock market crashes, earthquakes, and more;
    The many definitions of algorithmic fairness, why they can't all be met at once, and what we can do about it;
    Public health, the need for robust causal inference, and variations on Berkson’s paradox, such as the low-birthweight paradox: an influential paper found that, among low-birthweight babies, mortality is lower for children of smokers;
    Why none of us are normal in any sense of the word, both in physical and psychological measurements;
    The inspection paradox, which shows up in the criminal justice system and distorts our perception of prison sentences and the risk of repeat offenders.
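
    To make the base rate point concrete, here is a back-of-the-envelope calculation with assumed (not UK) numbers: when nearly everyone is vaccinated, the vaccinated can account for most deaths even though vaccination sharply reduces each individual's risk.

        # Assumed illustrative numbers, not the actual UK figures.
        p_vaccinated = 0.95          # share of the population vaccinated
        baseline_risk = 0.001        # death risk if unvaccinated
        effectiveness = 0.90         # vaccination reduces death risk by 90%

        deaths_vax = p_vaccinated * baseline_risk * (1 - effectiveness)
        deaths_unvax = (1 - p_vaccinated) * baseline_risk

        share_vaccinated_deaths = deaths_vax / (deaths_vax + deaths_unvax)
        print(f"{share_vaccinated_deaths:.0%} of deaths are among the vaccinated")
        # ~66%: a majority of deaths are vaccinated people, yet vaccination
        # still cuts an individual's risk of death by a factor of 10.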


    LINKS



    The livestream on YouTube
    Allen Downey on Github
    Allen's new book Probably Overthinking It!
    Allen on Twitter
    Prediction-Based Decisions and Fairness: A Catalogue of Choices, Assumptions, and Definitions by Mitchell et al.

    • 1 hr 20 min
    Episode 22: LLMs, OpenAI, and the Existential Crisis for Machine Learning Engineering

    Jeremy Howard (Fast.ai), Shreya Shankar (UC Berkeley), and Hamel Husain (Parlance Labs) join Hugo Bowne-Anderson to talk about how LLMs and OpenAI are changing the worlds of data science, machine learning, and machine learning engineering.


    Jeremy Howard is co-founder of fast.ai, an ex-Chief Scientist at Kaggle, and creator of the ULMFiT approach on which all modern language models are based. Shreya Shankar is at UC Berkeley, ex-Google Brain, Facebook, and Viaduct. Hamel Husain runs his own generative AI and LLM consultancy, Parlance Labs, and was previously at Outerbounds, GitHub, and Airbnb.


    They talk about



    How LLMs shift the nature of the work we do in DS and ML,
    How they change the tools we use,
    The ways in which they could displace the role of traditional ML (e.g., will we stop using XGBoost any time soon?),
    How to navigate all the new tools and techniques,
    The trade-offs between open and closed models,
    Reactions to the recent OpenAI Dev Day and the increasing existential crisis for ML.


    LINKS



    The panel on YouTube
    Hugo and Jeremy's upcoming livestream on what the hell happened recently at OpenAI, among many other things
    Vanishing Gradients on YouTube
    Vanishing Gradients on Twitter

    • 1 hr 20 min
    Episode 21: Deploying LLMs in Production: Lessons Learned

    Hugo speaks with Hamel Husain, a machine learning engineer who loves building machine learning infrastructure and tools 👷. Hamel leads and contributes to many popular open-source machine learning projects. He also has extensive experience (20+ years) as a machine learning engineer across various industries, including large tech companies like Airbnb and GitHub. At GitHub, he led CodeSearchNet, a large language model for semantic search that was a precursor to Copilot. Hamel is the founder of Parlance Labs, a research and consulting firm focused on LLMs.


    They talk about generative AI, large language models, the business value they can generate, and how to get started.


    They delve into



    Where Hamel is seeing the most business interest in LLMs (spoiler: the answer isn’t only tech);
    Common misconceptions about LLMs;
    The skills you need to work with LLMs and GenAI models;
    Tools and techniques, such as fine-tuning, RAG (retrieval-augmented generation), LoRA, hardware, and more (a minimal RAG sketch follows this list);
    Vendor APIs vs OSS models.
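
    Since retrieval-augmented generation (RAG) comes up here and throughout these episodes, here is a minimal sketch of the core idea: retrieve the most relevant documents by embedding similarity and place them in the prompt. It uses the sentence-transformers library; the documents and model name are illustrative, and the final LLM call is left as a comment because any specific API would be an assumption.

        # Minimal RAG sketch: embed documents, retrieve by cosine similarity,
        # and assemble a grounded prompt. Illustrative only.
        import numpy as np
        from sentence_transformers import SentenceTransformer

        docs = [
            "Our refund policy allows returns within 30 days of purchase.",
            "Support hours are 9am to 5pm, Monday through Friday.",
            "Shipping to Europe typically takes 5 to 7 business days.",
        ]

        model = SentenceTransformer("all-MiniLM-L6-v2")
        doc_vecs = model.encode(docs, normalize_embeddings=True)

        question = "How long do I have to return an item?"
        q_vec = model.encode([question], normalize_embeddings=True)[0]

        # Cosine similarity reduces to a dot product on normalized vectors.
        scores = doc_vecs @ q_vec
        top_doc = docs[int(np.argmax(scores))]

        prompt = (
            "Answer the question using only the context below.\n"
            f"Context: {top_doc}\n"
            f"Question: {question}"
        )
        print(prompt)  # In practice, this prompt is sent to an LLM of your choice.

    The design contrast with the other techniques in the list: fine-tuning and LoRA change the model's weights, while RAG changes only what the model sees at inference time.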


    LINKS



    Our upcoming livestream LLMs, OpenAI Dev Day, and the Existential Crisis for Machine Learning Engineering with Jeremy Howard (Fast.ai), Shreya Shankar (UC Berkeley), and Hamel Husain (Parlance Labs): Sign up for free!
    Our recent livestream Data and DevOps Tools for Evaluating and Productionizing LLMs with Hamel and Emil Sedgh, Lead AI engineer at Rechat -- in it, we showcase an actual industrial use case that Hamel and Emil are working on with Rechat, a real estate CRM, taking you through LLM workflows and tools.
    Extended Guide: Instruction-tune Llama 2 by Philipp Schmid
    The livestream recording of this episode!
    Hamel on Twitter

    • 1 hr 8 min
