26 episodes

Distilling the major events and challenges in the world of artificial intelligence and machine learning, from Thomas Krendl Gilbert and Nathan Lambert.

The Retort AI Podcast, by Thomas Krendl Gilbert and Nathan Lambert

    • Technology
    • 4.8 • 8 Ratings

    Llama 3: Can't Compete with a Capuchin

    Tom and Nate cover the state of the industry after Llama 3. Is Zuck the best storyteller in AI? Is he the best CEO? Are CEOs doing anything other than buying compute? We cover what it means to be successful at the highest level this week. 
    Links:
    - Dwarkesh interview with Zuck: https://www.dwarkeshpatel.com/p/mark-zuckerberg
    - Capuchin monkey: https://en.wikipedia.org/wiki/Capuchin_monkey
    00:00 Introductions & advice from a wolf
    00:45 Llama 3
    07:15 Resources and investment required for large language models
    14:10 What it means to be a leader in the rapidly evolving AI landscape
    22:07 How much of AI progress is driven by stories vs resources
    29:41 Critiquing the concept of Artificial General Intelligence (AGI)
    38:10 Misappropriation of the term AGI by tech leaders
    42:09 The future of open models and AI development

    • 46 min
    Into the AI Trough of Disillusionment

    Tom and Nate catch up after a few weeks off the pod. We discuss what it means for open models to keep getting bigger and for the pace of releases to keep accelerating. In some ways, this disillusionment is a great way to zoom out to the big picture. These models are coming. These models are getting cheaper. We need to think about risks and infrastructure more than open vs. closed.
    00:00 Introduction
    01:16 Recent developments in open model releases
    04:21 Tom's experience viewing the total solar eclipse
    09:38 The Three-Body Problem book and Netflix
    14:06 The Gartner Hype Cycle
    22:51 Infrastructure constraints on scaling AI
    28:47 Metaphors and narratives around AI risk
    34:43 Rethinking AI risk as public health problems
    37:37 The "one-way door" nature of releasing open model weights
    44:04 The relationship between the AI ecosystem and the models
    48:24 Wrapping up the discussion in the "trough of disillusionment"
    We've got some links for you again:
    - Gartner hype cycle: https://en.wikipedia.org/wiki/Gartner_hype_cycle
    - MSFT Supercomputer: https://www.theinformation.com/articles/microsoft-and-openai-plot-100-billion-stargate-ai-supercomputer
    - Safety is about systems: https://www.aisnakeoil.com/p/ai-safety-is-not-a-model-property
    - Earth day history: https://www.earthday.org/history/
    - For our loyal listeners: http://tudorsbiscuitworld.com/

    • 51 min
    AI's Eras Tour: Performance, Trust, and Legitimacy

    Tom and Nate catch up on the ridiculousness of Nvidia GTC, the lack of trust in AI, and some important taxonomies and politics around governing AI. Safety institutes, reward model benchmarks, Nathan's bad joke delivery, and all the usual good stuff in this episode! Yes, we're also sick of the Taylor Swift jokes, but they get the clicks.
    The Taylor moment: https://twitter.com/DrJimFan/status/1769817948930072930
    00:00 Intros and discussion on NVIDIA's influence in AI and the Bay Area
    09:08 Mustafa Suleyman's new role and discussion on AI safety
    11:31 The shift from performance to trust in AI evaluation
    17:31 The role of government agencies in AI policy and regulation
    24:07 The role of accreditation in establishing legitimacy and trust
    32:11 Grok's open source release and its impact on the AI community
    39:34 Responsibility and accountability in AI and social media platforms

    • 46 min
    Claude 3: Is Nathan too bought into the hype?

    Tom and Nate sit down to discuss Claude 3 and some updates on what it means to be open. Not surprisingly, we get into debating some different views. We cover Dune 2's impact on AI and have a brief giveaway at the end. Cheers!
    More at retortai.com. Contact us at mail at domain.
    Some topics:
    - The pace of progress in AI and whether it feels meaningful or like "progress fatigue" to different groups
    - The role of hype and "vibes" in driving interest and investment in new AI models 
    - Whether the value being created by large language models is actually just being concentrated in a few big tech companies
    - The debate around whether open source AI is feasible given the massive compute requirements
    - The limitations of "open letters" and events with Chatham House rules as forms of politics and accountability around AI
    - The analogy between the AI arms race and historical arms races like the dreadnought naval arms race
    - The role of narratives, pop culture, and "priesthoods" in shaping public understanding of AI

    Chapters & transcript partially created with https://github.com/FanaHOVA/smol-podcaster.
    00:00 Introduction and the spirit of open source
    04:32 Historical parallels of technology arms races
    10:26 The practical use of language models and their impact on society
    22:21 The role and potential of open source in AI development
    28:05 The challenges of achieving coordination and scale in open AI development
    34:18 Pop culture's influence on the AI conversation, specifically through "Dune"

    • 43 min
    Model release therapy session #1

    This week Tom and Nate cover all the big topics from the big-picture lens. Sora, Gemini 1.5's context length, Gemini's bias backlash, Gemma open models; it was a busy week in AI. We come to the conclusion that we can no longer trust a lot of these big companies to do much. We are the gladiators playing to the crowd of AI. This was a great one; I'm proud of one of Tom's all-time best jokes. Thanks for listening, and reach out with any questions.

    • 52 min
    Waymo vs. the time honored human experiences, vandalism and defacement

    A metaphor episode! We are trying to figure out how much the Waymo incident is or is not about AI. We bring back our Berkeley roots and talk about traditions in the Bay around distributed technology. Scooters and robots are not safe in this episode, sadly. Here's the link to the Verge piece Tom read from: https://www.theverge.com/2024/2/11/24069251/waymo-driverless-taxi-fire-vandalized-video-san-francisco-china-town

    • 37 min

Customer Reviews

4.8 out of 5
8 Ratings


(-&(-: ,

Subtle Fun, Engaging, and Informative

Tom and Nathan have a good connection going on that brings creative cultural moment to describe the mania of ai. It’s dry humor and I love it! They sound so level, take ai seriously, but no mess. Thank you for this!

Vikram Sreekanti ,

Sizzling insights on the state of AI

Top-notch takes on the state of AI and insights into what’s actually going on with the LLM craze
