The Retort AI Podcast
Thomas Krendl Gilbert and Nathan Lambert

    • Technology
    • 4.8 • 9 Ratings
    • 29 episodes

Distilling the major events and challenges in the world of artificial intelligence and machine learning, from Thomas Krendl Gilbert and Nathan Lambert.

    Murky waters in AI policy

    Tom and Nate catch up on many recent AI policy happenings: California's "anti open source" SB 1047, the Senate AI roadmap, Google's search snafu, OpenAI's normal nonsense, and reader feedback! A bit of a mailbag. Enjoy.
    00:00 Murky waters in AI policy
    00:33 The Senate AI Roadmap
    05:14 The Executive Branch Takes the Lead
    08:33 California's Senate AI Bill
    22:22 OpenAI's Two Audiences
    28:53 The Problem with OpenAI Model Spec
    39:50 A New World of AI Regulation
    A bunch of links...
    Data and Society whitepaper: https://static1.squarespace.com/static/66465fcd83d1881b974fe099/t/664b866c9524f174acd7931c/1716225644575/24.05.18+-+AI+Shadow+Report+V4.pdf
    Senate shadow report: https://senateshadowreport.com/
    California bill: https://www.hyperdimensional.co/p/california-senate-passes-sb-1047 and https://legiscan.com/CA/text/SB1047/id/2999979
    Data walls: https://www.interconnects.ai/p/the-data-wall
    Interconnects merch: https://interconnects.myshopify.com/

    • 43 min
    ChatGPT talks: diamond of the season or quite the scandal?

    Tom and Nate discuss two major OpenAI happenings from the last week: the popular one, the chat assistant, and what it reveals about OpenAI's worldview. We pair this with a discussion of OpenAI's new Model Spec, which details their RLHF goals: https://cdn.openai.com/spec/model-spec-2024-05-08.html
    This is a monumental week for AI. The product transition is complete; we can't just be researchers anymore.
    00:00 Guess the Donkey Kong Character
    00:50 OpenAI's New AI Girlfriend
    07:08 OpenAI's Business Model and Responsible AI
    08:45 GPT-2 Chatbot Thing and OpenAI's Weirdness
    12:48 OpenAI and the Mystery Box
    19:10 The Blurring Boundaries of Intimacy and Technology
    22:05 Rousseau's Discourse on Inequality and the Impact of Technology
    26:16 OpenAI's Model Spec and Its Objectives
    30:10 The Unintelligibility of "Benefiting Humanity"
    37:01 The Chain of Command and the Paradox of AI Love
    45:46 The Form and Content of OpenAI's Model Spec
    48:51 The Future of AI and Societal Disruptions

    • 51 min
    Three pillars of AI power

    Tom and Nate discuss the shifting power landscape in AI. They try to discern what is special about Silicon Valley's grasp on the ecosystem and what other types of power (e.g. those in New York and Washington DC) will do to mobilize their influence. 
    Here's the one Tweet we referenced on the FAccT community: https://twitter.com/KLdivergence/status/1653843497932267520
    00:00 Introduction and Cryptozoologists
    02:00 DC and the National AI Research Resource (NAIR)
    05:34 The Three Legs of the AI World: Silicon Valley, New York, and DC
    11:00 The AI Safety vs. Ethics Debate
    13:42 The Rise of the Third Entity: The Government's Role in AI
    19:42 New York's Influence and the Power of Narrative
    29:36 Silicon Valley's Insularity and the Need for Regulation
    36:50 The Amazon Antitrust Paradox and the Shifting Landscape
    48:20 The Energy Conundrum and the Need for Policy Solutions
    56:34 Conclusion: Finding Common Ground and Building a Better Future for AI

    • 56 min
    Llama 3: Can't Compete with a Capuchin

    Tom and Nate cover the state of the industry after Llama 3. Is Zuck the best storyteller in AI? Is he the best CEO? Are CEOs doing anything other than buying compute? We cover what it means to be successful at the highest level this week. 
    Links:
    Dwarkesh interview with Zuck: https://www.dwarkeshpatel.com/p/mark-zuckerberg
    Capuchin monkey: https://en.wikipedia.org/wiki/Capuchin_monkey
    00:00 Introductions & advice from a wolf
    00:45 Llama 3
    07:15 Resources and investment required for large language models
    14:10 What it means to be a leader in the rapidly evolving AI landscape
    22:07 How much of AI progress is driven by stories vs resources
    29:41 Critiquing the concept of Artificial General Intelligence (AGI)
    38:10 Misappropriation of the term AGI by tech leaders
    42:09 The future of open models and AI development

    • 46 min
    Into the AI Trough of Disillusionment

    Tom and Nate catch up after a few weeks off the pod. We discuss what it means for open models to keep getting bigger and being released faster. In some ways, this disillusionment is a great way to zoom out to the big picture. These models are coming. These models are getting cheaper. We need to think about risks and infrastructure more than open vs. closed.
    00:00 Introduction
    01:16 Recent developments in open model releases
    04:21 Tom's experience viewing the total solar eclipse
    09:38 The Three-Body Problem book and Netflix
    14:06 The Gartner Hype Cycle
    22:51 Infrastructure constraints on scaling AI
    28:47 Metaphors and narratives around AI risk
    34:43 Rethinking AI risk as public health problems
    37:37 The "one-way door" nature of releasing open model weights
    44:04 The relationship between the AI ecosystem and the models
    48:24 Wrapping up the discussion in the "trough of disillusionment"
    We've got some links for you again:
    - Gartner hype cycle: https://en.wikipedia.org/wiki/Gartner_hype_cycle
    - MSFT supercomputer: https://www.theinformation.com/articles/microsoft-and-openai-plot-100-billion-stargate-ai-supercomputer
    - Safety is about systems: https://www.aisnakeoil.com/p/ai-safety-is-not-a-model-property
    - Earth Day history: https://www.earthday.org/history/
    - For our loyal listeners: http://tudorsbiscuitworld.com/

    • 51 min
    AI's Eras Tour: Performance, Trust, and Legitimacy

    Tom and Nate catch up on the ridiculousness of Nvidia GTC, the lack of trust in AI, and some important taxonomies and politics around governing AI. Safety institutes, reward model benchmarks, Nathan's bad joke delivery, and all the normal good stuff in this episode! Yes, we're also sick of the Taylor Swift jokes, but they get the clicks.
    The Taylor moment: https://twitter.com/DrJimFan/status/1769817948930072930
    00:00 Intros and discussion on NVIDIA's influence in AI and the Bay Area
    09:08 Mustafa Suleyman's new role and discussion on AI safety
    11:31 The shift from performance to trust in AI evaluation
    17:31 The role of government agencies in AI policy and regulation
    24:07 The role of accreditation in establishing legitimacy and trust
    32:11 Grok's open source release and its impact on the AI community
    39:34 Responsibility and accountability in AI and social media platforms

    • 46 min

Customer Reviews

4.8 out of 5
9 Ratings

(-&(-:

Subtle Fun, Engaging, and Inform

Tom and Nathan have a good connection going that brings creative cultural moments to describing the mania of AI. It's dry humor and I love it! They sound so level, take AI seriously, but no mess. Thank you for this!

Vikram Sreekanti

Sizzling insights on the state of AI

Top-notch takes on the state of AI and insights into what’s actually going on with the LLM craze
