AI Summer

Timothy B. Lee and Dean W. Ball

Tim Lee and Dean Ball interview leading experts about the future of AI technology and policy. www.aisummer.org

  1. Nat Purser explains how progressives are thinking about AI

    3D AGO

    Tim talks to Nat Purser, a tech policy advocate at Public Knowledge and a veteran of Democratic campaigns, about how policymakers on the left side of the political spectrum view AI. Purser describes a Democratic landscape split between those who see AI as a real but threatening force and those who dismiss it as another crypto-style bubble. She traces how Sen. Bernie Sanders broke from the pack by treating AI as genuinely transformative—meeting with AI safety figures like Eliezer Yudkowsky and Nate Soares, proposing a federal data center moratorium with Rep. Alexandria Ocasio-Cortez, and openly saying he uses Claude himself. Purser contrasts this with the dismissive attitude she sometimes encounters among progressive elites. She also details the fractures within labor: Hollywood actors and writers see AI as an existential threat to creativity, while construction unions welcome data center jobs. On the legislative front, she recounts how a bipartisan coalition crushed Ted Cruz’s ten-year preemption of state AI laws in a 99–1 vote, and argues that narrowly scoped preemption paired with federal standards is the only defensible approach. Purser predicts the "stochastic parrots" camp — those who dismiss AI as mere corporate hype — will lose influence as AI capabilities grow. But it’s too early to say whether Democratic leaders, including the next Democratic presidential nominee, will embrace Sanders’s apocalyptic framing or take a more conventional approach focused on issues like privacy and nondiscrimination. This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit www.aisummer.org

    1h 18m
  2. Ryan Avent on self-driving cars and the future of the labor market

    MAR 22

    Author Ryan Avent joins Tim to revisit a bet they made 16 years ago—and to ask whether the lessons of self-driving cars apply to modern AI. Back in 2010, Avent wagered that his newborn daughter would never need a driver’s license thanks to self-driving cars. Tim bet she would and ultimately won $500. But he was right for the wrong reasons. Tim assumed regulation would be a major obstacle to progress in self-driving technology, but logistical challenges and a long tail of edge cases have done more to hamper Waymo’s growth. The parallel to LLMs is striking: ChatGPT’s early demos convinced many people that we were close to human-level intelligence, just as Google’s early autonomous vehicle demos convinced people we were close to human-level driving. But deployment of LLMs is bottlenecked by everything from data center buildouts to the glacial pace at which large organizations reorganize around new tools. Avent, who wrote The Wealth of Humans in 2016 and has a new book on social capital arriving in April, argues that AI’s deepest impact won’t be unemployment but a wholesale reshuffling of status. White-collar professionals may face the same loss of prestige that blue-collar workers experienced a generation ago. Tim pushes back with an optimistic take: if the college wage premium compresses, the long-run equilibrium might actually be more egalitarian, echoing the mid-20th-century economy some people remember fondly. But we only got to that economy after two world wars and decades of organizing by the labor movement. Could today’s transition be equally turbulent?

    1h 5m
  3. AI safety in India, AV operators in the Philippines

    FEB 16

    Dean recorded this episode as he was preparing to attend the India AI Impact Summit — the fourth iteration of an annual gathering that has transformed from an intimate AI Safety Summit with heads of state to something resembling a tech industry trade show. The shift in branding, from “safety” to “action” to “impact,” reflects a broader vibe shift in how elites talk about AI risk, and Dean worries that we may have overcorrected. Dean argues that the mainstream AI governance community is focused on the wrong priorities. While policymakers worldwide draft hundreds of bills on algorithmic discrimination and mental health chatbots, they’re ignoring the genuinely urgent questions about automated AI R&D and catastrophic risk. He supports SB53, California’s new responsible scaling policy law, but thinks the real gap is verification — we need something like financial auditing for AI safety commitments, not Twitter fights over whether OpenAI followed its own responsible scaling policy. The alternative, a Josh Hawley-style licensing regime run by the Department of Energy, strikes Dean as repeating the FDA’s mistakes. We also discuss a viral video clip of Senator Ed Markey (D-MA) grilling a Waymo executive about Philippines-based remote operators. Tim argues there are legitimate reasons to prefer U.S.-based operators for safety-critical roles. The episode closes with a question that haunts both of us: are we too wealthy and comfortable to tolerate the messiness of another industrial revolution?

    1h 4m
  4. Dean is back!

    FEB 8

    Dean Ball is back. In April 2025, Dean left the podcast to join the White House Office of Science and Technology Policy, where he spent four months working on the Trump administration’s AI policies—including executive orders, the AI action plan, and AI geopolitics. He’s since returned to independent writing and research, and at the end of 2025, he and his wife welcomed their first child. In this episode, we catch up on what’s changed in AI over the past ten months. Dean makes the case that coding agents like Claude Code represent something close to digital AGI: models that can reliably do pretty much anything a human can do on a computer, as long as you know what to ask. He describes projects he’s built—from automated state legislation monitoring to due diligence reports on real estate—that would have been impossible a year ago. Tim is more measured, noting that users still provide crucial architectural guidance and that the models still struggle with long-horizon planning. The conversation turns to what happens when AI starts automating AI research itself. Dean expects significant speedups as models take over routine experimentation and code-writing at frontier labs, but he’s skeptical of the “intelligence explosion” scenario. We discuss why the physical world keeps fighting back against exponential improvement, why discoveries follow heavy-tailed distributions, and why—despite all the hype—the world probably won’t feel fundamentally different by June.

    1h
