AI Summer

Timothy B. Lee and Dean W. Ball

Tim Lee and Dean Ball interview leading experts about the future of AI technology and policy. www.aisummer.org

  1.

    AI safety in India, AV operators in the Philippines

    Dean recorded this episode as he was preparing to attend the India AI Impact Summit — the fourth iteration of an annual gathering that has transformed from an intimate AI Safety Summit with heads of state to something resembling a tech industry trade show. The shift in branding, from “safety” to “action” to “impact,” reflects a broader vibe shift in how elites talk about AI risk, and Dean worries that we may have overcorrected. Dean argues that the mainstream AI governance community is focused on the wrong priorities. While policymakers worldwide draft hundreds of bills on algorithmic discrimination and mental health chatbots, they’re ignoring the genuinely urgent questions about automated AI R&D and catastrophic risk. He supports SB53, California’s new responsible scaling policy law, but thinks the real gap is verification — we need something like financial auditing for AI safety commitments, not Twitter fights over whether OpenAI followed its own responsible scaling policy. The alternative, a Josh Hawley-style licensing regime run by the Department of Energy, strikes Dean as repeating the FDA’s mistakes. We also discuss a viral video clip of Senator Ed Markey (D-MA) grilling a Waymo executive about Philippines-based remote operators. Tim argues there are legitimate reasons to prefer U.S.-based operators for safety-critical roles. The episode closes with a question that haunts both of us: are we too wealthy and comfortable to tolerate the messiness of another industrial revolution? This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit www.aisummer.org

    1h 4m
  2. FEB 8

    Dean is back!

    Dean Ball is back. In April 2025, Dean left the podcast to join the White House Office of Science and Technology Policy, where he spent four months working on the Trump administration’s AI policies—including executive orders, the AI action plan, and AI geopolitics. He’s since returned to independent writing and research, and at the end of 2025, he and his wife welcomed their first child. In this episode, we catch up on what’s changed in AI over the past ten months. Dean makes the case that coding agents like Claude Code represent something close to digital AGI: models that can reliably do pretty much anything a human can do on a computer, as long as you know what to ask. He describes projects he’s built—from automated state legislation monitoring to due diligence reports on real estate—that would have been impossible a year ago. Tim is more measured, noting that users still provide crucial architectural guidance and that the models still struggle with long-horizon planning. The conversation turns to what happens when AI starts automating AI research itself. Dean expects significant speedups as models take over routine experimentation and code-writing at frontier labs, but he’s skeptical of the “intelligence explosion” scenario. We discuss why the physical world keeps fighting back against exponential improvement, why discoveries follow heavy-tailed distributions, and why—despite all the hype—the world probably won’t feel fundamentally different by June.

    1h
  3. MAR 19, 2025

    James Grimmelmann on the copyright threat to AI companies

    James Grimmelmann is a professor of law at Cornell University and a leading expert on copyright law. Grimmelmann walks through the complex process courts use to determine whether training AI models on copyrighted materials—like OpenAI using New York Times articles—is infringement or fair use. He highlights key precedents like the Google Books case, emphasizing how courts weigh transformative uses against potential market harms. The discussion addresses the nuances of generative AI, notably cases where models inadvertently reproduce large excerpts from training materials. Grimmelmann argues that while the industry has largely addressed explicit "regurgitation," ambiguity remains around subtler forms of copying, particularly with image-generating models, which could substantially impact copyright holders like Getty Images. Grimmelmann and the hosts delve into potential legal outcomes, including moderate rulings that force licensing agreements and harsher ones that could significantly restrict the availability of open-source AI models. The interview also touches on Congress's historical reluctance to intervene in contentious digital copyright issues, leaving critical decisions to be gradually shaped by court rulings. Dean and Tim conclude that while an outright shutdown of generative AI by courts is improbable, the forthcoming legal decisions will likely reshape the industry's structure, potentially favoring larger companies capable of negotiating extensive licensing deals. Grimmelmann anticipates initial district court rulings within the year and appellate decisions by 2026, setting the stage for a pivotal shift in how AI companies use copyrighted works.

    53m


