Scaling Laws

Lawfare & University of Texas Law School

Scaling Laws explores (and occasionally answers) the questions that keep OpenAI’s policy team up at night, the ones that motivate legislators to host hearings on AI and draft new AI bills, and the ones that are top of mind for tech-savvy law and policy students. Co-hosts Alan Rozenshtein, Professor at Minnesota Law and Research Director at Lawfare, and Kevin Frazier, AI Innovation and Law Fellow at the University of Texas and Senior Editor at Lawfare, dive into the intersection of AI, innovation policy, and the law through regular interviews with the folks deep in the weeds of developing, regulating, and adopting AI. They also provide regular rapid-response analysis of breaking AI governance news. Hosted on Acast. See acast.com/privacy for more information.

  1. 3 days ago

    Why AI Needs Independent Auditors, with Miles Brundage

    Alan Rozenshtein, research director at Lawfare, spoke with Miles Brundage, founding executive director of the AI Verification and Evaluation Research Institute (AVERI) and former senior advisor for AGI readiness at OpenAI, about the state of AI safety and accountability and AVERI's vision for independent third-party auditing of frontier AI companies. The conversation covered the weaknesses of current AI regulations, including California's SB 53 and New York's RAISE Act; why Brundage left OpenAI to build an independent nonprofit; AVERI's case for shifting the unit of analysis from individual AI models to the organizations that build them; the "Volkswagen problem" of deception-proofing safety evaluations; a framework of AI Assurance Levels ranging from baseline transparency to treaty-grade verification; the limitations of safety benchmarks and the BenchRisk project's findings; market-based mechanisms for driving audit adoption, including insurance, procurement, and investor pressure; and how AVERI navigates the tension between proximity to industry and independence from it.

    Mentioned in this episode:
    Frontier AI Auditing: Toward Rigorous Third-Party Assessment of Safety and Security Practices at Leading AI Companies, AVERI, 2026
    Risk Management for Mitigating Benchmark Failure Modes: BenchRisk, NeurIPS 2025
    Why I'm Leaving OpenAI and What I'm Doing Next, Miles Brundage, Substack, October 2024

    53 min
  2. March 24

    Why Data Governance Is the Key to AI Biosecurity, with Jassi Pannu and Doni Bloomfield

    Alan Rozenshtein, research director at Lawfare, spoke with Jassi Pannu, assistant professor at the Johns Hopkins Bloomberg School of Public Health and senior scholar at the Johns Hopkins Center for Health Security, and Doni Bloomfield, associate professor of law at Fordham Law School, about their proposed framework for governing biological data to reduce AI-enabled biosecurity risks.

    The conversation covered the origins of the proposal in the 50th anniversary of the 1975 Asilomar conference on recombinant DNA; the distinction between general-purpose AI models and biology-specific foundation models like genomic language models; the biosecurity threats posed by AI, including uplift of novice actors and raising the ceiling of expert capabilities; the proposed biosecurity data levels (BDL 0-4) framework and how it draws on precedents from biosafety levels and genetic privacy regulation; the challenge of capabilities-based rather than pathogen-based data classification; the institutional and regulatory mechanisms for enforcement, including the role of NIH grant conditions and a proposed mandatory federal regime; international collaboration and the importance of U.S. leadership given that most high-tier data is generated domestically; the relationship between the proposal and open-source biological AI development; and the offense-defense imbalance in biosecurity and the case for mandatory gene synthesis screening.

    Mentioned in this episode:
    Jassi Pannu, Doni Bloomfield, et al., "Biological data governance in an age of AI," Science (2026)
    Jassi Pannu, Doni Bloomfield, et al., "Dual-use capabilities of concern of biological AI models," PLOS Computational Biology (2025)
    Dario Amodei, "The Adolescence of Technology" (2026)
    The Genesis Mission Executive Order (November 2025)

    50 min
