a16z AI Policy Brief

a16z Policy

Your guide to AI public policy from the team at a16z. Each conversation bridges Washington and Little Tech, bringing together policy leaders, researchers, and builders to explore how the US stays ahead in AI. a16zpolicy.substack.com

  1. Catching Up on the Current Moment in AI Policy

    4 DAYS AGO

    Catching Up on the Current Moment in AI Policy

    In this conversation, Matt Perault, head of AI policy, and Collin McCune, head of government affairs, take stock of the current AI policy moment. As AI policy moves beyond rhetoric and into a more consequential phase, Matt and Collin separate signal from noise. They unpack where momentum is building in Washington, how state activity continues to drive the policy environment, and what it all means for Little Tech. Along the way, they dig into some of the most active debates, including proposals focused on protecting kids, workforce disruption, data centers, benchmarking and licensing regimes, and the evolving balance between federal and state action. Enjoy.

    Topics covered:
    01:14: The current AI policy moment
    03:45: The White House National AI Framework: what’s new and what’s next
    11:49: Kids, AI access, and the case against bans
    17:49: Data centers, communities, and energy policy
    19:56: Workforce disruption, retraining, and labor policy
    25:32: Copyright, censorship, and other key debates
    26:55: The Democrat perspective and response
    32:27: Benchmarking, testing, and startup access
    33:23: Licensing regimes and regulatory capture risks
    38:16: What’s next in Congress?
    44:00: States at the center of AI policymaking
    48:22: Preemption, federalism, and the state-federal divide
    55:27: Dormant Commerce Clause implications
    58:59: Why Little Tech needs to stay engaged now

    Resources:
    Subscribe to the a16z AI Policy Brief: https://a16zpolicy.substack.com/
    Follow Matt Perault: https://x.com/MattPerault
    Follow Collin McCune: https://x.com/Collin_McCune

    Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.

    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit a16zpolicy.substack.com

    57 min
  2. Open Models, Measurable Safeguards

    16 APR

    Open Models, Measurable Safeguards

    Black Forest Labs has established itself as a pioneer in visual intelligence, with its open-weight FLUX models reaching over 50 million downloads on Hugging Face and rivaling models from Google, OpenAI, and DeepSeek in developer adoption. The company has distinguished itself not only through technical capability, but through a strong commitment to open research. In this conversation, Black Forest Labs’ Adam Chen and Ben Brooks, who lead the company’s legal and policy work, join Matt Perault to discuss what it means to build frontier visual AI openly. They explain the role of open models in advancing transparency, driving down the cost of innovation for developers, and strengthening security and sovereignty by reducing the world’s reliance on a handful of closed APIs. They also outline the unique policy challenges facing open-weight model developers. For policymakers, their message is clear: supporting open innovation does not require abandoning oversight. It requires targeted rules, analysis of where harms arise, and a better understanding of how proposed regulations land on smaller frontier labs, not just the largest incumbents. The conversation also offers a window into what it looks like to build a policy function at a startup. Adam and Ben offer a candid view into how they enable their small team to have outsized impact, rather than trying to match Big Tech’s playbook.

    Topics covered:
    00:48: Intro
    01:57: What is Black Forest Labs?
    03:13: The makeup of a legal team at a frontier AI startup
    07:14: The role of visual intelligence in the AI ecosystem
    09:49: Core risks and baseline safeguards for visual models
    10:34: Unique policy challenges of open-weight models
    12:25: Restricting access to general-purpose technology should be a last resort
    15:52: What’s at stake: open models as soft power and the China dynamic
    20:07: BFL’s approach to being open and responsible
    22:26: BFL’s model testing results
    24:59: How a four-person legal team approaches disclosure and compliance
    28:32: What works and what doesn’t in transparency proposals
    31:07: Navigating the state, federal, and international patchwork as a startup
    33:47: BFL’s advocacy goals
    37:13: The Little Tech voice as a competitive advantage in the policy ecosystem

    41 min
  3. Early Signals on AI, Hiring, and the Workforce

    31 MAR

    Early Signals on AI, Hiring, and the Workforce

    How is AI changing work? In this episode, Matt Perault sat down with Nick Catino, global head of policy at Deel, to better understand what today’s data can already tell us. Through its HR and payroll platform, Deel works with 35,000 customers and 1.5 million workers across more than 150 countries, giving the company a broad view across employers, geographies, and job categories as AI begins to change hiring and work. Nick walks through what Deel is seeing firsthand. That includes a 40% increase in the share of companies opening new AI roles in 2025. Deel’s recent global hiring report also found more than 70,000 AI trainer roles across 600-plus organizations, with nearly 60% of those roles located in the U.S., and AI trainer positions emerging as the fastest-growing global role on Deel’s platform. The conversation also explores what these shifts mean for policy. If AI is going to change how people work, Nick argues smart policy should focus on helping workers build AI fluency and new skills, supporting students as they prepare to enter the workforce, and giving startups the clarity they need as they hire and scale. Nick brings a valuable Little Tech perspective, drawing on his experience building public policy functions at fast-growing startups. For founders thinking about why startups need a seat at the table, along with when and how to engage with policymakers, this conversation is especially worthwhile.

    Topics covered:
    03:14: Deel’s global hiring view
    04:27: Building a startup policy function
    09:12: Data as a policy tool
    12:20: Early signals on AI and the workforce
    14:47: Job shifts and emerging roles
    17:30: Policy levers to support workers
    24:22: Why regulatory certainty matters for Little Tech
    27:01: Scaling Deel’s data insights
    29:20: The rise of AI trainer roles
    31:36: Lessons from building policy functions at fast-growing startups
    35:05: Why policymakers want to hear from Little Tech

    Resources:
    Read Deel's global hiring report: https://www.deel.com/global-hiring-report-2026/
    Learn more about Deel's HR and payroll platform: https://www.deel.com/partners/a16z.ecosystem?utm_source=podcast&utm_medium=partner-sourced&utm_content=a16z.ecosystem&utm_place=organic-community
    Subscribe to the a16z AI Policy Brief: https://a16zpolicy.substack.com/
    Follow Matt Perault: https://x.com/MattPerault
    Follow Nick Catino: https://x.com/CatinoNick

    37 min
  4. Cyber Resilience in an AI World

    24 MAR

    Cyber Resilience in an AI World

    The cybersecurity landscape is moving to an AI-vs-AI world where both attackers and defenders can operate at machine speed. In this conversation, Anne Neuberger and Sam Jones join Jai Ramaswamy to go deeper on what this shift looks like in practice. Neuberger draws on nearly two decades in government, including serving as deputy national security advisor for cyber and emerging technology, to explain how AI is transforming the threat landscape, most notably making attacks faster, cheaper, and continuous at scale. Jones brings the builder’s perspective as CEO and cofounder of Method Security, where his team is building autonomous cyber systems for both offense and defense, observing firsthand how AI is accelerating everything from routine tactics to exploit development. Together, they discuss what “cyber resilience” means in an AI world: continuous testing and red-teaming that was previously cost-prohibitive, clearer benchmarks for critical infrastructure, and faster recovery when disruptions happen. They also walk through the policy measures that can help defenders keep pace.

    Topics covered:
    03:16: How AI changes the threat landscape
    07:38: Net new risks in an AI-vs-AI cyber world
    11:21: Building trust to deploy new technology in no-fail systems
    13:55: Cybercrime at machine speed
    17:18: Who benefits more from AI: attackers or defenders?
    19:29: Tactics to remove friction for defenders
    22:26: Real examples of incidents where AI could have changed outcomes: Colonial Pipeline and Change Healthcare
    27:18: What cyber resilience means in an AI world
    30:57: Measuring resilience
    36:59: Information sharing and antitrust: lessons from financial services and telecom compromises
    44:59: The builder’s view: what Method Security is building for offense + defense
    49:27: Little Tech realities of building with a small team and selling into government
    52:53: The role of procurement in ensuring defensive systems keep pace with adversaries
    55:32: What’s next: in-year buying flexibility and closing thoughts

    Resources:
    Subscribe to the a16z AI Policy Brief: https://a16zpolicy.substack.com/
    Follow Jai Ramaswamy: https://www.linkedin.com/in/jai-ramaswamy-85a77675/
    Follow Anne Neuberger: https://www.linkedin.com/in/anne-neuberger-13b4491b/
    Follow Sam Jones: @___sjones

    56 min
  5. The Real AI Race With China Is Who Sets the Default

    17 MAR

    The Real AI Race With China Is Who Sets the Default

    In AI policy, it’s become a reflex to say we are in a global race with China. That shorthand can obscure the true nature of the competition. China and the U.S. aren’t just competing on model performance or chips; we’re competing on the next computing systems the world adopts, and, with them, on who holds economic and political power for the next generation. In this conversation, Jai Ramaswamy, our chief legal and policy officer, sits down with Matt Cronin, senior national security advisor at a16z, to make these competitive dynamics concrete. Cronin has worked on China-related national security issues as a federal prosecutor, held senior roles at the Department of Justice, served as Director of National Cybersecurity at the White House, and most recently served as Chief Investigative Counsel and Deputy General Counsel to the U.S. House Select Committee on the Strategic Competition between the U.S. and China. Their conversation unpacks the incentives driving the Chinese Communist Party’s push to dominate AI and what’s at stake if the world defaults to CCP-aligned AI rails. They also get practical, outlining what “winning” looks like for the U.S.; the role of open source in global adoption; and the policy levers that play to America's strengths. Finally, they zoom in on one opportunity with outsized impact, defense procurement reform, where Cronin has spent significant time. If you care about a future defined by democratic values and are interested in a practical path to defend it, this is an episode for you.

    Topics covered:
    01:19: The Chinese Communist Party’s motivations in the AI race
    04:08: Why China’s “miss” on the internet shaped its push into AI
    07:48: State-led vs. market-led innovation models
    10:48: What happened to China’s VC ecosystem
    13:09: China’s strategy for AI diffusion and adoption
    17:02: What’s at stake if China wins the AI race
    23:47: The 3 key measures of US success in AI
    25:25: Why open source matters for global adoption
    31:05: AI policy levers that play to America’s strengths in this global race
    34:15: Why defense procurement reform matters to the competitive dynamic
    42:22: Final takeaways on competition, policy, and democratic advantage

    Resources:
    Follow Jai Ramaswamy: https://twitter.com/jai_ramaswamy
    Follow Matt Cronin: https://www.linkedin.com/in/matt-cronin-8b88811
    Subscribe to the a16z AI Policy Brief: https://a16zpolicy.substack.com/

    45 min
  6. To Regulate AI Effectively, Focus on How It’s Used

    20 JAN

    To Regulate AI Effectively, Focus on How It’s Used

    One of the core pillars of a16z's roadmap for federal AI legislation makes clear that AI should not excuse wrongdoing. When people or companies use AI to break the law, existing criminal, civil rights, consumer protection, and antitrust frameworks should still apply. Enforcement agencies should have the resources they need to enforce the law. If existing bodies of law fall short in accounting for certain AI use cases, any new laws should be evidence-based, clearly defining marginal risks and the optimal approach to target harms directly. In this conversation, we go deeper on what that principle means in practice with Martin Casado, general partner at a16z, where he leads the firm’s infrastructure practice and invests in advanced AI systems and foundational compute. Martin joins Jai Ramaswamy and Matt Perault to discuss how decades of technology policy can inform how to address harmful uses of AI, defining marginal risk in AI, the importance of open source for long-term competitiveness, and more.

    Topics covered:
    01:55: A brief history of recent debates about how to regulate AI
    12:30: Regulating use vs. development: lessons from software and cybersecurity
    15:47: An open question in AI policy today: defining marginal risk
    18:33: Why social media is often the wrong analogy for AI regulation
    20:50: Enforcement tools available for holding bad actors to account
    24:11: Balancing many trade-offs in tech policy
    27:33: The role open source models play in soft power, the future of AI, and global competitiveness
    38:06: Implications of regulatory uncertainty
    41:32: Lawmakers want to act; what can they do now to enact effective policy?

    Resources:
    Follow Matt Perault: https://x.com/MattPerault
    Follow Jai Ramaswamy: https://twitter.com/jai_ramaswamy
    Follow Martin Casado: https://twitter.com/martin_casado
    Subscribe to the a16z AI Policy Brief: https://a16zpolicy.substack.com/

    45 min
  7. A Roadmap for Federal AI Legislation: Protect People, Empower Builders, Win the Future

    17 DEC 2025

    A Roadmap for Federal AI Legislation: Protect People, Empower Builders, Win the Future

    Debates in Washington often frame AI governance as a series of false choices: they pit innovation against safety, progress against protection, federal leadership against the rights of states. But at a16z, we believe these are not binaries. In order for America to realize the full promise of artificial intelligence, we must both build great products and protect people from AI-related harms. Congress can and should design a federal AI framework that protects individuals and families, while also safeguarding innovation and competition. This approach will allow startups and entrepreneurs, who we call Little Tech, to power America’s future growth while still addressing real risks. In this conversation, Jai Ramaswamy, chief legal and policy officer, Collin McCune, head of government affairs, and Matt Perault, head of AI policy at a16z, discuss the current moment in AI policy along with a16z's AI policy agenda built on nine pillars that work to keep Americans safe while keeping the U.S. in the lead.

    Topics covered:
    00:00: Intro
    00:58: Recapping the current moment in AI policy: state proposals, EO, and preemption debates
    09:17: Is Congress gridlocked on AI?
    12:09: Are safety and innovation at odds?
    16:35: a16z’s policy agenda and 9-pillar roadmap to federal AI legislation
    22:32: Protecting kids from AI-related harms
    24:49: US AI leadership, China, and competition
    29:04: Cybersecurity and national security risks
    34:59: What’s next for federal AI legislation

    Resources:
    Follow Matt Perault: https://x.com/MattPerault
    Follow Collin McCune: https://x.com/Collin_McCune
    Follow Jai Ramaswamy: https://www.linkedin.com/in/jai-ramaswamy-85a77675
    Stay updated: Subscribe to the a16z AI Policy Brief: https://a16zpolicy.substack.com/

    38 min
  8. Beyond Preemption: Lessons from the 1996 Telecom Act

    12 DEC 2025

    Beyond Preemption: Lessons from the 1996 Telecom Act

    If you squint at today’s AI policy debates, you may see the Telecommunications Act of 1996 in the distance. In this conversation, Matt Perault, head of AI policy at a16z, sits down with Adam Thierer, resident senior fellow for technology and innovation at the R Street Institute, and Blair Levin, policy analyst at New Street Research and non-resident senior associate at the Center for Strategic and International Studies, to revisit their first-hand experience tackling a similarly significant moment in tech policy: a small number of incumbents with entrenched market power, a messy patchwork of federal and local rules, and misaligned governing authority. The result then was federal preemption coupled with a comprehensive national framework for telecommunications, all through a bipartisan deal.

    Topics covered:
    00:00: The Telecom Act’s “big bargain”
    02:05: Competition as the heart of the deal
    04:26: Telecom’s regulatory thicket and move to a national framework
    07:39: Preemption, ambiguity, and the FCC’s role
    11:58: How the Telecom Act got done: politics, persuasion, and public opinion
    17:39: Terminating access charges and “regulating on behalf of” the internet
    21:13: Federal vs. state authority and lessons for AI
    26:09: Leadership, vision, and a new “constitutional moment” for tech policy
    34:57: Institutional capacity and the missing expert home for AI
    39:55: What a “Telecom Act for AI” might look like

    Resources:
    Follow Matt Perault: https://x.com/MattPerault
    Subscribe to the a16z AI Policy Brief: https://a16zpolicy.substack.com/

    46 min
