Summary/specific topics:
- Stress Tests and AI Regulation: Nathan elaborates on the concept of stress tests conducted by central banks. These tests assess the resilience of banks to severe economic downturns and the potential for a domino effect if one bank fails. They believe that lessons from this process can be applied to AI regulation. Aaron agrees, but also highlights the need for a proactive approach to AI regulation, as opposed to the reactive measures often seen in banking regulation.
- The Role of Central Banks in AI Regulation: Nathan suggests that institutions structured like central banks, staffed with technical experts and independent from government, could be beneficial for AI regulation. They believe such institutions could respond quickly and effectively to crises. However, they acknowledge that this approach may not be effective if AI development leads to rapid, uncontrollable self-improvement.
- Compute Governance: The conversation then shifts to compute governance, which Nathan sees as a promising area for AI regulation because large-scale compute use is difficult to conceal. They believe this could give governments a control lever over cutting-edge AI labs, analogous to the levers central banks hold over bank lending and operations.
- AI Regulation and the Role of Public Actors: Nathan acknowledges that the leaders of major AI labs seem sensible and aligned with AI safety principles. However, they argue that regulation and public actors can play a crucial role in creating common knowledge between labs and preventing a race to the bottom. They also discuss the potential benefits and drawbacks of different regulatory approaches.
- Financial Regulation as a Model for AI Regulation: Nathan believes that post-crisis financial regulation, such as the Dodd-Frank Act, has generally been effective. They suggest that AI regulation could follow a similar path, especially if AI becomes a significant part of the economy. However, Aaron expresses skepticism about the ability of political processes to produce effective AI regulation.
- Regulation Before and After Crises: The speakers agree that pre-crisis regulation has generally been less effective than post-crisis regulation. They discuss the potential for AI regulation to follow a similar pattern, with effective regulation emerging in response to a crisis.
- Regulatory Arbitrage: The conversation concludes with a discussion of regulatory arbitrage, the practice of banks shifting activities to jurisdictions where doing business is cheapest. Despite evidence of this behavior, Nathan notes that the financial crisis did not produce a regulatory race to the bottom.
Information
- Published: July 13, 2023 at 3:13 AM UTC
- Length: 50 min
- Rating: Clean