41 episodes

Narrations of the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required.

This podcast also contains narrations of some of our publications.

ABOUT US

The Center for AI Safety (CAIS) is a San Francisco-based research and field-building nonprofit. We believe that artificial intelligence has the potential to profoundly benefit the world, provided that we can develop and use it safely. However, in contrast to the dramatic progress in AI, many basic problems in AI safety have yet to be solved. Our mission is to reduce societal-scale risks associated with AI by conducting safety research, building the field of AI safety researchers, and advocating for safety standards.

Learn more at https://safe.ai

AI Safety Newsletter
Center for AI Safety

    • Technology

    AISN #38: Supreme Court Decision Could Limit Federal Ability to Regulate AI

    Plus, “Circuit Breakers” for AI systems, and updates on China's AI industry.
    Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts.
    Supreme Court Decision Could Limit Federal Ability to Regulate AI
    In a recent decision, the Supreme Court overruled the 1984 precedent Chevron v. Natural Resources Defense Council. In this story, we discuss the decision's implications for regulating AI.
    Chevron allowed agencies to flexibly apply expertise when regulating. The “Chevron doctrine” had required courts to defer to a federal agency's interpretation of a statute when the statute was ambiguous and the agency's interpretation was reasonable. Its elimination curtails federal agencies’ ability to regulate—including, as this article from LawAI explains, their ability to regulate AI.
    The Chevron doctrine expanded federal agencies’ ability to regulate in at least two ways. First, agencies could draw on their technical expertise to interpret ambiguous statutes [...]
    ---
    Outline:
    (00:16) Supreme Court Decision Could Limit Federal Ability to Regulate AI
    (02:18) “Circuit Breakers” for AI Systems
    (04:45) Updates on China's AI Industry
    (07:32) Links
    ---

    First published: July 9th, 2024
    Source: https://newsletter.safe.ai/p/ai-safety-newsletter-38-supreme-court

    ---
    Want more? Check out our ML Safety Newsletter for technical safety research.


    Narrated by TYPE III AUDIO.

    • 10 min
    AISN #37: US Launches Antitrust Investigations

    US Launches Antitrust Investigations
    The U.S. government has launched antitrust investigations into Nvidia, OpenAI, and Microsoft. The U.S. Department of Justice (DOJ) and the Federal Trade Commission (FTC) have agreed to investigate potential antitrust violations by the three companies, the New York Times reported. The DOJ will lead the investigation into Nvidia, while the FTC will focus on OpenAI and Microsoft.
    Antitrust investigations are conducted by government agencies to determine whether companies are engaging in anticompetitive practices that may harm consumers and stifle competition.
    Nvidia investigated for GPU dominance. The New York Times reports that concerns have been raised about Nvidia's dominance in the GPU market, “including how the company's software locks [...]
    ---
    Outline:
    (00:10) US Launches Antitrust Investigations
    (02:58) Recent Criticisms of OpenAI and Anthropic
    (05:40) Situational Awareness
    (09:14) Links
    ---

    First published: June 18th, 2024
    Source: https://newsletter.safe.ai/p/ai-safety-newsletter-37-us-launches

    ---
    Want more? Check out our ML Safety Newsletter for technical safety research.


    Narrated by TYPE III AUDIO.

    • 11 min
    AISN #36: Voluntary Commitments are Insufficient

    Voluntary Commitments are Insufficient
    AI companies agree to RSPs in Seoul. Following the AI Seoul Summit, the second global AI safety summit, the UK and Republic of Korea governments announced that 16 major technology organizations, including Amazon, Google, Meta, Microsoft, OpenAI, and xAI, have agreed to a new set of Frontier AI Safety Commitments.
    Some commitments from the agreement include:
    • Assessing risks posed by AI models and systems throughout the AI lifecycle.
    • Setting thresholds for severe risks, defining when a model or system would pose intolerable risk if not adequately mitigated.
    • Keeping risks within defined thresholds, such as by modifying system behaviors and implementing robust security controls.
    • Potentially halting development or deployment if risks cannot be sufficiently mitigated.
    These commitments [...]
    ---
    Outline:
    (00:03) Voluntary Commitments are Insufficient
    (02:45) Senate AI Policy Roadmap
    (05:18) Chapter 1: Overview of Catastrophic Risks
    (07:56) Links
    ---

    First published: May 30th, 2024
    Source: https://newsletter.safe.ai/p/ai-safety-newsletter-36-voluntary

    ---
    Want more? Check out our ML Safety Newsletter for technical safety research.


    Narrated by TYPE III AUDIO.

    • 10 min
    AISN #35: Lobbying on AI Regulation

    OpenAI and Google Announce New Multimodal Models
    In the current paradigm of AI development, there are long delays between the release of successive models. Progress is largely driven by increases in computing power, and training models with more computing power requires building large new data centers.
    More than a year after the release of GPT-4, OpenAI has yet to release GPT-4.5 or GPT-5, which would presumably be trained on 10x or 100x more compute than GPT-4, respectively. These models might be released over the next year or two, and could represent large spikes in AI capabilities.
    But OpenAI did announce a new model last week, called GPT-4o. The “o” stands for “omni,” referring to the fact that the model can use text, images, videos [...]
    ---
    Outline:
    (00:03) OpenAI and Google Announce New Multimodal Models
    (02:36) The Surge in AI Lobbying
    (05:29) How Should Copyright Law Apply to AI Training Data?
    (10:10) Links
    ---

    First published: May 16th, 2024
    Source: https://newsletter.safe.ai/p/ai-safety-newsletter-35-lobbying

    ---
    Want more? Check out our ML Safety Newsletter for technical safety research.


    Narrated by TYPE III AUDIO.

    • 12 min
    AISN #34: New Military AI Systems

    Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required.
    AI Labs Fail to Uphold Safety Commitments to UK AI Safety Institute
    In November, leading AI labs committed to sharing their models before deployment to be tested by the UK AI Safety Institute. But reporting from Politico shows that these commitments have fallen through.
    OpenAI, Anthropic, and Meta have all failed to share their models with the UK AISI before deployment. Only Google DeepMind, headquartered in London, has given pre-deployment access to the UK AISI.
    Anthropic released the most powerful publicly available language model, Claude 3, without any window for pre-release testing by the UK AISI. When asked for comment, Anthropic co-founder Jack Clark said, “Pre-deployment testing is a nice idea but very difficult to implement.”
    When asked about their concerns with pre-deployment testing [...]
    ---
    Outline:
    (00:03) AI Labs Fail to Uphold Safety Commitments to UK AI Safety Institute
    (02:17) New Bipartisan AI Policy Proposals in the US Senate
    (06:35) Military AI in Israel and the US
    (11:44) New Online Course on AI Safety from CAIS
    (12:38) Links
    ---

    First published: May 1st, 2024
    Source: https://newsletter.safe.ai/p/ai-safety-newsletter-34-new-military

    ---
    Want more? Check out our ML Safety Newsletter for technical safety research.


    Narrated by TYPE III AUDIO.

    • 17 min
    AISN #33: Reassessing AI and Biorisk

    Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required.
    This week, we cover:
    • Consolidation in the corporate AI landscape, as smaller startups join forces with larger funders.
    • Several countries have announced new investments in AI, including Singapore, Canada, and Saudi Arabia.
    • Congress's budget for 2024 provides some but not all of the requested funding for AI policy. The White House's 2025 proposal makes more ambitious requests for AI funding.
    • How will AI affect biological weapons risk? We reexamine this question in light of new experiments from RAND, OpenAI, and others.
    AI Startups Seek Support From Large Financial Backers
    As AI development demands ever-increasing compute resources, only well-resourced developers can compete at the frontier. In practice, this means that AI startups must either partner with the world's [...]
    ---
    Outline:
    (00:45) AI Startups Seek Support From Large Financial Backers
    (03:47) National AI Investments
    (05:16) Federal Spending on AI
    (08:35) An Updated Assessment of AI and Biorisk
    (15:35) $250K in Prizes: SafeBench Competition Announcement
    (16:08) Links
    ---

    First published: April 11th, 2024
    Source: https://newsletter.safe.ai/p/ai-safety-newsletter-33-reassessing

    ---
    Want more? Check out our ML Safety Newsletter for technical safety research.


    Narrated by TYPE III AUDIO.

    • 20 min
