AI Safety Newsletter

Center for AI Safety

Narrations of the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required. This podcast also contains narrations of some of our publications.

About us: The Center for AI Safety (CAIS) is a San Francisco-based research and field-building nonprofit. We believe that artificial intelligence has the potential to profoundly benefit the world, provided that we can develop and use it safely. However, in contrast to the dramatic progress in AI, many basic problems in AI safety have yet to be solved. Our mission is to reduce societal-scale risks associated with AI by conducting safety research, building the field of AI safety researchers, and advocating for safety standards. Learn more at https://safe.ai

  1. Oct 1

    AISN #42: Newsom Vetoes SB 1047

    Plus, OpenAI's o1 and an AI governance summary. Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required. Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts.

    Newsom Vetoes SB 1047. On Sunday, Governor Newsom vetoed California's Senate Bill 1047 (SB 1047), the most ambitious legislation to date aimed at regulating frontier AI models. The bill, introduced by Senator Scott Wiener and covered in a previous newsletter, would have required AI developers to test frontier models for hazardous capabilities and take steps to mitigate catastrophic risks. (CAIS Action Fund was a co-sponsor of SB 1047.) Newsom stated that SB 1047 is not comprehensive enough. In his letter to the California Senate, the governor argued that “SB 1047 does not take into account whether an AI system is deployed in high-risk environments, involves [...]

    Outline:
    (00:18) Newsom Vetoes SB 1047
    (01:55) OpenAI's o1
    (06:44) AI Governance
    (10:32) Links

    The original text contained 3 images which were described by AI.
    First published: October 1st, 2024
    Source: https://newsletter.safe.ai/p/ai-safety-newsletter-42-newsom-vetoes
    Want more? Check out our ML Safety Newsletter for technical safety research. Narrated by TYPE III AUDIO.

    13min
  2. Sep 11

    AISN #41: The Next Generation of Compute Scale

    Plus, Ranking Models by Susceptibility to Jailbreaking, and Machine Ethics. Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts.

    The Next Generation of Compute Scale. AI development is on the cusp of a dramatic expansion in compute scale. Recent developments across multiple fronts—from chip manufacturing to power infrastructure—point to a future where AI models may dwarf today's largest systems. In this story, we examine key developments and their implications for the future of AI compute.

    xAI and Tesla are building massive AI clusters. Elon Musk's xAI has brought its Memphis supercluster—“Colossus”—online. According to Musk, the cluster has 100k Nvidia H100s, making it the largest supercomputer in the world. Moreover, xAI plans to add 50k H200s in the next few months. For comparison, Meta's Llama 3 was trained on 16k H100s. Meanwhile, Tesla's “Gigafactory Texas” is expanding to house an AI supercluster. Tesla's Gigafactory supercomputer [...]

    Outline:
    (00:18) The Next Generation of Compute Scale
    (04:36) Ranking Models by Susceptibility to Jailbreaking
    (06:07) Machine Ethics

    The original text contained 1 image which was described by AI.
    First published: September 11th, 2024
    Source: https://newsletter.safe.ai/p/ai-safety-newsletter-41-the-next
    Want more? Check out our ML Safety Newsletter for technical safety research. Narrated by TYPE III AUDIO.

    12min
  3. Aug 21

    AISN #40: California AI Legislation

    Plus, NVIDIA Delays Chip Production, and Do AI Safety Benchmarks Actually Measure Safety? Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts.

    SB 1047, the Most-Discussed California AI Legislation. California's Senate Bill 1047 has sparked discussion over AI regulation. While state bills often fly under the radar, SB 1047 has garnered attention due to California's unique position in the tech landscape. If passed, SB 1047 would apply to all companies doing business in the state, potentially setting a precedent for AI governance more broadly. This newsletter examines the current state of the bill, which has been amended in response to feedback from various stakeholders. We'll cover recent debates surrounding the bill, support from AI experts, opposition from the tech industry, and public opinion based on polling. The bill mandates safety protocols, testing procedures, and reporting requirements for covered AI models. The bill was [...]

    Outline:
    (00:18) SB 1047, the Most-Discussed California AI Legislation
    (04:38) NVIDIA Delays Chip Production
    (06:49) Safetywashing: Do AI Safety Benchmarks Actually Measure Safety Progress?
    (10:22) Links

    The original text contained 1 image which was described by AI.
    First published: August 21st, 2024
    Source: https://newsletter.safe.ai/p/aisn-40-california-ai-legislation
    Want more? Check out our ML Safety Newsletter for technical safety research. Narrated by TYPE III AUDIO.

    14min
  4. Jul 29

    AISN #39: Implications of a Trump Administration for AI Policy

    Plus, Safety Engineering Overview. Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts.

    Implications of a Trump administration for AI policy. Trump named Ohio Senator J.D. Vance—an AI regulation skeptic—as his pick for vice president. This choice sheds light on the AI policy landscape under a future Trump administration. In this story, we cover: (1) Vance's views on AI policy, (2) views of key players in the administration, such as Trump's party, donors, and allies, and (3) why AI safety should remain bipartisan.

    Vance has pushed for reducing AI regulations and making AI weights open. At a recent Senate hearing, Vance accused Big Tech companies of overstating risks from AI in order to justify regulations to stifle competition. This led tech policy experts to expect that Vance would favor looser AI regulations. However, Vance has also praised Lina Khan, Chair of the Federal Trade [...]

    Outline:
    (00:18) Implications of a Trump administration for AI policy
    (04:57) Safety Engineering
    (08:49) Links

    The original text contained 2 images which were described by AI.
    First published: July 29th, 2024
    Source: https://newsletter.safe.ai/p/ai-safety-newsletter-39-implications
    Want more? Check out our ML Safety Newsletter for technical safety research. Narrated by TYPE III AUDIO.

    12min
  5. Jul 9

    AISN #38: Supreme Court Decision Could Limit Federal Ability to Regulate AI

    Plus, “Circuit Breakers” for AI systems, and updates on China's AI industry. Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts.

    Supreme Court Decision Could Limit Federal Ability to Regulate AI. In a recent decision, the Supreme Court overruled the 1984 precedent Chevron v. Natural Resources Defense Council. In this story, we discuss the decision's implications for regulating AI.

    Chevron allowed agencies to flexibly apply expertise when regulating. The “Chevron doctrine” had required courts to defer to a federal agency's interpretation of a statute when that statute was ambiguous and the agency's interpretation was reasonable. Its elimination curtails federal agencies’ ability to regulate—including, as this article from LawAI explains, their ability to regulate AI. The Chevron doctrine expanded federal agencies’ ability to regulate in at least two ways. First, agencies could draw on their technical expertise to interpret ambiguous statutes [...]

    The original text contained 1 image which was described by AI.
    First published: July 9th, 2024
    Source: https://newsletter.safe.ai/p/ai-safety-newsletter-38-supreme-court
    Want more? Check out our ML Safety Newsletter for technical safety research. Narrated by TYPE III AUDIO.

    11min

