47 episodes

Listen to resources from the AI Safety Fundamentals: Governance course! https://aisafetyfundamentals.com/governance

AI Safety Fundamentals: Governance
BlueDot Impact

    • Technology

    Strengthening Resilience to AI Risk: A Guide for UK Policymakers

    This report from the Centre for Emerging Technology and Security and the Centre for Long-Term Resilience identifies policy levers as they apply to different stages of the AI lifecycle, which it splits into three: design, training, and testing; deployment and usage; and longer-term deployment and diffusion. It also introduces a risk mitigation hierarchy that ranks approaches in decreasing order of preference, arguing that “policy interventions will be most effective if they intervene at the point in the lifecycle where risk first arises.”
    While this document is designed for UK policymakers, most of its findings are broadly applicable.

    Original text:
    https://cetas.turing.ac.uk/sites/default/files/2023-08/cetas-cltr_ai_risk_briefing_paper.pdf

    Authors:
    Ardi Janjeva, Nikhil Mulani, Rosamund Powell, Jess Whittlestone, and Shahar Avin

    • 24 min
    The Convergence of Artificial Intelligence and the Life Sciences: Safeguarding Technology, Rethinking Governance, and Preventing Catastrophe

    This report by the Nuclear Threat Initiative primarily focuses on how AI’s integration into the biosciences could advance biotechnology but also pose potentially catastrophic biosecurity risks. It’s included as a core resource this week because the assigned pages offer a valuable case study of an under-discussed lever for AI risk mitigation: building resilience.
    Resilience in a risk reduction context is defined by the UN as “the ability of a system, community or society exposed to hazards to resist, absorb, accommodate, adapt to, transform and recover from the effects of a hazard in a timely and efficient manner, including through the preservation and restoration of its essential basic structures and functions through risk management.” As you read, consider other areas where policymakers could build societal resilience to mitigate AI risk.

    Original text:
    https://www.nti.org/wp-content/uploads/2023/10/NTIBIO_AI_FINAL.pdf

    Authors:
    Sarah R. Carter, Nicole E. Wheeler, Sabrina Chwalek, Christopher R. Isaac, and Jaime Yassif

    • 8 min
    What is AI Alignment?

    To prevent AI systems from going rogue, we’ll have to align them. In this article, Adam Jones of BlueDot Impact introduces the concept of aligning AIs, which he defines as “making AI systems try to do what their creators intend them to do.”

    Original text:
    https://aisafetyfundamentals.com/blog/what-is-ai-alignment/

    Author:
    Adam Jones

    • 11 min
    Rogue AIs

    This excerpt from CAIS’s AI Safety, Ethics, and Society textbook provides a deep dive into the CAIS resource from session three, focusing specifically on the challenges of controlling advanced AI systems.

    Original text:
    https://www.aisafetybook.com/textbook/1-5

    Author:
    The Center for AI Safety

    • 34 min
    An Overview of Catastrophic AI Risks

    This article from the Center for AI Safety provides an overview of ways that advanced AI could cause catastrophe. It groups catastrophic risks into four categories: malicious use, AI race, organizational risk, and rogue AIs. The article is a summary of a longer paper, linked at the bottom of the original page.

    Original text:
    https://www.safe.ai/ai-risk

    Authors:
    Dan Hendrycks, Thomas Woodside, Mantas Mazeika

    • 45 min
    Future Risks of Frontier AI

    This report from the UK’s Government Office for Science envisions five potential risk scenarios from frontier AI. Each scenario includes information on the AI system’s capabilities, ownership and access, safety, level and distribution of use, and geopolitical context. It provides key policy issues for each scenario and concludes with an overview of existential risk. If you have extra time, we’d recommend you read the entire document.

    Original text:
    https://assets.publishing.service.gov.uk/media/653bc393d10f3500139a6ac5/future-risks-of-frontier-ai-annex-a.pdf

    Author:
    The UK Government Office for Science

    • 40 min
