AI Safety Fundamentals: Governance
BlueDot Impact · 35 episodes · Technology

Readings from the AI Safety Fundamentals: Governance course.

    [Week 1] “A short introduction to machine learning” by Richard Ngo

    Despite the current popularity of machine learning, I haven’t found any short introductions to it which quite match the way I prefer to introduce people to the field. So here’s my own. Compared with other introductions, I’ve focused less on explaining each concept in detail, and more on explaining how they relate to other important concepts in AI, especially in diagram form. If you’re new to machine learning, you shouldn’t expect to fully understand most of the concepts explained here just after reading this post; the goal is instead to provide a broad framework which will contextualise more detailed explanations you’ll receive from elsewhere. I’m aware that high-level taxonomies can be controversial, and also that it’s easy to fall into the illusion of transparency when trying to introduce a field; so suggestions for improvements are very welcome!

    The key ideas are contained in this summary diagram. First, some quick clarifications:

    • None of the boxes are meant to be comprehensive; we could add more items to any of them. So you should picture each list ending with “and others”.
    • The distinction between tasks and techniques is not a firm or standard categorisation; it’s just the best way I’ve found so far to lay things out.
    • The summary is explicitly from an AI-centric perspective. For example, statistical modeling and optimization are fields in their own right; but for our current purposes we can think of them as machine learning techniques.
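    To make the tasks-versus-techniques framing concrete, here is a minimal sketch (our illustration, not from the post), assuming nothing beyond NumPy: classification is the task, while statistical modeling (logistic regression) and gradient-descent optimization are the techniques applied to it.

```python
# A toy example of the tasks-vs-techniques distinction (our sketch).
import numpy as np

rng = np.random.default_rng(0)

# The task: classify 2-D points by which side of a hidden boundary they fall on.
X = rng.normal(size=(200, 2))
y = (X @ np.array([2.0, -1.0]) > 0).astype(float)

# The techniques: a statistical model (logistic regression) whose weights are
# fitted by gradient-descent optimization of the log-loss.
w = np.zeros(2)
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w)))  # the model's predicted probabilities
    grad = X.T @ (p - y) / len(y)   # gradient of the log-loss
    w -= 0.5 * grad                 # one optimization step

accuracy = (((1 / (1 + np.exp(-(X @ w)))) > 0.5) == y).mean()
print(f"learned weights: {w}, training accuracy: {accuracy:.2f}")
```
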
    Original text:
    https://www.alignmentforum.org/posts/qE73pqxAZmeACsAdF/a-short-introduction-to-machine-learning
    Narrated for AI Safety Fundamentals by Perrin Walker of TYPE III AUDIO.
    ---

    [Week 1] “The AI Triad and What It Means for National Security Strategy” by Ben Buchanan

    A single sentence can summarize the complexities of modern artificial intelligence: Machine learning systems use computing power to execute algorithms that learn from data. Everything that national security policymakers truly need to know about a technology that seems simultaneously trendy, powerful, and mysterious is captured in those 13 words. They specify a paradigm for modern AI—machine learning—in which machines draw their own insights from data, unlike the human-driven expert systems of the past.

    The same sentence also introduces the AI triad of algorithms, data, and computing power. Each element is vital to the power of machine learning systems, though their relative priority changes based on technological developments.
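    As a toy illustration of how the three elements fit together (our sketch, not from Buchanan’s paper), the snippet below labels where data, algorithms, and computing power each enter a minimal machine learning system.

```python
# Mapping the AI triad onto a toy learning system (our illustration).
import numpy as np

rng = np.random.default_rng(42)

# DATA: examples the system learns from (noisy samples of y = 3x + 1).
x = rng.uniform(-1, 1, size=500)
y = 3 * x + 1 + rng.normal(scale=0.1, size=500)

# ALGORITHM: the learning rule, here gradient descent on squared error.
slope, intercept, learning_rate = 0.0, 0.0, 0.1
for _ in range(1000):              # COMPUTING POWER: repeated arithmetic over
    pred = slope * x + intercept   # the data; frontier systems run this kind
    err = pred - y                 # of loop at vastly larger scale.
    slope -= learning_rate * (err * x).mean()
    intercept -= learning_rate * err.mean()

print(f"learned: y ≈ {slope:.2f}x + {intercept:.2f}")  # close to y = 3x + 1
```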

    Source:
    https://cset.georgetown.edu/wp-content/uploads/CSET-AI-Triad-Report.pdf

    Narrated for AI Safety Fundamentals by Perrin Walker of TYPE III AUDIO.
    ---

    [Week 1] “Visualizing the deep learning revolution” by Richard Ngo

    The field of AI has undergone a revolution over the last decade, driven by the success of deep learning techniques. This post aims to convey three ideas using a series of illustrative examples:
    • There have been huge jumps in the capabilities of AIs over the last decade, to the point where it’s becoming hard to specify tasks that AIs can’t do.
    • This progress has been primarily driven by scaling up a handful of relatively simple algorithms (rather than by developing a more principled or scientific understanding of deep learning); a toy version of this point is sketched below.
    • Very few people predicted that progress would be anywhere near this fast; but many of those who did also predicted that we might face existential risk from AGI in the coming decades.

    I’ll focus on four domains: vision, games, language-based tasks, and science. The first two have more limited real-world applications, but provide particularly graphic and intuitive examples of the pace of progress.
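    To make the second point concrete, here is a minimal sketch (our illustration, not from the post): the core deep learning training loop is the same few lines whether the model is tiny or enormous; scaling up mostly means turning up the width, data, and step counts.

```python
# The same simple algorithm at different scales (our illustration).
import numpy as np

def train(width: int, n_examples: int, steps: int, seed: int = 0) -> float:
    """Train a one-hidden-layer network on a toy regression task and
    return its final mean squared error."""
    rng = np.random.default_rng(seed)
    X = rng.normal(size=(n_examples, 4))
    y = np.sin(X).sum(axis=1)                    # target function to learn

    W1 = rng.normal(scale=0.1, size=(4, width))  # scale knob: model size
    W2 = rng.normal(scale=0.1, size=(width,))
    lr = 0.01
    for _ in range(steps):                       # scale knob: compute
        h = np.tanh(X @ W1)                      # forward pass
        err = h @ W2 - y
        # Backpropagation: the update rule is identical at every scale.
        W2 -= lr * h.T @ err / n_examples
        W1 -= lr * X.T @ ((err[:, None] * W2) * (1 - h**2)) / n_examples
    return float((err**2).mean())

# The same loop, with the scale knobs turned up:
print(train(width=8, n_examples=100, steps=500))
print(train(width=256, n_examples=5000, steps=2000))
```
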
    Original article:
    https://medium.com/@richardcngo/visualizing-the-deep-learning-revolution-722098eb9c5
    Author:
    Richard Ngo
    ---

    [Week 2] Overview of how AI might exacerbate long-running catastrophic risks

    Developments in AI could exacerbate long-running catastrophic risks, including bioterrorism, disinformation and resulting institutional dysfunction, misuse of concentrated power, nuclear and conventional war, other coordination failures, and unknown risks. This document compiles research on how AI might raise these risks. (Other material in this course discusses more novel risks from AI.) We draw heavily from previous overviews by academics, particularly Dafoe (2020) and Hendrycks et al. (2023).

    Source:
    https://aisafetyfundamentals.com/governance-blog/overview-of-ai-risk-exacerbation

    Narrated for AI Safety Fundamentals by Perrin Walker of TYPE III AUDIO.
    ---

    [Week 2] The Need For Work On Technical AI Alignment

    This page gives an overview of the alignment problem. It describes our motivation for running courses about technical AI alignment. The terminology should be relatively broadly accessible (not assuming any previous knowledge of AI alignment or much knowledge of AI/computer science).

    This piece describes the basic case for AI alignment research, which is research that aims to ensure that advanced AI systems can be controlled or guided towards the intended goals of their designers. Without such work, advanced AI systems could potentially act in ways that are severely at odds with their designers’ intended goals. Such a situation could have serious consequences, plausibly even causing an existential catastrophe.
    In this piece, I elaborate on five key points to make the case for AI alignment research.
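    As a toy illustration of the problem (our sketch, not from the source): an optimizer given an imperfect proxy objective can drive the measured reward up while the designers’ intended goal gets steadily worse.

```python
# Proxy-reward misspecification in miniature (our illustration).

def intended_goal(work: float, gaming: float) -> float:
    """What the designers actually want: real output, minus the harm
    done by gaming the metric."""
    return work - 0.1 * work**2 - 0.5 * gaming

def proxy_reward(work: float, gaming: float) -> float:
    """What the system is actually optimized for: a measured score that
    metric-gaming inflates just as effectively as real work does."""
    return work - 0.1 * work**2 + gaming

# Greedy hill climbing on the proxy (a stand-in for any optimizer).
work, gaming, step = 0.0, 0.0, 0.1
for _ in range(200):
    for d_work, d_gaming in [(step, 0), (-step, 0), (0, step), (0, -step)]:
        if proxy_reward(work + d_work, gaming + d_gaming) > proxy_reward(work, gaming):
            work, gaming = work + d_work, gaming + d_gaming

print(f"work={work:.1f}, gaming={gaming:.1f}")
print(f"proxy reward:  {proxy_reward(work, gaming):.1f}")   # keeps climbing
print(f"intended goal: {intended_goal(work, gaming):.1f}")  # goes negative
```

    The optimizer ends with a high proxy score and a negative intended-goal score; alignment research studies how to close this kind of gap in far more capable systems.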

    Source:
    https://aisafetyfundamentals.com/alignment-introduction

    Narrated for AI Safety Fundamentals by Perrin Walker of TYPE III AUDIO.
    ---

    [Week 2] “As AI agents like Auto-GPT speed up generative AI race, we all need to buckle up” by Sharon Goldman

    If you thought the pace of AI development had sped up since the release of ChatGPT last November, well, buckle up. Thanks to the rise of autonomous AI agents like Auto-GPT, BabyAGI and AgentGPT over the past few weeks, the race to get ahead in AI is just getting faster. And, many experts say, more concerning.
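    The pattern these agents share is a simple control loop: a language model proposes an action, a harness executes it, and the result is fed back into the next prompt with no human in between. The sketch below is our simplification, not Auto-GPT’s actual code; call_llm and the stub tools are hypothetical stand-ins.

```python
# A minimal autonomous-agent loop (our simplification; the LLM call and
# tools below are hypothetical stand-ins, not a real API).
from typing import Callable

_canned_replies = iter([
    "search: recent autonomous AI agent frameworks",
    "DONE: wrote a short summary of the findings",
])

def call_llm(prompt: str) -> str:
    """Stand-in for a real language-model API; replays canned replies so
    this sketch runs end to end."""
    return next(_canned_replies)

TOOLS: dict[str, Callable[[str], str]] = {
    "search": lambda query: f"(stub) search results for {query!r}",
    "write_file": lambda text: "(stub) file written",
}

def run_agent(goal: str, max_steps: int = 10) -> None:
    """The core loop: the model proposes an action, the harness executes
    it, and the observation is appended to the next prompt."""
    history = f"Goal: {goal}\n"
    for _ in range(max_steps):
        reply = call_llm(history + "Respond as 'TOOL: input' or 'DONE: answer'.")
        if reply.startswith("DONE:"):
            print(reply)
            return
        tool_name, _, tool_input = reply.partition(":")
        tool = TOOLS.get(tool_name.strip().lower(), lambda s: "unknown tool")
        observation = tool(tool_input.strip())
        history += f"Action: {reply}\nObservation: {observation}\n"
    print("Stopped after max_steps without finishing.")

run_agent("research autonomous AI agents and summarize the risks")
```

    Because each iteration both chooses and executes the next step, speed and autonomy rise together, which is the dynamic the article flags as concerning.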

    Source:
    https://venturebeat.com/ai/as-ai-agents-like-auto-gpt-speed-up-generative-ai-race-we-all-need-to-buckle-up-the-ai-beat/

    Narrated for AI Safety Fundamentals by TYPE III AUDIO.
    ---
