15 episodes

Listen to resources from the AI Safety Fundamentals: Alignment 201 course! https://course.aisafetyfundamentals.com/alignment-201

AI Safety Fundamentals: Alignment 201
BlueDot Impact

    • Technology

    Empirical Findings Generalize Surprisingly Far

    Previously, I argued that emergent phenomena in machine learning mean that we can’t rely on current trends to predict what the future of ML will be like. In this post, I will argue that despite this, empirical findings often do generalize very far, including across “phase transitions” caused by emergent behavior.


    This might seem like a contradiction, but actually I think divergence from current trends and empirical generalization are consistent. Findings do often generalize, but you need to think about what the right generalization is, and about what might stop any given generalization from holding.


    I don’t think many people would contest the claim that empirical investigation can uncover deep and generalizable truths. This is one of the big lessons of physics, and while some might attribute physics’ success to math instead of empiricism, I think it’s clear that you need empirical data to point to the right mathematics.


    However, just invoking physics isn’t a good argument, because physical laws have fundamental symmetries that we shouldn’t expect in machine learning. Moreover, we care specifically about findings that continue to hold up after some sort of emergent behavior (such as few-shot learning in the case of ML). So, to make my case, I’ll start by considering examples in deep learning that have held up in this way. Since “modern” deep learning hasn’t been around that long, I’ll also look at examples from biology, a field that has been around for a relatively long time and where More Is Different is ubiquitous (see Appendix: More Is Different In Other Domains).


    Source:
    https://bounded-regret.ghost.io/empirical-findings-generalize-surprisingly-far/


    Narrated for AI Safety Fundamentals by Perrin Walker of TYPE III AUDIO.
    ---
    A podcast by BlueDot Impact.

    Learn more on the AI Safety Fundamentals website.

    • 11 min
    Worst-Case Thinking in AI Alignment

    Alternative title: “When should you assume that what could go wrong, will go wrong?” Thanks to Mary Phuong and Ryan Greenblatt for helpful suggestions and discussion, and Akash Wasil for some edits. In discussions of AI safety, people often propose the assumption that something goes as badly as possible. Eliezer Yudkowsky in particular has argued for the importance of security mindset when thinking about AI alignment. I think there are several distinct reasons that this might be the right assumption to make in a particular situation. But I think people often conflate these reasons, and I think that this causes confusion and mistaken thinking. So I want to spell out some distinctions. Throughout this post, I give a bunch of specific arguments about AI alignment, including one argument that I think I was personally getting wrong until I noticed my mistake yesterday (which was my impetus for thinking about this topic more and then writing this post). I think I’m probably still thinking about some of my object level examples wrong, and hope that if so, commenters will point out my mistakes.
    Original text:
    https://www.lesswrong.com/posts/yTvBSFrXhZfL8vr5a/worst-case-thinking-in-ai-alignment
    Narrated for AI Safety Fundamentals by Perrin Walker of TYPE III AUDIO.
    ---
    A podcast by BlueDot Impact.

    Learn more on the AI Safety Fundamentals website.

    • 11 min
    Least-To-Most Prompting Enables Complex Reasoning in Large Language Models

    Chain-of-thought prompting has demonstrated remarkable performance on various natural language reasoning tasks. However, it tends to perform poorly on tasks that require solving problems harder than the exemplars shown in the prompts. To overcome this challenge of easy-to-hard generalization, we propose a novel prompting strategy, least-to-most prompting. The key idea in this strategy is to break down a complex problem into a series of simpler subproblems and then solve them in sequence. Solving each subproblem is facilitated by the answers to previously solved subproblems. Our experimental results on tasks related to symbolic manipulation, compositional generalization, and math reasoning reveal that least-to-most prompting is capable of generalizing to more difficult problems than those seen in the prompts. A notable finding is that when the GPT-3 code-davinci-002 model is used with least-to-most prompting, it can solve the compositional generalization benchmark SCAN in any split (including length split) with an accuracy of at least 99% using just 14 exemplars, compared to only 16% accuracy with chain-of-thought prompting. This is particularly noteworthy because neural-symbolic models in the literature that specialize in solving SCAN are trained on the entire training set containing over 15,000 examples. We have included prompts for all the tasks in the Appendix.
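
    The two-stage procedure described in the abstract is straightforward to sketch. Below is a minimal, illustrative Python implementation; the function complete stands in for whatever text-completion call is available (it is a placeholder, not a specific API), and the prompt wording is illustrative rather than the paper's exact prompts.

        def decompose(question: str, complete) -> list[str]:
            """Stage 1: ask the model to split the problem into simpler subquestions."""
            prompt = (
                "Break the following problem into simpler subquestions, "
                "one per line, ending with the original question.\n\n"
                f"Problem: {question}\nSubquestions:\n"
            )
            return [line.strip() for line in complete(prompt).splitlines() if line.strip()]

        def solve_least_to_most(question: str, complete) -> str:
            """Stage 2: solve the subquestions in order, feeding each answer back in."""
            context = f"Problem: {question}\n"
            answer = ""
            for sub in decompose(question, complete):
                context += f"\nQ: {sub}\nA:"
                answer = complete(context).strip()  # earlier answers condition later steps
                context += f" {answer}"
            return answer  # the last subquestion is the original problem itself

    The key property is that each call to complete sees the answers to all previously solved subproblems, which is what enables the easy-to-hard generalization the authors report.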


    Source:
    https://arxiv.org/abs/2205.10625


    Narrated for AI Safety Fundamentals by Perrin Walker of TYPE III AUDIO.
    ---
    A podcast by BlueDot Impact.

    Learn more on the AI Safety Fundamentals website.

    • 16 min
    Two-Turn Debate Doesn’t Help Humans Answer Hard Reading Comprehension Questions

    Using hard multiple-choice reading comprehension questions as a testbed, we assess whether presenting humans with arguments for two competing answer options, where one is correct and the other is incorrect, allows human judges to perform more accurately, even when one of the arguments is unreliable and deceptive. If this is helpful, we may be able to increase our justified trust in language-model-based systems by asking them to produce these arguments where needed. Previous research has shown that just a single turn of arguments in this format is not helpful to humans. However, as debate settings are characterized by a back-and-forth dialogue, we follow up on previous results to test whether adding a second round of counter-arguments is helpful to humans. We find that, regardless of whether they have access to arguments or not, humans perform similarly on our task. These findings suggest that, in the case of answering reading comprehension questions, debate is not a helpful format.
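
    To make the experimental conditions concrete, here is a rough sketch of what a judge sees in the no-argument, single-turn, and two-turn settings. The data layout and names are assumptions made for exposition, not the paper's materials or code.

        from dataclasses import dataclass

        @dataclass
        class DebateItem:
            question: str
            options: tuple[str, str]           # one correct and one incorrect answer
            arguments: tuple[str, str]         # turn 1: an argument for each option
            counterarguments: tuple[str, str]  # turn 2: each side rebuts the other

        def judge_view(item: DebateItem, turns: int) -> str:
            """Format what a judge sees under the 0-, 1-, or 2-turn condition."""
            lines = [f"Question: {item.question}"]
            for i, option in enumerate(item.options):
                lines.append(f"Option {i + 1}: {option}")
                if turns >= 1:
                    lines.append(f"  Argument: {item.arguments[i]}")
                if turns >= 2:
                    lines.append(f"  Counter-argument: {item.counterarguments[i]}")
            lines.append("Which option is correct?")
            return "\n".join(lines)

    The finding reported above is that judge accuracy comes out similar across these conditions, i.e. adding the second round of counter-arguments does not help.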


    Source:
    https://arxiv.org/abs/2210.10860


    Narrated for AI Safety Fundamentals by Perrin Walker of TYPE III AUDIO.
    ---
    A podcast by BlueDot Impact.

    Learn more on the AI Safety Fundamentals website.

    • 16 min
    Low-Stakes Alignment

    Right now I’m working on finding a good objective to optimize with ML, rather than trying to make sure our models are robustly optimizing that objective. (This is roughly “outer alignment.”) That’s pretty vague, and it’s not obvious whether “find a good objective” is a meaningful goal rather than being inherently confused or sweeping key distinctions under the rug. So I like to focus on a more precise special case of alignment: solve alignment when decisions are “low stakes.” I think this case effectively isolates the problem of “find a good objective” from the problem of ensuring robustness and is precise enough to focus on productively. In this post I’ll describe what I mean by the low-stakes setting, why I think it isolates this subproblem, why I want to isolate this subproblem, and why I think that it’s valuable to work on crisp subproblems. 
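
    One rough way to see why the low-stakes assumption isolates the "find a good objective" problem (this is a gloss; the post gives its own, more careful formalization): if no single decision can change total utility by more than some small epsilon, then

        U_{\text{achieved}} \;\ge\; U_{\text{ideal}} \;-\; \epsilon \cdot \#\{\, t : \text{decision } t \text{ is bad} \,\}

    so individual failures are recoverable, and it is enough for training to make bad decisions rare on average rather than impossible. Average-case performance is what optimizing a good objective can plausibly deliver, which is why robustness can be set aside in this setting.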
    Source:
    https://www.alignmentforum.org/posts/TPan9sQFuPP6jgEJo/low-stakes-alignment
    Narrated for AI Safety Fundamentals by TYPE III AUDIO.
    ---
    A podcast by BlueDot Impact.

    Learn more on the AI Safety Fundamentals website.

    • 13 min
    Imitative Generalisation (AKA ‘Learning the Prior’)

    This post tries to explain a simplified version of Paul Christiano’s mechanism introduced here (referred to there as ‘Learning the Prior’), and explain why a mechanism like this potentially addresses some of the safety problems with naïve approaches. First we’ll go through a simple example in a familiar domain, then explain the problems with the example. Then I’ll discuss the open questions for making Imitative Generalization actually work, and the connection with the Microscope AI idea. A more detailed explanation of exactly what the training objective is (with diagrams), and the correspondence with Bayesian inference, are in the appendix.
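
    As a compressed rendering of the mechanism (a paraphrase; the post's appendix gives the precise objective and diagrams): search for a human-legible hypothesis z that the human prior finds plausible and that best explains the labelled data when a human reasons with it, then answer new questions using humans, or models trained to imitate humans, conditioned on that z:

        z^{*} \;=\; \arg\max_{z}\Big[\log \mathrm{Prior}_{H}(z) \;+\; \sum_{(x,y)\in D_{\text{train}}} \log P_{H}(y \mid x, z)\Big],
        \qquad \hat{y}(x_{\text{new}}) \;=\; H(x_{\text{new}}, z^{*})

    Here H denotes human judgments, approximated in practice by models imitating them; the hope is that generalization to new inputs then inherits the human prior rather than whatever prior the neural network happens to have.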


    Source:
    https://www.alignmentforum.org/posts/JKj5Krff5oKMb8TjT/imitative-generalisation-aka-learning-the-prior-1


    Narrated for AI Safety Fundamentals by Perrin Walker of TYPE III AUDIO.
    ---
    A podcast by BlueDot Impact.

    Learn more on the AI Safety Fundamentals website.

    • 18 min
