The Nonlinear Library
The Nonlinear Fund
Education · 4.6 (7 ratings) · 1,999 episodes

The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org.

    EA - In DC, a new wave of AI lobbyists gains the upper hand by Chris Leong

    Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: In DC, a new wave of AI lobbyists gains the upper hand, published by Chris Leong on May 13, 2024 on The Effective Altruism Forum.
    "The new influence web is pushing the argument that AI is less an existential danger than a crucial business opportunity, and arguing that strict safety rules would hand America's AI edge to China. It has already caused key lawmakers to back off some of their more worried rhetoric about the technology.
    ... The effort, a loosely coordinated campaign led by tech giants IBM and Meta, includes wealthy new players in the AI lobbying space such as top chipmaker Nvidia, as well as smaller AI startups, the influential venture capital firm Andreessen Horowitz and libertarian billionaire Charles Koch.
    ... Last year, Rep. Ted Lieu (D-Calif.) declared himself "freaked out" by cutting-edge AI systems, also known as frontier models, and called for regulation to ward off several scary scenarios. Today, Lieu co-chairs the House AI Task Force and says he's unconvinced by claims that Congress must crack down on advanced AI.
    "If you just say, 'We're scared of frontier models' - okay, maybe we should be scared," Lieu told POLITICO. "But I would need something beyond that to do legislation. I would need to know what is the threat or the harm that we're trying to stop."
    ... After months of conversations with IBM and its allies, Rep. Jay Obernolte (R-Calif.), chair of the House AI Task Force, says more lawmakers are now openly questioning whether advanced AI models are really that dangerous.
    In an April interview, Obernolte called it "the wrong path" for Washington to require licenses for frontier AI. And he said skepticism of that approach seems to be spreading.
    "I think the people I serve with are much more realistic now about the fact that AI - I mean, it has very consequential negative impacts, potentially, but those do not include an army of evil robots rising up to take over the world," said Obernolte."
    Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

    • 2 min
    EA - Impact Accelerator Program for EA Professionals by High Impact Professionals

    Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Impact Accelerator Program for EA Professionals, published by High Impact Professionals on May 13, 2024 on The Effective Altruism Forum.
    High Impact Professionals is excited to announce that applications are now open for the Summer 2024 round of our Impact Accelerator Program (IAP). The IAP is a 6-week program designed to equip experienced EA-aligned professionals (not currently working at an EA organization) with the knowledge and tools necessary to make a meaningful impact and empower them to start taking actionable steps right away.
    To date, the program has been a success, with several alumni having already changed careers to new impactful roles and nearly all participants planning to do the same in the next ~6 months. IAP alumni are also volunteering an average of ~100 hours at EA-aligned orgs/projects and donating on average more than 7% of their annual salary to effective charities. We are currently running the Spring 2024 round, which features a larger number of participants and cohorts.
    We're pleased to open up this new Summer 2024 program round, which will start the week of July 15. More information is available below and here. Please apply here by May 23.
    Program Objectives
    The IAP is set up to help participants:
    identify paths to impact,
    take concrete, impactful actions, and
    join a network of like-minded, experienced, and supportive EA professionals.
    At the end of the program, a participant should have a good answer to the question "How can I have the most impact with my career, and what are my next steps?", and they should have taken the first steps in that direction.
    Program Overview
    Important Dates
    Deadline to apply: May 23
    Apply here
    Program duration: 6 weeks (week of July 15 - week of August 19, 2024)
    Format
    Weekly individual work (2-3 hours). A mix of:
    Learning: Reading resources on how to think about and prioritize different options for impact
    Doing: Taking impactful actions, such as developing your own personalized impact plan and taking concrete steps to begin implementing it
    Virtual group sessions (1.5 hours of discussions and coaching)
    Includes mastermind sessions where each member of the cohort has the opportunity to present their plans, obstacles, and uncertainties and get in-depth, tailored feedback
    1-on-1 sessions with IAP facilitators
    Extracurricular sessions: The possibility of extra sessions on topics defined by the needs of the cohort (e.g., financial considerations, networking)
    Post-program support sessions: Access additional group sessions in the months following the program to maintain momentum and continue implementing your plan
    Topics covered
    Values and mission - determine your motivations / guiding principles, strengths, and weaknesses to set a clear starting point for your journey
    Paths to impact for professionals - explore the landscape of career possibilities, address your key uncertainties, and identify your best career and impact options
    Develop an action plan - put all you've discovered into a roadmap with actionable steps to guide you to your impact goals
    Implement your plan - begin taking active steps to turn your plan into an impactful reality
    Led by: Select members of the High Impact Professionals team/community
    Why Should You Apply?
    Overcome Barriers: We'll guide you in exploring your personal obstacles to impact and assist you in taking real-world, impactful actions.
    Understand Impact: Delve into the complex nature of creating positive change and discover the diverse opportunities available to professionals.
    Develop and Implement a Solid Impact Plan: Acquire tools to assess and plan for impact, and integrate them into your own circumstances to create a personal roadmap for maximizing your positive influence. Then, begin taking concrete steps to put your plan into action.
    Connect with Like-Minded People: Embark on this j

    • 4 min
    EA - "Cool Things Our GHW Grantees Have Done in 2023" - Open Philanthropy by Lizka

    EA - "Cool Things Our GHW Grantees Have Done in 2023" - Open Philanthropy by Lizka

    Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: "Cool Things Our GHW Grantees Have Done in 2023" - Open Philanthropy, published by Lizka on May 13, 2024 on The Effective Altruism Forum.
    Open Philanthropy[1] recently shared a blog post with a list of some cool things accomplished in 2023 by grantees of their Global Health and Wellbeing (GHW) programs (including farm animal welfare). The post "aims to highlight just a few updates on what our grantees accomplished in 2023, to showcase their impact and make [OP's] work a little more tangible."
    I'm link-posting because I found it valuable to read about these projects, several of which I hadn't heard of. And I like that despite its brevity, the post manages to include a lot of relevant information (and links), along with explanations of the key relevant theories of change and opportunity.
    For people who don't want to click through to the post itself, I'm including an overview of what's included and a selection of excerpts below.
    Overview
    The post introduces each program with a little blurb, and then provides 1-2 examples of projects and one of their updates from 2023.
    Here's the table of contents:
    1. Global Public Health Policy
        1. Dr. Sachchida Tripathi (air quality sensors)
        2. Lead Exposure Elimination Project (LEEP)
    2. Global Health R&D
        1. Cures Within Reach
        2. SAVAC
    3. Scientific Research
        1. Dr. Caitlin Howell (catheters)
        2. Dr. Allan Basbaum (pain research)
    4. Land Use Reform
        1. Sightline Institute
    5. Innovation Policy
        1. Institute for Progress
        2. Institute for Replication
    6. Farm Animal Welfare
        1. Open Wing Alliance
        2. Aquaculture Stewardship Council
    7. Global Aid Policy
        1. PoliPoli
    8. Effective Altruism (Global Health and Wellbeing)
        1. Charity Entrepreneurship
    9. How you can support our grantees
    Examples/excerpts from the post
    I've chosen some examples (pretty arbitrarily - I'm really excited about many of the other examples, but wanted to limit myself here), and am including quotes from the original post.
    1.1 Dr. Sachchida Tripathi (air quality sensors)
    Sachchida Tripathi is a professor at IIT Kanpur, one of India's leading universities, where he focuses on civil engineering and sustainable energy.
    Dr. Tripathi used an Open Philanthropy grant to purchase 1,400 low-cost air quality sensors and place them in every block[2] in rural Uttar Pradesh and Bihar. Using low-cost sensors involved procuring and calibrating them (see photo).
    These sensors now provide much more accurate and reliable data for these rural areas than was previously available to the air quality community.
    This work has two main routes to impact. First, these sensors make the problem of rural air pollution legible. Because air quality in India is assumed to be a largely urban issue, most ground-based sensors are in urban areas. Second, proving the value of these low-cost sensors and getting operational experience can encourage buy-in from stakeholders (e.g., local governments) who may fund additional sensors or other air quality interventions.
    Air quality monitoring is a major theme of our South Asian Air Quality grantmaking. We are actively exploring opportunities in new geographic areas, both within and beyond India, without high-quality, ground-based monitoring. Santosh Harish, who leads our grantmaking on environmental health, recently spoke to the 80,000 Hours podcast about this grant as well as air quality in India more generally.
    2.2. SAVAC (accelerating the development and implementation of strep A vaccines)
    The Strep A Vaccine Global Consortium (SAVAC) is working to accelerate the development and implementation of safe and effective strep A vaccines.
    Open Philanthropy is one of very few funders supporting the development of a group A strep (GAS) vaccine (we've funded two projects to test new vaccines). GAS kills over 500,000 people per year, mostly by causing rheumatic heart disease.[3]
    Wh

    • 10 min
    EA - Notes on risk compensation by trammell

    Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Notes on risk compensation, published by trammell on May 12, 2024 on The Effective Altruism Forum.
    Introduction
    When a system is made safer, its users may be willing to offset at least some of the safety improvement by using it more dangerously. A seminal example is that, according to Peltzman (1975), drivers largely compensated for improvements in car safety at the time by driving more dangerously.
    The phenomenon in general is therefore sometimes known as the "Peltzman Effect", though it is more often known as "risk compensation".[1] One domain in which risk compensation has been studied relatively carefully is NASCAR (Sobel and Nesbit, 2007; Pope and Tollison, 2010), where, apparently, the evidence for a large compensation effect is especially strong.[2]
    In principle, more dangerous usage can partially, fully, or more than fully offset the extent to which the system has been made safer holding usage fixed. Making a system safer thus has an ambiguous effect on the probability of an accident, after its users change their behavior.
    There's no reason why risk compensation shouldn't apply in the existential risk domain, and we arguably have examples in which it has. For example, reinforcement learning from human feedback (RLHF) makes AI more reliable, all else equal; so it may be making some AI labs comfortable releasing more capable, and so maybe more dangerous, models than they would release otherwise.[3]
    Yet risk compensation per se appears to have gotten relatively little formal, public attention in the existential risk community so far. There has been informal discussion of the issue: e.g. risk compensation in the AI risk domain is discussed by Guest et al. (2023), who call it "the dangerous valley problem".
    There is also a cluster of papers and works in progress by Robert Trager, Allan Dafoe, Nick Emery-Xu, Mckay Jensen, and others, including these two and some not yet public but largely summarized here, exploring the issue formally in models with multiple competing firms.
    In a sense what they do goes well beyond this post, but as far as I'm aware none of their work dwells on what drives the logic of risk compensation even when there is only one firm, and it isn't designed to build intuition as simply as possible about when it should be expected to be a large or a small effect in general.
    So the goal of this post is to do that, using x-risk from AI as the running example. It also introduces some economic intuitions around risk compensation which I found helpful and have not quite seen spelled out before (though they don't differ much in spirit from Appendix B of Peltzman's original paper).
    Model
    An AI lab's preferences
    In this model, a deployed AI system either immediately causes an existential catastrophe or is safe. If it's safe, it increases the utility of the lab that deployed it. Referring to the event that it turns out to be safe as "survival", the expected utility of the lab is the product of two terms:
    EU_lab = (the probability of survival) × (the lab's utility given survival).
    That is, without loss of generality, the lab's utility level in the event of the catastrophe is denoted 0. Both terms are functions of two variables:
    some index of the resources invested in safety work, denoted S ≥ 0 ("safety work"), and
    some index of how capable the AI is and/or how widely it's deployed, denoted C ≥ 0 ("capabilities").
    Utility given survival
    Starting with the second term: we will say that the lab's utility given survival U(C)
    a1. increases continuously and unboundedly in C and
    a2. is independent of S. That is, given that survival was achieved, the lab does not care intrinsically about how much effort was put into safety.
    Under these assumptions, we can posit, without loss of generality, that
    U(C) = C + k
    for some (not necessarily positive) constant k. If k is positive, the peop
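    To make the compensation logic concrete, here is a minimal numerical sketch in Python of the one-lab setup described above. The survival-probability function p(S, C) = exp(-C / S) and the grid search are my own illustrative assumptions, not the post's specification; the post only requires that the probability of survival increase in safety work S and decrease in capabilities C.

    import numpy as np

    def survival_prob(S, C):
        # Assumed form: more safety work raises survival odds, more capability lowers them.
        return np.exp(-C / S)

    def expected_utility(S, C, k=1.0):
        # EU_lab = (probability of survival) * (utility given survival), with U(C) = C + k.
        return survival_prob(S, C) * (C + k)

    # For each level of safety work S, the lab picks the capabilities level C that
    # maximizes its expected utility; we then check what happens to P(survival).
    C_grid = np.linspace(0.0, 30.0, 30001)
    for S in [2.0, 4.0, 10.0]:
        EU = expected_utility(S, C_grid)
        C_star = C_grid[EU.argmax()]
        print(f"S={S:5.1f}  chosen C={C_star:5.2f}  P(survival)={survival_prob(S, C_star):.3f}")

    Under this assumed form, the lab responds to extra safety work by deploying so much more capability that the survival probability falls (from roughly 0.61 at S = 2 to roughly 0.41 at S = 10): compensation more than offsets the safety gain. Other assumed forms give only partial offsetting, which is exactly the ambiguity noted above.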

    • 35 min
    LW - Beware unfinished bridges by Adam Zerner

    Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Beware unfinished bridges, published by Adam Zerner on May 12, 2024 on LessWrong.
    This guy don't wanna battle, he's shook
    'Cause ain't no such things as halfway crooks
    8 Mile
    There is a commonly cited typology that divides cyclists into four groups:
    1. Strong & Fearless (will ride in car lanes)
    2. Enthused & Confident (will ride in unprotected bike lanes)
    3. Interested but Concerned (will ride in protected bike lanes)
    4. No Way No How (will only ride in paths away from cars)
    I came across this typology because I've been learning about urban design recently, and it's got me thinking. There's all sorts of push amongst urban designers for adding more and more bike lanes. But is doing so a good idea?
    Maybe. There are a lot of factors to consider. But I think a very important thing to keep in mind is thresholds.
    It will take me some time to explain what I mean by that. Let me begin with a concrete example.
    I live in northwest Portland. There is a beautiful, protected bike lane alongside Naito Parkway that is pretty close to my apartment.
    It basically runs along the west side of the Willamette River.
    Which is pretty awesome. I think of it as a "bike highway".
    But I have a problem: like the majority of people, I fall into the "Interested but Concerned" group and am only comfortable riding my bike in protected bike lanes. However, there aren't any protected bike lanes that will get me from my apartment to Naito Parkway. And there often aren't any protected bike lanes that will get me from Naito Parkway to my end destination.
    In practice I am somewhat flexible and will find ways to get to and from Naito Parkway (sidewalk, riding in the street, streetcar, bus), but for the sake of argument, let's just assume that there is no flexibility. Let's assume that as a type III "Interested but Concerned" bicyclist I have zero willingness to be flexible. During a bike trip, I will not mix modes of transportation, and I will never ride my bike in a car lane or in an unprotected bike lane.
    With this assumption, the beautiful bike lane alongside Naito Parkway provides me with zero value.[1]
    Why zero? Isn't that a bit extreme? Shouldn't we avoid black and white thinking? Surely it provides some value, right? No, no, and no.
    In our hypothetical situation where I am inflexible, the Naito Parkway bike lane provides me with zero value.
    1. I don't have a way of biking from my apartment to Naito Parkway.
    2. I don't have a way of biking from Naito Parkway to most of my destinations.
    If I don't have a way to get to or from Naito Parkway, I will never actually use it. And if I'm never actually using it, it's never providing me with any value.
    Let's take this even further. Suppose I start off at point A, Naito Parkway is point E, and my destination is point G. Suppose you built a protected bike lane that got me from point A to point B. In that scenario, the beautiful bike lane alongside Naito Parkway would still provide me with zero value.
    Why? I still have no way of accessing it. I can now get from point A to point B, but I still can't get from point B to point C, point C to point D, D to E, E to F, or F to G. I only receive value once I have a way of moving between each of the six sets of points:
    1. A to B
    2. B to C
    3. C to D
    4. D to E
    5. E to F
    6. F to G
    There is a threshold.
    If I can move between zero pairs of those points I receive zero value.
    If I can move between one pair of those points I receive zero value.
    If I can move between two pairs of those points I receive zero value.
    If I can move between three pairs of those points I receive zero value.
    If I can move between four pairs of those points I receive zero value.
    If I can move between five pairs of those points I receive zero value.
    If I can move between six pairs of those points I receive positive value.
    I only receiv
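    To make the threshold explicit, here is a toy Python sketch (my own illustration, not from the post) of the all-or-nothing value function for a fully inflexible rider:

    SEGMENTS = ["A-B", "B-C", "C-D", "D-E", "E-F", "F-G"]

    def trip_value(built):
        # All-or-nothing: the A-to-G route is worth something only if every segment is rideable.
        return 1 if all(seg in built for seg in SEGMENTS) else 0

    built = set()
    for seg in SEGMENTS:
        built.add(seg)
        print(f"{len(built)} segment(s) built -> value {trip_value(built)}")
    # Prints value 0 after 1-5 segments and value 1 only once all six exist:
    # a step function, not a gradual ramp.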

    • 5 min
    LW - Questions are usually too cheap by Nathan Young

    Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Questions are usually too cheap, published by Nathan Young on May 12, 2024 on LessWrong.
    It is easier to ask than to answer.
    That's my whole point.
    It is much cheaper to ask questions than answer them so beware of situations where it is implied that asking and answering are equal.
    Here are some examples:
    Let's say there is a maths game. I get a minute to ask questions. You get a minute to answer them. If you answer them all correctly, you win; if not, I do. Who will win?
    Preregister your answer.
    Okay, let's try. These questions took me roughly a minute to come up with.
    What's 56,789 * 45,387?
    What's the integral from -6 to 5π of sin(x cos^2(x))/tan(x^9) dx?
    What's the prime factorisation of 91435293173907507525437560876902107167279548147799415693153?
    Good luck. If I understand correctly, that last one's gonna take you at least an hour[1] (or however long it takes to threaten me).
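    As a rough sketch of how lopsided this is, consider factoring. The Python snippet below is my own illustration (it assumes the sympy library for cheap prime generation): it builds a question in milliseconds that naive trial division cannot answer in any reasonable time.

    import sympy  # assumed dependency, used only to generate primes cheaply

    def cheap_question(bits=96):
        # Asking is cheap: multiply two random primes (milliseconds).
        p = sympy.randprime(2**(bits - 1), 2**bits)
        q = sympy.randprime(2**(bits - 1), 2**bits)
        return p * q  # "What's the prime factorisation of this?"

    def expensive_answer(n):
        # Answering is expensive: naive trial division up to sqrt(n).
        if n % 2 == 0:
            return 2, n // 2
        f = 3
        while f * f <= n:
            if n % f == 0:
                return f, n // f
            f += 2
        return n, 1

    n = cheap_question()
    print(f"Question (instant to ask): what's the prime factorisation of {n}?")
    # expensive_answer(n) would grind far longer than anyone's patience at this size,
    # and even good general-purpose factoring algorithms spend vastly more compute
    # answering than the asker spent posing the question.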
    Perhaps you hate maths. Let's do word problems then.
    Define the following words "antidisestablishmentarianism", "equatorial", "sanguine", "sanguinary", "escapology", "eschatology", "antediluvian", "crepuscular", "red", "meter", all the meanings of "do", and "fish".
    I don't think anyone could do this without assistance. I tried it with Claude, which plausibly still failed[2] the "fish" question, though we'll return to that.
    I could do this for almost anything:
    Questions on any topic
    Certain types of procedural puzzles
    Asking for complicated explanations (we'll revisit later)
    Forecasting questions
    This is the centre of my argument
    I see many situations where questions and answers are treated as symmetric. This is rarely the case. Instead, it is much more expensive to answer than to ask.
    Let's try and find some counter examples. A calculator can solve allowable questions faster than you can type them in. A dictionary can provide allowable definitions faster than you can look them up. An LLM can sometimes answer some types of questions more cheaply in terms of inference costs than your time was worth in coming up with them.
    But then I just have to ask different questions. Calculators and dictionaries are often limited. And even the best calculation programs can't solve prime factorisation questions more cheaply than I can write them. Likewise I could create LLM prompts that are very expensive for the best LLMs to answer well, eg "write a 10,000 word story about an [animal] who experiences [emotion] in a [location]."
    How this plays out
    Let's go back to our game.
    Imagine you are sitting around and I turn up and demand to play the "answering game". Perhaps I reference your reputation. You call yourself a 'person who knows things', surely you can answer my questions? No? Are you a coward? Looks like you are wrong!
    And now you either have to spend your time answering or suffer some kind of social cost and allow me to say "I asked him questions but he never answered". And whatever happens, you are distracted from what you were doing. Whether you were setting up an organisation or making a speech or just trying to have a nice day, now you have to focus on me. That's costly.
    This seems like a common bad feature of discourse - someone asking questions cheaply and implying that the person answering them (or who is unable to) should do so just as cheaply and so it is fair. Here are some examples of this:
    Internet debates are weaponised cheap questions. Whoever speaks first in many debates often gets to frame the discussion and ask a load of questions and then when inevitably they aren't answered, the implication is that the first speaker is right[3]. I don't follow American school debate closely, but I sense it is even more of this, with people literally learning to speak faster so their opponents can't process their points quickly enough to respond to them.
    Emails. Normally they exist within a framework of

    • 10 min
