1,258 episodes

The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org

The Nonlinear Library: EA Forum The Nonlinear Fund

    • Education

    EA - Design changes and the community section (Forum update March 2023) by Lizka

    Link to original article

    Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Design changes & the community section (Forum update March 2023), published by Lizka on March 21, 2023 on The Effective Altruism Forum.
    We’re sharing the results of the Community-Frontpage test, and we’ve released a Forum redesign — I discuss it below. I also outline some things we’re thinking about right now.
    As always, we’re also interested in feedback on these changes. We’d be really grateful if you filled out this (very quick) survey on the redesign that might help give us a sense of what people are thinking. You can also comment on this post with your thoughts or reach out to forum@centreforeffectivealtruism.org.
    Results of the Community-Frontpage test & more thoughts on community posts
    A little over a month ago, we announced a test: we’d be trying out separating “Community” posts from other kinds by creating a “Community” section on the Frontpage of the Forum.
    We’ve gotten a lot of feedback; we believe that the change was an improvement, so we’re planning on keeping it for the near future, with some modifications. We might still make some changes like switching from a section to tabs, especially depending on new feedback and on how related projects go.
    Outcomes
    Information we gathered
    We sourced user feedback from different places:
    User interviews with people at EA Global and elsewhere (at least 20 interviews, different people doing the interviewing)
    Responses to a quick survey on how we can improve discussions on the Forum (45 responses)
    Metrics (mostly used as sanity checks)
    Engagement with the Forum overall (engagement on the Forum is 7% lower than the previous month, which is within the bounds we set ourselves and there’s a lot of fluctuation, so we’re just going to keep monitoring this)
    Engagement with Community posts (it dropped 8%, which may just be tracking overall engagement, and again, we’re going to keep monitoring it)
    Whether there are still important & useful Community posts every week (subjective assessment; there are)
    The team’s experience with the section, and whether we thought the change was positive overall
    Outcomes and themes:
    The responses we got were overwhelmingly positive about the change. People told us directly (in user interviews and in passing) that the change was improving their experience on the Forum. We also personally thought that the change had gone very well — likely better than we’d expected as a ~70% best outcome.
    And here are the results from the survey:
    The metrics we're tracking (listed above) were within the bounds we’d set, and we were mostly using them as sanity checks.
    There were, of course, some concerns, and critical or constructive feedback.
    Confusion about what “Community” means
    Not everyone was clear on which posts should actually go in the section; the outline I gave before was unclear. I’ve updated the guidance I had originally given to Forum facilitators and moderators (based on their feedback and just sitting down and trying to get a more systematic categorization), and I’m sharing the updated version here.
    Concerns that important conversations would be missed
    Some people expressed a worry that having a section like this would hide discussions that the community needs to have, like processing the FTX collapse and what we should learn from it, or how we can create a more welcoming environment for different groups of people. We were also pretty worried about this; I think this was the thing that I thought was most likely going to get us to reverse the change.
    However, the worry doesn’t seem to be materializing. It looks like engagement hasn’t fallen significantly on Community posts relative to other posts, and important conversations have been continuing. Some recent posts on difficult community topics have had lots of comments (the discussion of

    • 12 min
    EA - Future Matters #8: Bing Chat, AI labs on safety, and pausing Future Matters by Pablo

    Link to original article

    Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Future Matters #8: Bing Chat, AI labs on safety, and pausing Future Matters, published by Pablo on March 21, 2023 on The Effective Altruism Forum.
    Future Matters is a newsletter about longtermism and existential risk. Each month we collect and summarize relevant research and news from the community, and feature a conversation with a prominent researcher. You can also subscribe on Substack, listen on your favorite podcast platform and follow on Twitter. Future Matters is also available in Spanish.
    A message to our readers
    This issue marks one year since we started Future Matters. We’re taking this opportunity to reflect on the project and decide where to take it from here. We’ll soon share our thoughts about the future of the newsletter in a separate post, and will invite input from readers. In the meantime, we will be pausing new issues of Future Matters. Thank you for your support and readership over the last year!
    Featured research
    All things Bing
    Microsoft recently announced a significant partnership with OpenAI [see FM#7] and launched a beta version of a chatbot integrated with the Bing search engine. Reports of strange behavior quickly emerged. Kevin Roose, a technology columnist for the New York Times, had a disturbing conversation in which Bing Chat declared its love for him and described violent fantasies. Evan Hubinger collects some of the most egregious examples in Bing Chat is blatantly, aggressively misaligned. In one instance, Bing Chat finds a user’s tweets about the chatbot and threatens to exact revenge. In the LessWrong comments, Gwern speculates on why Bing Chat exhibits such different behavior to ChatGPT, despite apparently being based on a closely-related model. (Bing Chat was subsequently revealed to have been based on GPT-4).
    Holden Karnofsky asks What does Bing Chat tell us about AI risk? His answer is that it is not the sort of misaligned AI system we should be particularly worried about. When Bing Chat talks about plans to blackmail people or commit acts of violence, this isn’t evidence of it having developed malign, dangerous goals. Instead, it’s best understood as Bing acting out stories and characters it’s read before. This whole affair, however, is evidence of companies racing to deploy ever more powerful models in a bid to capture market share, with very little understanding of how they work and how they might fail. Most paths to AI catastrophe involve two elements: a powerful and dangerously misaligned AI system, and an AI company that builds and deploys it anyway. The Bing Chat affair doesn’t reveal much about the first element, but is a concerning reminder of how plausible the second is.
    Robert Long asks What to think when a language model tells you it's sentient. When trying to infer what’s going on in other humans’ minds, we generally take their self-reports (e.g. saying “I am in pain”) as good evidence of their internal states. However, we shouldn’t take Bing Chat’s attestations (e.g. “I feel scared”) at face value; we have no good reason to think that they are a reliable guide to Bing’s inner mental life. LLMs are a bit like parrots: if a parrot says “I am sentient” then this isn’t good evidence that it is sentient. But nor is it good evidence that it isn’t — in fact, we have lots of other evidence that parrots are sentient. Whether current or future AI systems are sentient is a valid and important question, and Long is hopeful that we can make real progress on developing reliable techniques for getting evidence on these matters.
    Long was interviewed on AI consciousness, along with Nick Bostrom and David Chalmers, for Kevin Collier’s article, What is consciousness? ChatGPT and Advanced AI might define our answer.
    How the major AI labs are thinking about sa

    • 37 min
    EA - Where I'm at with AI risk: convinced of danger but not (yet) of doom by Amber Dawn

    Link to original article

    Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Where I'm at with AI risk: convinced of danger but not (yet) of doom, published by Amber Dawn on March 21, 2023 on The Effective Altruism Forum.
    [content: discussing AI doom. I'm sceptical about AI doom, but if dwelling on this is anxiety-inducing for you, consider skipping this post]
    I’m a cause-agnostic (or more accurately ‘cause-confused’) EA with a non-technical background. A lot of my friends and writing clients are extremely worried about existential risks from AI. Many believe that humanity is more likely than not to go extinct due to AI within my lifetime.
    I realised that I was confused about this, so I set myself the goal of understanding the case for AI doom, and my own scepticisms, better. I did this by (very limited!) reading, writing down my thoughts, and talking to friends and strangers (some of whom I recruited from the Bountied Rationality Facebook group - if any of you are reading, thanks again!). Tl;dr: I think there are good reasons to worry about extremely powerful AI, but I don’t yet understand why people think superintelligent AI is highly likely to end up killing everyone by default.
    Why I'm writing this
    I’m writing up my current beliefs and confusions in the hope that readers will be able to correct my misconceptions, clarify things I’m confused about, and link me to helpful resources. I also personally enjoy reading other EAs’ reflections about cause areas: e.g. Saulius' post on wild animal welfare, or Nuño's sceptical post about AI risk. This post is far less well-informed, but I found those posts valuable because of their reasoning transparency more than their authors' expertise. I'd love to read more posts by ‘layperson’ EAs talking about their personal cause prioritisation.
    I also think that 'confusion' is an underrepresented intellectual position. At EAGx Cambridge, Yulia Ponomarenko led a great workshop on ‘Asking daft questions with confidence’. We talked about how EAs are sometimes unwilling to ask questions that would make them less confused for fear that the questions are too basic, silly, “dumb”, or about something they're already expected to know.
    This could create a false appearance of consensus about cause areas or world models. People who are convinced by the case for AI risk will naturally be very vocal, as will those who are confidently sceptical. However, people who are unsure or confused may be unwilling to share their thoughts, either because they're afraid that others will look down on them for not already understanding the case, or just because most people are less motivated to write about their vague confusions than their strong opinions. So I’m partly writing this to represent the ‘generally unsure’ point of view.
    Some caveats: there’s a lot I haven’t read, including many basic resources. And my understanding of the technical side of AI (maths, programming) is extremely limited. Technical friends often say ‘you don’t need to understand the technical details about AI to understand the arguments for x-risk from AI’. But when I talk and think about these questions, it subjectively feels like I run up against a lack of technical understanding quite often.
    Where I’m at with AI safety
    Tl;dr: I'm concerned about certain risks from misaligned or misused AI, but I don’t understand the arguments that AI will, by default and in the absence of a specific alignment technique, be so misaligned as to cause human extinction (or something similarly bad).
    Convincing (to me) arguments for why AI could be dangerous
    Humans could use AI to do bad things more effectively
    For example, politicians could use AI to make devastating war on their enemies, or CEOs could use it to increase their profits in harmful or reckless ways. This seems like a good reason to regulate

    • 10 min
    EA - Estimation for sanity checks by NunoSempere

    Link to original article

    Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Estimation for sanity checks, published by NunoSempere on March 21, 2023 on The Effective Altruism Forum.
    I feel very warmly about using relatively quick estimates to carry out sanity checks, i.e., to quickly check whether something is clearly off, whether some decision is clearly overdetermined, or whether someone is just bullshitting. This is in contrast to Fermi estimates, which aim to arrive at an estimate for a quantity of interest, and which I also feel warmly about but which aren’t the subject of this post. In this post, I explain why I like quantitative sanity checks so much, and I give some examples.
    Why I like this so much
    I like this so much because:
    It is very defensible. There are some cached arguments against more quantified estimation, but sanity checking cuts through most—if not all—of them. “Oh, well, I just think that estimation has some really nice benefits in terms of sanity checking and catching b******t, and in particular in terms of defending against scope insensitivity. And I think we are not even at the point where we are deploying enough estimation to catch all the mistakes that would be obvious in hindsight after we did some estimation” is both something I believe and also just a really nice motte to retreat to when I am tired, don’t feel like defending a more ambitious estimation agenda, or don’t want to alienate someone socially by having an argument.
    It can be very cheap: a few minutes, a few Google searches. This means that you can practice quickly and build intuitions.
    They are useful, as we will see below.
    Some examples
    Here are a few examples where I’ve found estimation to be useful for sanity-checking. I mention these because I think that the theoretical answer becomes stronger when paired with a few examples which display that dynamic in real life.
    Photo Patch Foundation
    The Photo Patch Foundation is an organization which has received a small amount of funding from Open Philanthropy:
    Photo Patch has a website and an app that allows kids with incarcerated parents to send letters and pictures to their parents in prison for free. This diminishes barriers, helps families remain in touch, and reduces the number of children who have not communicated with their parents in weeks, months, or sometimes years.
    It takes little digging to figure out that their costs are $2.5/photo. If we take the AMF numbers at all seriously, it seems very likely that this is not a good deal. For example, for $2.5 you can deworm several kids in developing countries, or buy a bit more than one malaria net. Or, less intuitively, trading 0.05% chance of saving a statistical life for sending a photo to a prisoner seems like a pretty bad trade–0.05% of a statistical life corresponds to 0.05/100 × 70 years × 365 days/year ≈ 13 statistical days.
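    As a quick check on the arithmetic above, here is a minimal sketch in Python. The ~$5,000-per-life figure is my own assumption (an AMF-style ballpark implied by $2.5 being 0.05% of it), not a number stated in the post.
        # Minimal sanity-check sketch (assumed inputs, not the author's exact figures).
        cost_per_photo = 2.50        # dollars, per the Photo Patch figure above
        cost_per_life_saved = 5_000  # dollars, assumed AMF-style benchmark
        life_years = 70              # assumed length of a statistical life

        fraction_of_life = cost_per_photo / cost_per_life_saved   # 0.0005, i.e. 0.05%
        statistical_days = fraction_of_life * life_years * 365    # about 12.8 days

        print(f"{fraction_of_life:.2%} of a statistical life is about {statistical_days:.1f} days")
        # -> 0.05% of a statistical life is about 12.8 days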
    One can then do somewhat more elaborate estimations about criminal justice reform.
    Sanity-checking that supply chain accountability has enough scale
    At some point in the past, I looked into supply chain accountability, a cause area related to improving how multinational corporations treat labor. One quick sanity check is, well, how many people does this affect? You can check, and per here [1], Inditex—a retailer which owns brands like Zara, Pull&Bear, Massimo Dutti, etc.—employed 3M people in its supply chain, as of 2021.
    So the scale is large enough that this may warrant further analysis. Once this simple sanity check is passed, one can then go on and do some more complex estimation about how cost-effective improving supply chain accountability is, like here.
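    To illustrate the kind of follow-up estimate mentioned here, below is a hedged back-of-the-envelope sketch. Only the 3M supply-chain figure comes from the post; the budget, probability of success, and per-worker benefit are hypothetical placeholders.
        # Illustrative back-of-the-envelope estimate (all inputs except the 3M figure are hypothetical).
        workers_in_supply_chain = 3_000_000  # Inditex supply chain, per the post
        annual_program_cost = 1_000_000      # hypothetical campaign budget, dollars
        chance_of_success = 0.10             # hypothetical probability the campaign changes practices
        benefit_per_worker_per_year = 20     # hypothetical dollar-equivalent gain per worker per year

        expected_benefit = workers_in_supply_chain * benefit_per_worker_per_year * chance_of_success
        ratio = expected_benefit / annual_program_cost  # about 6x under these made-up inputs

        print(f"Expected benefit: ${expected_benefit:,.0f}/year; benefit/cost ratio: {ratio:.1f}x")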
    Sanity checking the cost-effectiveness of the EA Wiki
    In my analysis of the EA Wiki, I calculated how much the person behind the EA Wiki was being paid per word, and found that it was in the ballpark of other industries. If it had been

    • 6 min
    EA - My Objections to "We’re All Gonna Die with Eliezer Yudkowsky" by Quintin Pope

    Link to original article

    Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: My Objections to "We’re All Gonna Die with Eliezer Yudkowsky", published by Quintin Pope on March 21, 2023 on The Effective Altruism Forum.
    Note: manually cross-posted from LessWrong. See here for discussion on LW.
    Introduction
    I recently watched Eliezer Yudkowsky's appearance on the Bankless podcast, where he argued that AI was nigh-certain to end humanity. Since the podcast, some commentators have offered pushback against the doom conclusion. However, one sentiment I saw was that optimists tended not to engage with the specific arguments pessimists like Yudkowsky offered.
    Economist Robin Hanson points out that this pattern is very common for small groups which hold counterintuitive beliefs: insiders develop their own internal language, which skeptical outsiders usually don't bother to learn. Outsiders then make objections that focus on broad arguments against the belief's plausibility, rather than objections that focus on specific insider arguments.
    As an AI "alignment insider" whose current estimate of doom is around 5%, I wrote this post to explain some of my many objections to Yudkowsky's specific arguments. I've split this post into chronologically ordered segments of the podcast in which Yudkowsky makes one or more claims with which I particularly disagree. All bulleted points correspond to specific claims by Yudkowsky, and I follow each bullet point with text that explains my objections to the claims in question.
    I have my own view of alignment research: shard theory, which focuses on understanding how human values form, and on how we might guide a similar process of value formation in AI systems.
    I think that human value formation is not that complex, and does not rely on principles very different from those which underlie the current deep learning paradigm. Most of the arguments you're about to see from me are less:
    I think I know of a fundamentally new paradigm that can fix the issues Yudkowsky is pointing at.
    and more:
    Here's why I don't agree with Yudkowsky's arguments that alignment is impossible in the current paradigm.
    My objections
    Will current approaches scale to AGI?
    Yudkowsky apparently thinks not, and that the techniques driving current state of the art advances, by which I think he means the mix of generative pretraining + small amounts of reinforcement learning such as with ChatGPT, aren't reliable enough for significant economic contributions. However, he also thinks that the current influx of money might stumble upon something that does work really well, which will end the world shortly thereafter.
    I'm a lot more bullish on the current paradigm. People have tried lots and lots of approaches to getting good performance out of computers, including lots of "scary seeming" approaches such as:
    Meta-learning over training processes. I.e., using gradient descent over learning curves, directly optimizing neural networks to learn more quickly.
    Teaching neural networks to directly modify themselves by giving them edit access to their own weights.
    Training learned optimizers - neural networks that learn to optimize other neural networks - and having those learned optimizers optimize themselves.
    Using program search to find more efficient optimizers.
    Using simulated evolution to find more efficient architectures.
    Using efficient second-order corrections to gradient descent's approximate optimization process.
    Applying biologically plausible optimization algorithms inspired by biological neurons to training neural networks.
    Adding learned internal optimizers (different from the ones hypothesized in Risks from Learned Optimization) as neural network layers.
    Having language models rewrite their own training data, and improve the quality of that training data, to make themselves better at a given task.
    Having language models

    • 52 min
    EA - Forecasts on Moore v Harper from Samotsvety by gregjustice

    Link to original article

    Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Forecasts on Moore v Harper from Samotsvety, published by gregjustice on March 20, 2023 on The Effective Altruism Forum.
    [edited to include full text]
    Disclaimers
    The probabilities listed are contingent on SCOTUS issuing a ruling on this case. An updated numerical forecast on that happening, particularly in light of the NC Supreme Court’s decision to rehear Harper v Hall, may be forthcoming.
    The author of this report, Greg Justice, is an excellent forecaster, not a lawyer. This post should not be interpreted as legal advice. This writeup is still in progress, and the author is looking for a good venue to publish it in.
    You can subscribe to these posts here.
    Introduction
    The Moore v. Harper case before SCOTUS asks to what degree state courts can interfere with state legislatures in the drawing of congressional district maps. Versions of the legal theory they’re being asked to rule on were invoked as part of the attempts to overthrow the 2020 election, leading to widespread media coverage of the case. The ruling here will have implications for myriad state-level efforts to curb partisan gerrymandering.
    Below, we first discuss the Independent State Legislature theory and Moore v. Harper. We then offer a survey of how the justices have ruled in related cases, what some notable conservative sources have written, and what the justices said in oral arguments. Finally, we offer our own thoughts about some potential outcomes of this case and their consequences for the future.
    Background
    What is the independent state legislature theory?
    Independent State Legislature theory or doctrine (ISL) generally holds that state legislatures have unique power to determine the rules around elections. There is a range of views that fall under the term ISL, from the idea that state courts' freedom to interpret legislation is more limited than it is with other laws, to the idea that state courts and other state bodies lack any authority on issues of federal election law altogether. However, “[t]hese possible corollaries of the doctrine are largely independent of each other, supported by somewhat different lines of reasoning and authority. Although these theories arise from the same constitutional principle, each may be assessed separately from the others; the doctrine need not be accepted or repudiated wholesale.” [1]
    The doctrine is rooted in a narrow reading of Article I Section 4 Clause 1 (the Elections Clause) of the Constitution, which states, “The Times, Places and Manner of holding Elections for Senators and Representatives, shall be prescribed in each State by the Legislature thereof.” [2] According to the Brennan Center, this interpretation is at odds with a more traditional reading:
    The dispute hinges on how to understand the word “legislature.” The long-running understanding is that it refers to each state’s general lawmaking processes, including all the normal procedures and limitations. So if a state constitution subjects legislation to being blocked by a governor’s veto or citizen referendum, election laws can be blocked via the same means. And state courts must ensure that laws for federal elections, like all laws, comply with their state constitutions.
    Proponents of the independent state legislature theory reject this traditional reading, insisting that these clauses give state legislatures exclusive and near-absolute power to regulate federal elections. The result? When it comes to federal elections, legislators would be free to violate the state constitution and state courts couldn’t stop them.
    Extreme versions of the theory would block legislatures from delegating their authority to officials like governors, secretaries of state, or election commissioners, who currently play important roles in administering elections. [3]
    The

    • 49 min
