1,999 episodes

The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org

The Nonlinear Library
The Nonlinear Fund

    • Education
    • 4.6 • 7 Ratings


    EA - Shallow overview of institutional plant-based meal campaigns in the US & Western Europe by Neil Dullaghan

    Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Shallow overview of institutional plant-based meal campaigns in the US & Western Europe, published by Neil Dullaghan on May 9, 2024 on The Effective Altruism Forum.
    This Rethink Priorities report provides a shallow overview of the potential for impactful opportunities from institutional plant-based meal campaigns in the US, France, Germany, UK, Spain, and Italy based on reviewing existing research and speaking with organizations conducting such campaigns.
    Shallow overviews are mainly intended as a quick low-confidence writeup for internal audiences and are not optimized for public consumption. The views expressed herein are not necessarily endorsed by the organizations that were interviewed.
    Main takeaways from the report include:
    Emphasize reducing all animal products in order to avoid substitution from beef & lamb to chicken, seafood, & eggs, which require more animals to be harmed.
    [Confidence: Medium-High. There are many examples of programs that have had this problem (Hughes, 2020, 2:12:50; Gravert & Kurz 2021; Lagasse & Neff 2010). There are some examples of the problem being mitigated (Jalil et al. 2023; Cool Food Pledge 2022, 2021), but we don't yet have a systematic review and meta-analysis on which policies have the best and worst of these effects.]
    Most large schools & universities in the US, France, & Germany offer regular meatless meal options, reducing the scope for impact at scale there from further similar changes.
    [Confidence: High. We spent more than 40 hours reviewing policies at the largest institutions. While confidence could be increased by reaching out directly to institutions and verifying, the second-hand sources we used seem trustworthy.]
    More studies are needed to confirm the scale of the potential opportunities for meatless meal campaigns in Italy, Spain, and the UK, where existing options are more limited.
    [Confidence: Medium-Low. It appears that classroom offerings of meatless meals in Italy, Spain, and the UK are far less widespread. However, we spent less time researching these countries, due to external constraints, so there's potential that we missed important information that would reduce the potential scale of impact here. We think replicating studies like Essere Animali 2024 & Ottonova 2022 would shed light on this.]
    There may be cost-competitive opportunities in Europe, but it's likely they are relatively few and hit diminishing returns quickly.
    [Confidence: Medium. The most rigorous study of such campaigns in the US is 4-5 years old, but it indicates a cost-effectiveness of 0.4-2.5 animals spared per $ spent (without accounting for impacts on dairy, eggs, & shrimp) and that the campaign quickly stopped getting large wins at low cost.
    Our rough BOTECs of a sample of current campaigns in the US and large Western European countries estimated their campaigns' impact to range from 1.5-18 animals spared per $ spent (including dairy, eggs, & shrimp). However, these estimates are likely biased towards more positive examples and exclude costs needed to maintain policies over time, so they shouldn't be taken as the average expected impact.]
    Campaigns for stronger changes (like plant-based defaults and large % reduction targets) are not yet targeting and winning large-scale opportunities.
    [Confidence: High. We did not find evidence of successful campaigns at the scale that has been achieved for daily meatless option campaigns. The largest success of this kind we know of is a plant-based default in NYC hospitals, but many campaigns are focused opportunistically where receptive contacts exist and on smaller targets due to a view that tractability is lower.]
    Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

    • 3 min
    EA - AI stocks could crash. And this could have implications for AI safety. by Benjamin Todd

    Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI stocks could crash. And this could have implications for AI safety., published by Benjamin Todd on May 9, 2024 on The Effective Altruism Forum.
    Just as the 2022 crypto crash had many downstream effects for effective altruism, so could a future crash in AI stocks have several negative (though hopefully less severe) effects on AI safety.
    Why might AI stocks crash?
    The most obvious reason AI stocks might crash is that stocks often crash.
    Nvidia's price fell 60% in 2022 alone, along with the prices of other AI companies. It also fell more than 50% in 2020 at the start of the COVID outbreak, and in 2018. So, we should expect there's a good chance it falls 50% again in the coming years.
    Nvidia's volatility is about 60%, which means - even assuming efficient markets - it has about a 15% chance of falling more than 50% in a year.1
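    To see roughly where a figure like that comes from, here is a back-of-the-envelope sketch (not the author's own calculation): if annual log returns are approximately normal with about 60% volatility, the chance of a greater-than-50% fall in a year lands in the low-to-mid teens, depending on what drift you assume.

    from math import erf, log, sqrt

    def p_yearly_drop(vol: float, drop: float, drift: float = 0.0) -> float:
        """P(price falls by more than `drop` over one year), assuming
        log returns ~ Normal(drift, vol^2). All inputs here are assumptions."""
        z = (log(1.0 - drop) - drift) / vol
        return 0.5 * (1.0 + erf(z / sqrt(2.0)))  # standard normal CDF

    # ~60% annualized volatility, 50% drawdown threshold, zero drift
    print(round(p_yearly_drop(vol=0.60, drop=0.50), 3))  # ~0.12; modest drift assumptions move this toward ~0.15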
    And more speculatively, booms and busts seem more likely for stocks that have gone up a ton, and when new technologies are being introduced.
    That's what we saw with the introduction of the internet and the dot com bubble, as well as with crypto.2
    (Here are two attempts to construct economic models for why. This phenomenon also seems related to the existence of momentum in financial prices, as well as bubbles in general.)
    Further, as I argued, current spending on AI chips requires revenues from AI software to reach hundreds of billions within a couple of years, and (at current trends) approach a trillion by 2030. There's plenty of scope to not hit that trajectory, which could cause a sell off.
    Note the question isn't just whether the current and next generation of AI models are useful (they definitely are), but rather:
    Are they so useful their value can be measured in the trillions?
    Do they have a viable business model that lets them capture enough of that value?
    Will they get there fast enough relative to market expectations?
    My own take is that the market is still underpricing the long-term impact of AI (which is why about half my equity exposure is in AI companies, especially chip makers), and I also think it's quite plausible that AI software will be generating more than a trillion dollars of revenue by 2030.
    But it also seems like there's a good chance that short-term deployment isn't this fast, and the market gets disappointed on the way. If AI revenues merely failed to double in a year, that could be enough to prompt a sell off.
    I think this could happen even if capabilities keep advancing (e.g. maybe because real-world deployment is slow), though a slowdown in AI capabilities and a new "AI winter" would also most likely cause a crash.
    A crash could also be caused by a broader economic recession, a rise in interest rates, or anything that causes investors to become more risk-averse, like a crash elsewhere in the market or a geopolitical issue.
    The end of a stock bubble often has no obvious trigger. At some point, the stock of buyers gets depleted, prices start to move down, and that causes others to sell, and so on.
    Why does this matter?
    A crash in AI stocks could cause a modest lengthening of AI timelines, by reducing investment capital. For example, startups that aren't yet generating revenue could find it hard to raise from VCs and fail.
    A crash in AI stocks (depending on its cause) might also tell us that market expectations for the near-term deployment of AI have declined.
    This means it's important to take the possibility of a crash into account when forecasting AI, and in particular to be cautious about extrapolating growth rates in investment from the last year or so indefinitely forward.
    Perhaps more importantly, just like the 2022 crypto crash, an AI crash could have implications for people working on AI safety.
    First, the wealth of many donors to AI safety is pretty correlated with AI stocks. For instance as far as I can tell Good Ventures sti

    • 6 min
    AF - Visualizing neural network planning by Nevan Wichers

    Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Visualizing neural network planning, published by Nevan Wichers on May 9, 2024 on The AI Alignment Forum.
    TLDR
    We develop a technique to try and detect if a NN is doing planning internally. We apply the decoder to the intermediate representations of the network to see if it's representing the states it's planning through internally. We successfully reveal intermediate states in a simple Game of Life model, but find no evidence of planning in an AlphaZero chess model.
    We think the idea won't work in its current state for real world NNs because they use higher-level, abstract representations for planning that our current technique cannot decode. Please comment if you have ideas that may work for detecting more abstract ways the NN could be planning.
    Idea and motivation
    To make ML safe, it's important to know whether the network is performing mesa optimization, and if so, what optimization process it's using. In this post, I'll focus on a particular form of mesa optimization: internal planning. This involves the model searching through possible future states and selecting the ones that best satisfy an internal goal. If the network is doing internal planning, then it's important that the goal it's planning for is aligned with human values.
    An interpretability technique which could identify what states it's searching through would be very useful for safety. If the NN is doing planning it might represent the states it's considering in that plan. For example, if predicting the next move in chess, it may represent possible moves it's considering in its hidden representations.
    We assume that the NN is given a representation of the environment as input and that the first layer of the NN encodes this information into a hidden representation. Then the network has hidden layers and finally a decoder to compute the final output. The encoder and decoder are trained as an autoencoder, so the decoder can reconstruct the environment state from the encoder output. Language models are an example of this, where the encoder is the embedding lookup.
    Our hypothesis is that the NN may use the same representation format for states it's considering in its plan as it does for the encoder's output. Our idea is to apply the decoder to the hidden representations at different layers to decode them. If our hypothesis is correct, this will recover the states it considers in its plan. This is similar to the Logit Lens for LLMs, but we're applying it here to investigate mesa-optimization.
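    As a minimal sketch of that idea (hypothetical PyTorch code with made-up module names, not the authors' implementation): run the model, keep the hidden representation at each layer, and pass each one through the trained decoder to see whether it reconstructs a plausible environment state.

    import torch
    import torch.nn as nn

    class ToyPlanner(nn.Module):
        """Toy stand-in for the setup described above: encoder -> hidden layers -> decoder."""
        def __init__(self, state_dim: int = 64, hidden_dim: int = 128, n_layers: int = 4):
            super().__init__()
            self.encoder = nn.Linear(state_dim, hidden_dim)
            self.layers = nn.ModuleList(
                [nn.Sequential(nn.Linear(hidden_dim, hidden_dim), nn.ReLU()) for _ in range(n_layers)]
            )
            # Trained jointly with the encoder as an autoencoder, so it maps
            # encoder-style representations back to environment states.
            self.decoder = nn.Linear(hidden_dim, state_dim)

        def forward(self, state):
            h = self.encoder(state)
            hiddens = []
            for layer in self.layers:
                h = layer(h)
                hiddens.append(h)
            return self.decoder(h), hiddens

    model = ToyPlanner()
    output, hiddens = model(torch.randn(1, 64))

    # Logit-lens-style probe: decode every intermediate representation and inspect
    # whether any of them resemble states the model might be considering in a plan.
    with torch.no_grad():
        decoded_per_layer = [model.decoder(h) for h in hiddens]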
    A potential pitfall is that the NN uses a slightly different representation for the states it considers during planning than for the encoder output. In this case, the decoder won't be able to reconstruct the environment state it's considering very well. To overcome this, we train the decoder to output realistic looking environment states given the hidden representations by training it like the generator in a GAN.
    Note that the decoder isn't trained on ground truth environment states, because we don't know which states the NN is considering in its plan.
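    A sketch of that adversarial training step (again with hypothetical names, assuming PyTorch): the decoder plays the role of the GAN generator, and a discriminator learns to tell decoded hidden states apart from real environment states, so the decoder learns to output realistic-looking states without labels for which states the network is actually considering.

    import torch
    import torch.nn as nn

    hidden_dim, state_dim = 128, 64
    decoder = nn.Linear(hidden_dim, state_dim)  # the "generator"
    discriminator = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(), nn.Linear(64, 1))

    opt_g = torch.optim.Adam(decoder.parameters(), lr=1e-4)
    opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-4)
    bce = nn.BCEWithLogitsLoss()

    def gan_step(hidden_batch, real_states):
        """One adversarial update. hidden_batch: hidden reps from the frozen model;
        real_states: environment states sampled independently (no pairing needed)."""
        fake_states = decoder(hidden_batch)

        # Discriminator: real environment states vs. states decoded from hidden reps.
        d_loss = bce(discriminator(real_states), torch.ones(len(real_states), 1)) + \
                 bce(discriminator(fake_states.detach()), torch.zeros(len(fake_states), 1))
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()

        # Decoder: make decoded states look realistic to the discriminator.
        g_loss = bce(discriminator(decoder(hidden_batch)), torch.ones(len(hidden_batch), 1))
        opt_g.zero_grad(); g_loss.backward(); opt_g.step()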
    Game of Life proof of concept (code)
    We consider an NN trained to predict the number of living cells after the Nth time step of the Game of Life (GoL). We chose the GoL because it has simple rules, and the NN will probably have to predict the intermediate states to get the final cell count. This NN won't do planning, but it may represent the intermediate states of the GoL in its hidden states.
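    For reference, the GoL update rule is simple enough to state in a few lines (standard Conway rules on a wrap-around grid, not the authors' code), which is why the intermediate boards after each step are a natural thing for the network to represent on the way to the final cell count:

    import numpy as np

    def gol_step(grid: np.ndarray) -> np.ndarray:
        """One Game of Life step on a 2D 0/1 grid with wrap-around edges."""
        neighbors = sum(
            np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
            for dy in (-1, 0, 1) for dx in (-1, 0, 1)
            if (dy, dx) != (0, 0)
        )
        # A cell is alive next step if it has 3 neighbors, or is alive with 2 neighbors.
        return ((neighbors == 3) | ((grid == 1) & (neighbors == 2))).astype(grid.dtype)

    def living_cells_after(grid: np.ndarray, n_steps: int) -> int:
        """The training target described above: living-cell count after n_steps."""
        for _ in range(n_steps):
            grid = gol_step(grid)
        return int(grid.sum())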
    We use an LSTM architecture with an encoder to encode the initial GoL state, and a "count cells NN" to output the number of living cells after the final LSTM output. Note that training the NN to predict the number of alive cells at the final state makes this more difficult for our method than training the network to predict the final state since it's less obvious that the network will predict t

    • 8 min
    LW - some thoughts on LessOnline by Raemon

    Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: some thoughts on LessOnline, published by Raemon on May 9, 2024 on LessWrong.
    I mostly wrote this for facebook, but it ended up being a whole-ass post so I figured I'd put it here too.
    I'm helping run "LessOnline: A Festival of Writers Who Are Wrong On the Internet (But Striving To Be Less So)".
    I'm incentivized to say nice things about the event. So, grain of salt and all. But, some thoughts, which roughly break down into:
    The vibe: preserving cozy/spaciousness of a small retreat at a larger festival
    The audience: "Reunion for The Extended Family Blogosphere, both readers and writers."
    Manifest, and Summer Camp
    ...
    I. The Vibe
    I've been trying to explain the vibe I expect and it's tricksy. I think the vibe will be something like "CFAR Reunion meets Manifest."
    But a lot of people haven't been to a CFAR Reunion or to Manifest.
    I might also describe it like "the thing the very first EA Summit (before EA Global) was like, before it became EA Global and got big." But very few people went to that either.
    Basically: I think this will do a pretty decent job of having the feel of a smaller (~60 person), cozy retreat, while being more like 200 - 400 people. Lightcone has run several ~60 person private retreats, which succeeded in being a really spacious intellectual environment, with a pretty high hit rate for meeting new people who you might want to end up having a several-hour conversation with.
    Realistically, with a larger event there'll be at least some loss of "cozy/spaciousness", and a somewhat lower hit rate for people you want to talk to with the open invites. But, I think Lightcone has learned a lot about how to create a really nice vibe. We've built our venue, Lighthaven, with "warm, delightful, focused intellectual conversation" as a primary priority.
    Whiteboards everywhere, lots of nooks, and a fractal layout that makes it often feel like you're in a secluded private conversation by a firepit, even though hundreds of other people are nearby (often at another secluded private conversation with _their_ own firepit!)
    (It's sort of weird that this kind of venue is extremely rare. Many events are held in hotels, which feel vaguely stifling and corporate. And the nice spacious retreat centers we've used don't score well on the whiteboard front, and surprisingly not even that well on "lots of nooks".)
    ...
    Large events tend to use "Swap Card" for causing people to meet each other. I do find Swap Card really good for nailing down a lot of short meetings. But it somehow ends up with a vibe of ruthless efficiency - lots of back-to-back 30 minute meetings, instead of a feeling of organic discovery. The profile feels like a "job fair professional" sort of thing.
    Instead we're having a "Names, Faces, and Conversations" document, where people write in a giant google doc about what questions and ideas are currently alive for them. People are encouraged to comment inline if they have thoughts, and +1 if they'd be into chatting about it. Some of this hopefully turns into 1-1 conversations, and if more people are interested it can organically grow into "hey let's hold a small impromptu group discussion about that in the Garden Nook"
    ...
    We'll also have a bunch of stuff that's just plain fun. We're planning a puzzle hunt that spans the event, and a dance concert led by the Fooming Shoggoths, with many songs that didn't make it onto their April 1st album. And the venue itself just lends itself to a feeling of whimsy and discovery.
    ...
    Another thing we're doing is encouraging people to bring their kids, and providing a day care to make that easier. I want this event to feel like something you can bring your whole life/self to. By default these sorts of events tend to not be very kid friendly.
    ...
    ...
    ...
    II. The Audience
    So that was a lot of words about The Vibe. The second question is "who a

    • 7 min
    LW - Dating Roundup #3: Third Time's the Charm by Zvi

    Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Dating Roundup #3: Third Time's the Charm, published by Zvi on May 9, 2024 on LessWrong.
    The first speculated on why you're still single. We failed to settle the issue. A lot of you were indeed still single. So the debate continues.
    The second gave more potential reasons, starting with the suspicion that you are not even trying, and also many ways you are likely trying wrong.
    The definition of insanity is trying the same thing over again expecting different results. Another definition of insanity is dating in 2024. Can't quit now.
    You're Single Because Dating Apps Keep Getting Worse
    A guide to taking the perfect dating app photo. This area of your life is important, so if you intend to take dating apps seriously then you should take photo optimization seriously, and of course you can then also use the photos for other things.
    I love the 'possibly' evil here.
    Misha Gurevich: possibly evil idea: Dating app that trawls social media and websites and creates a database of individuals regardless of if they opt in or not, including as many photos and contact information as can be found.
    Obviously this would be kind of a privacy violation and a lot of people would hate it.
    but I imagine a solid subset of singles who are lonely but HATE the app experience would be grateful to be found this way.
    No big deal, all we are doing is taking all the data about private citizens on the web and presenting it to any stranger who wants it in easy form as if you might want to date them. Or stalk them. Or do anything else, really.
    And you thought AI training data was getting out of hand before.
    All right, so let's consider the good, or at least not obviously evil, version of this.
    There is no need to fill out an intentional profile, or engage in specific actions, other than opting in. We gather all the information off the public web. We use AI to amalgamate all the data, assemble in-depth profiles and models of all the people. If it thinks there is a plausible match, then it sets it up.
    Since we are in danger of getting high on the creepiness meter, let's say the woman gets to select who gets contacted first, then if both want to match in succession you put them in contact. Ideally you'd also use AI to facilitate in various other ways, let people say what they actually want in natural language, let the AI ask follow-up questions to find potential matches or do checks first (e.g. 'I would say yes if you can confirm that he…') and so on.
    There is definitely not enough deep work being done trying to overturn the system.
    Bumble gives up its one weird trick, goes back to men messaging first.
    Melissa Chen: The evolution of Bumble:
    Sick of men inboxing women ("the patriarchy is so creepy and icky!")
    Starts dating app to reverse the natural order (women now make the first move! So empowering! So brave & stunning!)
    Women complain it's exhausting
    Reinstate the natural law
    Hardcore Siege: It's such a ridiculous headline. I have never gotten an opener on Bumble besides "hey"; women never actually go start a conversation or have a good opener, they're literally just re-approving the ability of the man to start the conversation.
    Outa: Anyone that's used it would tell you that 99% of the time they would just leave a "hey" or "."
    Casey Handmer: AFAIK no one has yet made a dating app where the cost of sending messages is increased if you're a creep. This would be technologically easy to do, and would let the market solve the problem.
    Several interesting things here.
    1. Many 'women never actually initiated the conversation' responses. Women say 'hey' to bypass the requirement almost all the time. That is not obviously useless as a secondary approval, but it presumably is not worth the bother.
    2. This was among women who self-selected into the app with mandatory female openers, so yeah, women really real

    • 57 min
    EA - Potential Pitfalls in University EA Community Building by jessica mccurdy

    Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Potential Pitfalls in University EA Community Building, published by jessica mccurdy on May 8, 2024 on The Effective Altruism Forum.
    TL;DR: This is a written version of a talk given at EAG Bay Area 2023. It claims university EA community building can be incredibly impactful, but there are important pitfalls to avoid, such as being overly zealous, overly open, or overly exclusionary. These pitfalls can turn away talented people and create epistemic issues in the group.
    By understanding these failure modes, focusing on truth-seeking discussions, and being intentional about group culture, university groups can expose promising students to important ideas and help them flourish.
    Introduction
    Community building at universities can be incredibly impactful, but important pitfalls can make this work less effective or even net negative. These pitfalls can turn off the kind of talented people that we want in the EA community, and it's challenging to tell if you're falling into them. This post is based on a talk I gave at EAG Bay Area in early 2023[1]. If you are a new group organizer or interested in becoming one, you might want to check out this advice post.
    This talk was made specifically for university groups, but I believe many of these pitfalls transfer to other groups. Note that I didn't edit this post much and may not be able to respond in depth to comments now.
    I have been in the EA university group ecosystem for almost 7 years now. While I wish I had more rigorous data and a better idea of the effect sizes, this post is based on anecdotes from years of working with group organizers. Over the past years, I think I went from being extremely encouraging of students doing university community building and selling it as a default option for students, to becoming much more aware of risks and concerns and hence writing this talk.
    I think I probably over-updated on the risks and concerns, and this led me to be less outwardly enthusiastic about the value of CB over the past year. I think that was a mistake, and I am looking forward to revitalizing the space to a happy medium. But that is a post for another day.
    Why University Community Building Can Be Impactful
    Before discussing the pitfalls, I want to emphasize that I do think community building at universities can be quite high leverage. University groups can help talented people go on to have effective careers. Students are at a time in their lives when they're thinking about their priorities and how to make a change in the world. They're making lifelong friendships. They have a flexibility that people at other life stages often lack.
    There is also some empirical evidence supporting the value of university groups. The longtermist capacity building team at Open Philanthropy ran a mass survey. One of their findings was that a significant portion of people working on projects they're excited about had attributed a lot of value to their university EA groups.
    Common Pitfalls in University Group Organizing
    While university groups can be impactful, there are several pitfalls that organizers should be aware of. In this section, I'll introduce some fictional characters that illustrate these failure modes. While the examples are simplified, I believe they capture real dynamics that can arise.
    Pitfall 1: Being Overly Zealous
    One common pitfall is being overly zealous or salesy when trying to convince others of EA ideas. This can come across as not genuinely engaging with people's arguments or concerns. Consider this example:
    Skeptical Serena asks, "Can we actually predict the downstream consequences of our actions in the long run? Doesn't that make RCTs not useful?"
    Zealous Zack[2] responds confidently, "That's a good point but even 20-year studies show this is working. There's a lot of research that has gone into it. So, it really d

    • 11 min

