2,000 episodes


The Nonlinear Library, by The Nonlinear Fund

    • Education
    • 4.6 • 7 Ratings

The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org

    LW - End Single Family Zoning by Overturning Euclid V Ambler by Maxwell Tabarrok


    Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: End Single Family Zoning by Overturning Euclid V Ambler, published by Maxwell Tabarrok on July 27, 2024 on LessWrong.
    On 75 percent or more of the residential land in most major American cities, it is illegal to build anything other than a detached single-family home. 95.8 percent of total residential land area in California is zoned as single-family-only, which is 30 percent of all land in the state. Restrictive zoning regulations such as these probably lower GDP per capita in the US by 8-36%. That's potentially tens of thousands of dollars per person.
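    As a rough check on that figure (my own back-of-the-envelope arithmetic, assuming US GDP per capita of roughly $80,000; the 8-36% range is from the post):

```python
# Rough illustration of "tens of thousands of dollars per person".
# Assumes US GDP per capita of about $80,000 (an approximate recent figure).
gdp_per_capita = 80_000
for pct in (0.08, 0.36):
    loss = gdp_per_capita * pct
    print(f"{pct:.0%} of ${gdp_per_capita:,} is about ${loss:,.0f} per person")
# Prints roughly $6,400 at the low end and $28,800 at the high end.
```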
    Map of land use in San Jose, California. Pink is single family only (94%)
    The legal authority behind all of these zoning rules derives from a 1926 Supreme Court decision, Village of Euclid v. Ambler Realty Co. Ambler Realty held 68 acres of land in the town of Euclid, Ohio. The town, wanting to avoid influence, immigration, and industry from nearby Cleveland, passed a restrictive zoning ordinance which prevented Ambler Realty from building anything but single-family homes on much of their land, though they weren't attempting to build anything at the time of the case.
    Ambler Realty and their lawyer (a prominent Georgist!) argued that since this zoning ordinance severely restricted the possible uses for their property and its value, forcing the ordinance upon them without compensation was unconstitutional.
    The constitutionality claims in this case are about the 14th and 5th amendments. The 5th amendment to the United States Constitution states, among other things, that "private property [shall not] be taken for public use, without just compensation." The part of the 14th amendment relevant to this case just applies the 5th to state and local governments.
    There are two lines of argument in the case. First is whether the restrictions imposed by Euclid's zoning ordinance constitute "taking" private property at all. If they are a taking, then the 5th amendment would apply: e.g., when the government takes land via eminent domain, it needs to compensate property owners. However, even government interventions that do take don't always have to offer compensation.
    If the government, say, requires you to have an external staircase for fire egress, they don't have to compensate you because it protects "health, safety, and welfare" which is a "police powers" carveout from the takings clause of the 5th amendment. The other line of argument in the case is that zoning ordinances, while they do take from property owners, do not require compensation because they are part of this police power.
    Police Power
    Let's start with that second question: whether zoning laws count as protecting health and safety through the police power or are takings that require compensation. A common rhetorical technique is to reach for the most extreme case of zoning: a coal-powered steel foundry wants to open up right next to the pre-school, for example.
    Conceding that this hypothetical is a legitimate use of the police power does not decide the case, however, because Euclid's zoning ordinance goes much further than separating noxious industry from schoolyards.
    The entire area of the village is divided by the ordinance into six classes of use districts, U-1 to U-6; three classes of height districts, H-1 to H-3, and four classes of area districts, A-1 to A-4.
    U-1 is restricted to single family dwellings, public parks, water towers and reservoirs, suburban and interurban electric railway passenger stations and rights of way, and farming, noncommercial greenhouse nurseries and truck gardening;
    U-2 is extended to include two-family dwellings;
    U-3 is further extended to include apartment houses, hotels, churches, schools, public libraries, museums, private clubs, community center buildings, hospitals, sanitariums, public playgrounds and recreation buildings, and a city ha

    • 11 min
    EA - Data from the 2023 EA Forum user survey by Sarah Cheng


    Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Data from the 2023 EA Forum user survey, published by Sarah Cheng on July 26, 2024 on The Effective Altruism Forum.
    The purpose of this post is to share data from a survey that the EA Forum team ran last year. Though we used this survey as one of many sources of information for internal analyses, I did not include any particular takeaways from this survey data in this post. I leave that as an exercise for the reader.
    Overview
    In August 2023, the EA Forum team ran a survey to learn more about how people use the Forum and how the Forum impacted them. We got 609 valid responses.
    Thank you to everyone who responded - we really appreciate you taking the time. The results have been important for helping us understand the ways that the Forum creates value and disvalue that are otherwise hard for us to track. We've used it to evaluate the impact of the Forum and the marginal impact of our work, update our team's strategy, and prioritize the work we've done in the past 12 months.
    The person who ran the survey and wrote up the analysis is no longer at CEA, but I figured people might be interested in the results of the survey, so I'm sharing some of the data in this post. Most of the information here comes from that internal analysis, but when I use "I" that is me (Sarah) editorializing.
    This post is not comprehensive, and does not include all relevant data. I did not spend time double checking any of the information from that analysis. We plan to run another (updated) survey soon for 2024.
    Some Forum usage data, for context
    The Forum had 4.5k monthly active and 13.7k annually active logged-in users in the 12 months ending on Sept 4, 2023. We estimate that the total number of users was closer to 20-30k (since about 50% of traffic is logged out).
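    A quick sanity check on that estimate (my own arithmetic, not the team's actual methodology):

```python
# If roughly half of all traffic is logged out, the logged-in count
# undercounts total users by about a factor of two.
logged_in_users = 13_700
logged_out_share = 0.5  # approximate share of traffic that is logged out
estimated_total = logged_in_users / (1 - logged_out_share)
print(f"~{estimated_total:,.0f} total users")  # ~27,400, within the 20-30k range
```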
    Here's a breakdown of usage data for logged-in users in those 12 months:
    13.7k distinct logged in users
    8.5k users with 3 distinct days of activity
    5.7k users with 10 distinct days of activity
    3.1k users who were active during of all months
    1.7k users who were active during of all weeks
    388 users who were active during on of all days
    4.4k distinct commenters
    171 distinct post authors
    It's important to note that August 2022 - August 2023 was a fairly unusual time for EA, so while you can use (and we have used) this survey data to estimate things like "the value the Forum generates per year", you might think that August 2023 - August 2024 is a more typical year, and so the data from the next survey may be more representative.
    Demographic reweighting[1]
    Rethink Priorities helped us with the data analysis, which included adjusting the raw data by weighting the responses to try to get a more representative view of the results. All charts below include both the raw and weighted[2] data.
    The weighting factors were:
    1. Whether the respondent had posted (relative to overall Forum usage)
    2. Whether the respondent had commented (relative to overall Forum usage)
    3. How frequently the respondent used the Forum (relative to overall Forum usage)
    4. The respondent's EA engagement level (relative to the Forum statistics from the 2020 EA Survey)
    5. The respondent's gender (relative to the Forum statistics from the 2020 EA Survey)
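    As an illustration of this kind of reweighting, here is a minimal sketch on a single factor with made-up numbers; the actual analysis weighted on the five factors above, with target shares drawn from Forum usage data and the 2020 EA Survey.

```python
# Minimal sketch: reweight survey responses on one factor (gender) so the
# weighted sample matches assumed population shares. Hypothetical data only.
from collections import Counter

# (gender, some survey rating) per respondent -- made-up values
respondents = [
    ("man", 8), ("man", 7), ("man", 9), ("man", 6),
    ("woman", 9), ("woman", 10),
]

# Assumed target shares for the overall Forum population (illustrative only)
target_share = {"man": 0.7, "woman": 0.3}

n = len(respondents)
counts = Counter(g for g, _ in respondents)
sample_share = {g: c / n for g, c in counts.items()}

# Each respondent is weighted by (population share) / (sample share) of their group
weights = [target_share[g] / sample_share[g] for g, _ in respondents]

raw_mean = sum(r for _, r in respondents) / n
weighted_mean = sum(w * r for w, (_, r) in zip(weights, respondents)) / sum(weights)
print(f"raw mean: {raw_mean:.2f}, reweighted mean: {weighted_mean:.2f}")
```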
    Some effects of the reweighting:
    A significantly higher proportion of respondents have posted or commented on the Forum, relative to actual overall Forum usage, so reweighting decreases those percentages and the percentages of other actions (such as voting on karma).
    A plurality (around 45%) of respondents said they visit the Forum about 1-2 times a week. This is more frequent than the overall Forum population, so reweighting decreases things like the percentage of users who applied for a job due to the Forum, and the mean rating of "significantly changed your thinking".
    Overall, respondents tended to be more highly engaged than the

    • 14 min
    LW - How the AI safety technical landscape has changed in the last year, according to some practitioners by tlevin


    Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How the AI safety technical landscape has changed in the last year, according to some practitioners, published by tlevin on July 26, 2024 on LessWrong.
    I asked the Constellation Slack channel how the technical AIS landscape has changed since I last spent substantial time in the Bay Area (September 2023), and I figured it would be useful to post this (with the permission of the contributors to post either with or without attribution). Curious if commenters agree or would propose additional changes!
    This conversation has been lightly edited to preserve anonymity.
    Me: One reason I wanted to spend a few weeks in Constellation was to sort of absorb-through-osmosis how the technical AI safety landscape has evolved since I last spent substantial time here in September 2023, but it seems more productive to just ask here "how has the technical AIS landscape evolved since September 2023?" and then have conversations armed with that knowledge.
    The flavor of this question is like, what are the technical directions and strategies people are most excited about, do we understand any major strategic considerations differently, etc -- interested both in your own updates and your perceptions of how the consensus has changed!
    Zach Stein-Perlman: Control is on the rise
    Anonymous 1: There are much better "model organisms" of various kinds of misalignment, e.g. the stuff Anthropic has published, some unpublished Redwood work, and many other things
    Neel Nanda: Sparse Autoencoders are now a really big deal in mech interp and where a lot of the top teams are focused, and I think are very promising, but have yet to conclusively prove themselves at beating baselines in a fair fight on a real world task
    Neel Nanda: Dangerous capability evals are now a major focus of labs, governments and other researchers, and there's clearer ways that technical work can directly feed into governance
    (I think this was happening somewhat pre September, but feels much more prominent now)
    Anonymous 2: Lots of people (particularly at labs/AISIs) are working on adversarial robustness against jailbreaks, in part because of RSP commitments/commercial motivations. I think there's more of this than there was in September.
    Anonymous 1: Anthropic and GDM are both making IMO very sincere and reasonable efforts to plan for how they'll make safety cases for powerful AI.
    Anonymous 1: In general, there's substantially more discussion of safety cases
    Anonymous 2: Since September, a bunch of many-author scalable oversight papers have been published, e.g. this, this, this. I haven't been following this work closely enough to have a sense of what update one should make from this, and I've heard rumors of unsuccessful scalable oversight experiments that never saw the light of day, which further muddies things
    Anonymous 3: My impression is that infosec flavoured things are a top ~3 priority area for a few more people in Constellation than last year (maybe twice as many people as last year??).
    Building cyberevals and practically securing model weights at frontier labs seem to be the main project areas people are excited about (followed by various kinds of threat modelling and security standards).
    Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

    • 3 min
    LW - Index of rationalist groups in the Bay July 2024 by Lucie Philippon


    Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Index of rationalist groups in the Bay July 2024, published by Lucie Philippon on July 26, 2024 on LessWrong.
    The Bay Area rationalist community has an entry problem! Lots of listed groups are dead, the last centralized index disappeared, and communication has moved to private Discords and Slacks. This is bad, so we're making a new index, hopefully up to date and as complete as we can make it!
    Communication
    Discord: Bay Area Rationalists: https://discord.gg/EpG4xUVKtf
    Email Group: BayAreaLessWrong: https://groups.google.com/g/bayarealesswrong
    Local Meetup Groups
    Taco Tuesday: by Austin Chen, founder emeritus of Manifold. Check his Manifold questions page for the next date!
    North Oakland LessWrong Meetup: every Wednesday, hosted by @Czynski.
    Thursday Dinners in Berkeley: Advertised on the Discord server and Google group, alternating between a few restaurants on the northwest side of UC campus.
    Bay Area ACX Meetups: For the ACX everywhere meetups twice per year, and some other sporadic events.
    Housing
    To find spots in group houses, temporary or long term, you can use the Bay Area EA/Rationality Housing Board. The EA Houses spreadsheet also has some entries in the Bay.
    It probably works best to ask people in the Bay if they know of housing opportunities, as lots of housing is provided peer-to-peer.
    EA
    If you want to discover the EA community, the EA's Guide to Berkeley and The Bay Area is a good resource.
    Events sometimes get advertised on these websites:
    SF Bay Area EA calendar on Luma
    East Bay EA Hangout on Facebook
    AI Safety
    There are two AI safety coworking spaces in Berkeley. They sometimes accept visitors, so you can try reaching out or applying via their websites:
    FAR Labs
    Constellation
    Most AI Safety events don't get advertised publicly, so get in contact with people in the community to know what's happening.
    We probably missed some other meetups and communities which are public and still active, so feel free to list them in the comments!
    Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

    • 2 min
    EA - My Experience as a Full-Time EA Community Builder in NYC by Alex R Kaplan


    Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: My Experience as a Full-Time EA Community Builder in NYC, published by Alex R Kaplan on July 26, 2024 on The Effective Altruism Forum.
    Some rewards and challenges of working as an EA NYC community builder over the past two years
    Motivations
    I wanted to share these thoughts for a few reasons:
    1. I hope this serves as a reference point for people considering careers in EA community building (though this is only one reference point, of course).
    2. EA NYC is hiring an Executive Director! If you're interested, apply here by the end of July 28th (Eastern Time).
    3. I think there's often value in people discussing their jobs.[1]
    By the way, I have a few relevant disclaimers/caveats that I encourage you to check out at the bottom of this post. For now, I'd just like to say that I'm writing this in a personal capacity and do not claim to represent the views of any of my employers, past, present, or future.
    Otherwise, I may occasionally update this post to correct any misleading/inaccurate points. Please let me know if you catch anything that seems off!
    Lastly, many thanks to those who encouraged me to share this post and especially to Elliot Teperman for sharing some helpful thoughts on an earlier draft. Of course, all mistakes are my own.
    Summary
    From July 2022 to July 2024 (ongoing at the time of writing), I have been supporting Effective Altruism NYC (EA NYC) as a full-time community builder. In this position, I worked closely with the Centre for Effective Altruism's (CEA) Community Building Grants (CBG) program, which provided funding and support for my position.
    This work has been pretty great for me! Like anything, there have been some bumps in the road, but I think it has been broadly good. I imagine many cases where I would recommend EA community building and/or CBG program participation for specific individuals. However, I would also like to give some disclaimers about things I wish I had known beforehand.
    Given my uncertainty at various points throughout the past two years, I've questioned whether taking the role was the right move… However, if I had known two years ago what I know now, I would have felt a lot more confident in my decision![2]
    Here's an outline of my considerations:
    Some good things
    I built a lot of skills
    I made a lot of connections
    (Perhaps aided by the other benefits) I got access to more opportunities
    (Definitely aided by the other benefits) I built up my confidence quite a bit
    Some mixed things
    I felt my compensation was fair, but that might be specific to me
    Personal career planning was complicated, but that helped me design a new path for myself
    Working at a small (EA) organization has had some pretty straightforward pros and cons
    Diving deep into EA has been stressful at times, but I now feel better because of it
    I also left a lot out! Feel free to reach out and/or add (anonymous) comments if you have any thoughts and/or questions (though I imagine I may only sometimes be able to answer/help with some things).
    Context
    CBG program participation
    According to CEA's website, the Community Building Grants (CBG) program aims to build flourishing communities of individuals who work to maximize their impact using critical reasoning and evidence.
    I've benefited from the following kinds of support from the program:
    Personal grant funding (which constituted my salary)
    Network with fellow CBG community builders
    Including in-person retreats and online community
    1-on-1 support from the program's manager
    I did not leverage this support much, but I received significant support from the first two points mentioned.[3]
    One point to clarify: the Community Building Grants program is one way to fund EA community-building work. One could also do EA community building through various other models.
    EA NYC context
    EA NYC started as a small meetup group in 2013. Since then, it has grown into a commun

    • 23 min
    LW - Universal Basic Income and Poverty by Eliezer Yudkowsky


    Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Universal Basic Income and Poverty, published by Eliezer Yudkowsky on July 26, 2024 on LessWrong.
    (Crossposted from Twitter)
    I'm skeptical that Universal Basic Income can get rid of grinding poverty, since somehow humanity's 100-fold productivity increase (since the days of agriculture) didn't eliminate poverty.
    Some of my friends reply, "What do you mean, poverty is still around? 'Poor' people today, in Western countries, have a lot to legitimately be miserable about, don't get me wrong; but they also have amounts of clothing and fabric that only rich merchants could afford a thousand years ago; they often own more than one pair of shoes; why, they even have cellphones, as not even an emperor of the olden days could have had at any price.
    They're relatively poor, sure, and they have a lot of things to be legitimately sad about.
    But in what sense is almost-anyone in a high-tech country 'poor' by the standards of a thousand years earlier? Maybe UBI works the same way; maybe some people are still comparing themselves to the Joneses, and consider themselves relatively poverty-stricken, and in fact have many things to be sad about; but their actual lives are much wealthier and better, such that poor people today would hardly recognize them.
    UBI is still worth doing, if that's the result; even if, afterwards, many people still self-identify as 'poor'."
    Or to sum up their answer: "What do you mean, humanity's 100-fold productivity increase, since the days of agriculture, has managed not to eliminate poverty? What people a thousand years ago used to call 'poverty' has essentially disappeared in the high-tech countries. 'Poor' people no longer starve in winter when their farm's food storage runs out.
    There's still something we call 'poverty' but that's just because 'poverty' is a moving target, not because there's some real and puzzlingly persistent form of misery that resisted all economic growth, and would also resist redistribution via UBI."
    And this is a sensible question; but let me try out a new answer to it.
    Consider the imaginary society of Anoxistan, in which every citizen who can't afford better lives in a government-provided 1,000 square-meter apartment; which the government can afford to provide as a fallback, because building skyscrapers is legal in Anoxistan. Anoxistan has free high-quality food (not fast food made of mostly seed oils) available to every citizen, if anyone ever runs out of money to pay for better.
    Cities offer free public transit including self-driving cars; Anoxistan has averted that part of the specter of modern poverty in our own world, which is somebody's car constantly breaking down (that they need to get to work and their children's school).
    As measured on our own scale, everyone in Anoxistan has enough healthy food, enough living space, heat in winter and cold in summer, huge closets full of clothing, and potable water from faucets at a price that most people don't bother tracking.
    Is it possible that most people in Anoxistan are poor?
    My (quite sensible and reasonable) friends, I think, on encountering this initial segment of this parable, mentally autocomplete it with the possibility that maybe there's some billionaires in Anoxistan whose frequently televised mansions make everyone else feel poor, because most people only have 1,000-meter houses.
    But actually this story has a completely different twist! You see, I only spoke of food, clothing, housing, water, transit, heat and A/C. I didn't say whether everyone in Anoxistan had enough air to breathe.
    In Anoxistan, you see, the planetary atmosphere is mostly carbon dioxide, and breathable oxygen (O2) is a precious commodity. Almost everyone has to wear respirators at all times; only the 1% can afford to have a whole house full of breathable air, with some oxygen leaking away despite

    • 13 min
