1,999 episodes


The Nonlinear Library
The Nonlinear Fund

    • Education
    • 4.6 • 7 Ratings

The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org

    EA - If You're Going To Eat Animals, Eat Beef and Dairy by Omnizoid


    Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: If You're Going To Eat Animals, Eat Beef and Dairy, published by Omnizoid on April 23, 2024 on The Effective Altruism Forum.
    Crosspost of my blog.
    You shouldn't eat animals in normal circumstances. That much is, in my view, quite thoroughly obvious. Animals undergo
    cruel, hellish conditions that we'd confidently describe as torture if they were inflicted on a human (or even a dog). No hamburger is worth that kind of cruelty. However, not all animals are the same. Contra Napoleon in Animal Farm, all animals are not equal.
    Cows are big.
    The average person eats 2400 chickens but only 11 cows in their life. That's mostly because chickens are so many times smaller than cows, so you can only get so many chicken sandwiches out of a single chicken. But how much worse is chicken than cow?
    Brian Tomasik devised a helpful
    suffering calculator chart. It has various columns - one for how sentient you think the animals are compared to humans, one for how long the animal lives, etc. You can change the numbers around if you want. I changed the sentience numbers to accord with the results of the
    most detailed report on the subject (for the animals they didn't sample, I just compared similar animals), done by Rethink Priorities:
    When I did that, I got the following:
    Rather than setting cows = 1 for the sentience threshold, as the original chart did, I set humans = 1. You should therefore think of the suffering caused as roughly equivalent to the suffering caused if you locked a severely mentally enfeebled person or a baby in a factory farm and tormented them for that number of days. Dairy turns out not that bad compared to the rest - a kg of dairy is only equivalent to torturing a baby for about 70 minutes in terms of suffering caused.
    That means if you get a gallon of milk, that's only equivalent to confining and tormenting a baby for about 4 and a half hours. That's positively humane compared to the rest!
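    To make the unit conversion explicit, here is a minimal sketch of that arithmetic, assuming the post's figure of roughly 70 suffering-equivalent minutes per kg of dairy and a standard mass of about 3.9 kg for a US gallon of milk; the variable names are illustrative.

```python
# Rough check of the gallon-of-milk figure above.
MINUTES_PER_KG_DAIRY = 70   # suffering-equivalent minutes per kg of dairy (the post's chart figure)
KG_PER_GALLON_MILK = 3.9    # approximate mass of a US gallon of milk (assumption)

minutes_per_gallon = MINUTES_PER_KG_DAIRY * KG_PER_GALLON_MILK
print(f"{minutes_per_gallon:.0f} minutes = {minutes_per_gallon / 60:.1f} hours")
# -> 273 minutes = 4.6 hours, i.e. the post's "about 4 and a half hours"
```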
    Now I know people will object that human suffering is much worse than animal suffering. But this is
    totally unjustified. Making a human feel pain is generally worse because we feel pain more intensely, but in this case, we're analyzing how bad a unit of pain is. If the amount of suffering is the same, it's not clear what about animals is supposed to make their suffering so monumentally unimportant.
    Their feathers? Their lack of mental acuity? We controlled for that by having the comparison be a baby or a severely mentally disabled person (babies are dumb, wholly unable to do advanced mathematics). Ultimately, thinking animal pain doesn't matter much is just unjustified speciesism, wherein one takes an obviously intrinsically morally irrelevant feature like species to determine moral worth.
    Just like racism and sexism, speciesism is wholly indefensible - it places moral significance on a totally morally insignificant category.
    Even if you reject this, the chart should still inform your eating decisions. As long as you think animal suffering is bad, the chart is informative. Some kinds of animal products cause a lot more suffering than others - you should avoid the ones that cause more suffering.
    Dairy, for instance, causes over 800 times less suffering than chicken and over 1000 times less than eggs. Drinking a gallon of milk a day for a year is then about as bad as having a chicken sandwich once every four months. Chicken is then really really bad - way worse than most other things. Dairy and beef mostly aren't a big deal in comparison. And you can play around with the numbers if you disagree with them - whatever answer you come to should be informative.
    I remember seeing this chart was instrumental in my going vegan. I realized that each time I have a chicken sandwich, animals have to suffer in darkness, feces, filth, and misery for weeks on end. That's not worth a

    • 3 min
    EA - New org announcement: Would your project benefit from OSINT, satellite imagery analysis, or international security-related research support? by Christina


    Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: New org announcement: Would your project benefit from OSINT, satellite imagery analysis, or international security-related research support?, published by Christina on April 23, 2024 on The Effective Altruism Forum.
    I'm an international security professional with experience conducting open source analysis, satellite imagery interpretation, and independent research, and I'm launching a new consulting organization, Earthnote! I'm really interested in applying my skills to the EA community and helping to reduce existential threats to humanity, so let me know if I can help you/your org!
    Fields of expertise and interest include:
    Nuclear/CBRN risk, nonproliferation, and safeguards
    Satellite imagery analysis
    Space governance
    Emerging technology
    Existential risk and longtermism
    As one example of my work, I was a consultant at the Centre for the Study of Existential Risk (CSER) at the University of Cambridge, where I led a pilot project on tech company data center monitoring using satellite imagery. Using open source optical imagery, I identified key infrastructural and geographical attributes of these sites, writing a report to explain my findings and recommend next steps for analysis.
    This data could be harnessed as a basis for a future understanding of compute capabilities and energy usage, and for policy creation.
    I've had an extensive career working for the International Atomic Energy Agency, the US Department of Defense, the US laboratory system, Google, and academia/think tanks. I'm super excited to apply this expertise to EA-related projects and research. Please feel free to reach out with comments or inquiries any time!
    Christina Krawec
    Founder and Consultant
    Earthnote, LLC
    cikrawec@gmail.com
    Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

    • 1 min
    LW - Forget Everything (Statistical Mechanics Part 1) by J Bostock


    Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Forget Everything (Statistical Mechanics Part 1), published by J Bostock on April 23, 2024 on LessWrong.
    EDIT: I somehow missed that John Wentworth and David Lorell are also in the middle of a sequence on this same topic here. I will see where this goes from here!
    Introduction to a sequence on the statistical thermodynamics of some things and maybe eventually everything. This will make more sense if you have a basic grasp on quantum mechanics, but if you're willing to accept "energy comes in discrete units" as a premise then you should be mostly fine.
    The title of this post has a double meaning:
    Forget the thermodynamics you've learnt before, because statistical mechanics starts from information theory.
    The main principle of doing things with statistical mechanics can be summed up as follows:
    Forget as much as possible, then find a way to forget some more.
    Particle(s) in a Box
    All of practical thermodynamics (chemistry, engines, etc.) relies on the same procedure, although you will rarely see it written like this:
    Take systems which we know something about
    Allow them to interact in a controlled way
    Forget as much as possible
    If we have set up our systems correctly, the information that is lost will allow us to learn some information somewhere else.
    For example, consider a particle in a box.
    What does it mean to "forget everything"? One way is forgetting where the particle is, so our knowledge of the particle's position could be represented by a uniform distribution over the interior of the box.
    Now imagine we connect this box to another box:
    If we forget everything about the particle now, we should also forget which box it is in!
    If we instead have a lot of particles in our first box, we might describe it as a box full of gas. If we connect this to another box and forget where the particles are, we would expect to find half in the first box and half in the second box. This means we can explain why gases expand to fill space without reference to anything except information theory.
    A new question might be, how much have we forgotten? Our knowledge of the gas particle has gone from the following distribution over boxes 1 and 2:
    P(Box 1) = 1, P(Box 2) = 0
    to the distribution
    P(Box 1) = 0.5, P(Box 2) = 0.5
    which is the loss of 1 bit of information per particle. Now let's put that information to work.
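    As a quick sanity check of that "one bit per particle" figure, here is a small sketch using the standard Shannon entropy formula; the two distributions are the ones just given, and nothing else is taken from the post.

```python
import math

def entropy_bits(dist):
    """Shannon entropy in bits; zero-probability outcomes contribute nothing."""
    return -sum(p * math.log2(p) for p in dist if p > 0)

before = [1.0, 0.0]   # particle known to be in box 1
after = [0.5, 0.5]    # after connecting the boxes and forgetting which one it is in
print(entropy_bits(after) - entropy_bits(before))  # -> 1.0 bit forgotten per particle
```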
    The Piston
    Imagine a box with a movable partition. The partition restricts particles to one side of the box. If the partition moves to the right, then the particles can access a larger portion of the box:
    In this case, to forget as much as possible about the particles means to assume they are in the largest possible space, which involves the partition being all the way over to the right. Of course there is the matter of forgetting where the partition is, but we can safely ignore this as long as the number of particles is large enough.
    What if we have a small number of particles on the right side of the partition?
    We might expect the partition to move some, but not all, of the way over, when we forget as much as possible. Since the region in which the pink particles can live has decreased, we have gained knowledge about their position. By coupling forgetting and learning, anything is possible. The question is, how much knowledge have we gained?
    Maths of the Piston
    Let the walls of the box be at coordinates 0 and 1, and let x be the horizontal coordinate of the piston. The position of each green particle can be expressed as a uniform distribution over (0, x), which has entropy log2(x), and likewise each pink particle's position is uniform over (x, 1), giving entropy log2(1 - x).
    If we have n_g green particles and n_p pink particles, the total entropy becomes n_g log2(x) + n_p log2(1 - x), which is maximized (since we forget as much as possible) at x = n_g / (n_g + n_p). This means that the total volume occupied by each population of particles is proportion
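    The stated result can also be checked numerically: the sketch below, using made-up particle counts, confirms that the total entropy n_g log2(x) + n_p log2(1 - x) peaks at x = n_g / (n_g + n_p).

```python
import math

def total_entropy(x, n_green, n_pink):
    """Total entropy in bits: green particles uniform on (0, x), pink on (x, 1)."""
    return n_green * math.log2(x) + n_pink * math.log2(1 - x)

n_green, n_pink = 30, 10                       # illustrative particle counts
xs = [i / 1000 for i in range(1, 1000)]        # candidate piston positions
x_best = max(xs, key=lambda x: total_entropy(x, n_green, n_pink))
print(x_best, n_green / (n_green + n_pink))    # both ~0.75: entropy peaks at n_g / (n_g + n_p)
```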

    • 4 min
    EA - Should we break up Google DeepMind? by Hauke Hillebrandt


    Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Should we break up Google DeepMind?, published by Hauke Hillebrandt on April 23, 2024 on The Effective Altruism Forum.
    Regulators should review the 2014 DeepMind acquisition. When Google bought DeepMind in 2014, no regulator (not the FTC, not the EC's DG COMP, nor the CMA) scrutinized the impact. Why? AI startups have high value but low revenues. And so they avoid regulation (and tax, see below). Buying start-ups with low revenues flies under the thresholds of EU merger regulation[1] or the CMA's 'turnover test' (despite DeepMind being a 'relevant enterprise' under the National Security and Investment Act).
    In 2020, the FTC ordered Big Tech to provide info on M&A from 2010-2019 that it didn't report (UK regulators should urgently do so as well given that their retrospective powers might only be 10 years).[2]
    Regulators should also review the 2023 Google-DeepMind internal merger. DeepMind and Google Brain are key players in AI. In 2023, they merged into Google DeepMind. This compromises independence, reduces competition for AI talent and resources, and limits alternatives for collaboration partners.
    Though they are both part of Google, regulators can scrutinize this, regardless of corporate structure. For instance, UK regulators have intervened in M&A of enterprises already under common ownership - especially in Tech (cf UK regulators ordered FB to sell GIPHY).
    And so, regulators should consider breaking up Google DeepMind as per recent proposals:
    A new paper 'Unscrambling the eggs: breaking up consummated mergers and dominant firms' by economists at Imperial cites Google DeepMind as a firm that could be unmerged. [3]
    A new Brookings paper also argues that if other means to ensure fair markets fail, then as a last resort, foundation model firms may need to be broken up on the basis of functions, akin to how we broke up AT&T.[4]
    Relatedly, some top economists agree that we should designate Google Search as a 'platform utility' and break it apart from any participant on that platform; most agree that we should explore this further to weigh costs and benefits.[5]
    Indeed, the EU accuses Google of abusing dominance in ad tech and may force it to sell parts of its firm.[6]
    Kustomer, a firm of similar size to DeepMind that was bought by Facebook, recently spun out again, showing this is possible.
    Finally, DeepMind itself has in the past tried to break away from Google.[7]
    Since DeepMind's AI improves all Google products, regulators should work cross-departmentally to scrutinize both mergers above on the following grounds:
    Market dominance: Google dominates the field of AI, surpassing all universities in terms of high-quality publications:
    Tax avoidance: Despite billions in UK profits yearly, Google is only taxed $60M.[8] DeepMind is only taxed ~$1M per year on average.[9],[10] We should tax them more fairly. DeepMind's recent revenue jump is due to creative accounting: it doesn't have many revenue streams, and almost all of them are based on how much Google arbitrarily pays for internal services.
    Indeed, Google just waived $1.5B in DeepMind's 'startup debt'[11],[12] despite DeepMind's CEO boasting that they have a unique opportunity as part of Google, with its dozens of billion-user products, to immediately ship their advances into them[13] and save Google hundreds of millions in energy costs.[14] About 85% of the innovations causing the recent AI boom came from Google DeepMind.[15] DeepMind also holds 560 patents,[16] and this IP is very hard to value and tax.
    Such a bad precedent might either cause more tax avoidance by OpenAI, Microsoft AI, Anthropic, Palantir, and A16z as they set up UK offices, or give Google an unfair edge over these smaller firms.
    Public interest concerns: DeepMind's AI improves YouTube's algorithm and thus DeepMind indirectly polarizes voters.[17] Regulators s

    • 9 min
    LW - Take the wheel, Shoggoth! (Lesswrong is trying out changes to the frontpage algorithm) by Ruby


    Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Take the wheel, Shoggoth! (Lesswrong is trying out changes to the frontpage algorithm), published by Ruby on April 23, 2024 on LessWrong.
    For the last month, @RobertM and I have been exploring the possible use of recommender systems on LessWrong. Today we launched our first site-wide experiment in that direction.
    (In the course of our efforts, we also hit upon a frontpage refactor that we reckon is pretty good: tabs instead of a clutter of different sections. For now, only for logged-in users. Logged-out users see the "Latest" tab, which is the same-as-usual list of posts.)
    Why algorithmic recommendations?
    A core value of LessWrong is to be timeless and not news-driven. However, the central algorithm by which attention allocation happens on the site is the Hacker News algorithm[1], which basically only shows you things that were posted recently, and creates a strong incentive for discussion to always be centered around the latest content.
    This seems very sad to me. When a new user shows up on LessWrong, it seems extremely unlikely that the most important posts for them to read were all written within the last week or two.
    I do really like the simplicity and predictability of the Hacker News algorithm. More karma means more visibility, older means less visibility. Very simple. When I vote, I basically know the full effect this has on what is shown to other users or to myself.
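    For readers who haven't seen it, a Hacker-News-style ranking is roughly karma divided by a power of the post's age. A hedged sketch follows; the gravity and offset constants are illustrative guesses rather than LessWrong's actual values.

```python
def hn_style_score(karma: float, age_hours: float,
                   gravity: float = 1.8, offset: float = 2.0) -> float:
    """Hacker-News-style ranking: karma raises a post, age steadily sinks it.
    The gravity/offset constants here are illustrative, not LessWrong's actual values."""
    return karma / (age_hours + offset) ** gravity

posts = [("fresh post, modest karma", 20, 3), ("month-old classic, high karma", 400, 24 * 30)]
for title, karma, age in posts:
    print(title, round(hn_style_score(karma, age), 4))
# The month-old classic scores far lower despite 20x the karma, illustrating why
# recent content dominates the frontpage under this scheme.
```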
    But I think the cost of that simplicity has become too high, especially as older content makes up a larger and larger fraction of the best content on the site, and people have been becoming ever more specialized in the research and articles they publish on the site.
    So we are experimenting with changing things up. I don't know whether these experiments will ultimately replace the Hacker News algorithm, but as the central attention allocation mechanism on the site, it definitely seems worth trying out and iterating on. We'll be trying out a bunch of things from reinforcement-learning based personalized algorithms, to classical collaborative filtering algorithms to a bunch of handcrafted heuristics that we'll iterate on ourselves.
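    As one illustration of the "classical collaborative filtering" option mentioned above, here is a toy item-based sketch; the vote matrix and all numbers are invented for the example, and this is not a description of the actual LessWrong or Recombee system.

```python
# Toy item-based collaborative filtering: recommend posts similar to ones a user
# has upvoted, using cosine similarity over an illustrative user x post vote matrix.
import numpy as np

votes = np.array([          # rows = users, cols = posts (1 = upvote, 0 = no vote)
    [1, 1, 0, 0],
    [1, 0, 1, 0],
    [0, 1, 0, 1],
], dtype=float)

# Cosine similarity between posts (columns).
norms = np.linalg.norm(votes, axis=0) + 1e-9
sim = (votes.T @ votes) / np.outer(norms, norms)

user = votes[0]                 # posts user 0 has upvoted
scores = sim @ user             # score posts by similarity to the user's upvotes
scores[user > 0] = -np.inf      # don't re-recommend posts they've already voted on
print("recommend post", int(np.argmax(scores)))
```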
    The Concrete Experiment
    Our first experiment is Recombee, a recommendations SaaS, since spinning up our RL agent pipeline would be a lot of work. We feed it user view and vote history. So far, it seems that it can be really good when it's good, often recommending posts that people are definitely into (and more so than posts in the existing feed).
    Unfortunately it's not reliable across users for some reason and we've struggled to get it to reliably recommend the most important recent content, which is an important use-case we still want to serve.
    Our current goal is to produce a recommendations feed that both makes people feel like they're keeping up to date with what's new (something many people care about) and also suggests great reads from across LessWrong's entire archive.
    The Recommendations tab we just launched has a feed using Recombee recommendations. We're also getting started using Google's Vertex AI offering. A very early test makes it seem possibly better than Recombee. We'll see.
    (Some people on the team want to try throwing relevant user history and available posts into an LLM and seeing what it recommends, though cost might be prohibitive for now.)
    Unless you switch to the "Recommendations" tab, nothing changes for you. "Latest" is the default tab and is using the same old HN algorithm that you are used to. I'll feel like we've succeeded when people switch to "Recommended" and tell us that they prefer it. At that point, we might make "Recommended" the default tab.
    Preventing Bad Outcomes
    I do think there are ways for recommendations to end up being pretty awful. I think many readers have encountered at least one content recommendation algorithm that isn't givi

    • 7 min
    EA - On failing to get EA jobs: My experience and recommendations to EA orgs by Ávila Carmesí


    Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: On failing to get EA jobs: My experience and recommendations to EA orgs, published by Ávila Carmesí on April 23, 2024 on The Effective Altruism Forum.
    This is an anonymous account (Ávila is not a real person). I am posting on this account to avoid potentially negative effects on my future job prospects.
    SUMMARY:
    I've been rejected from 18 jobs or internships, 12 of which are "in EA."
    I briefly spell out my background information and show all my rejections.
    Then, I list some recommendations to EA orgs on how they can (hopefully) improve the hiring process.
    This post probably falls under the category of "it's hard, even for high-achievers, to get an EA job." But there's still the (probably bigger) problem of what there is for "mediocre" EAs to do in a movement that prizes extremely high-achieving individuals.
    If this post improves hiring a little bit at a few EA orgs, I will be a happy person.
    BACKGROUND
    Entry-level EA jobs and internships have been getting very competitive. It is common for current applicants to hear things like "out of 600, we can only take 20" (CHAI), or "only 3% of applicants made it this far" (IAPS) or "It's so competitive it's probably not even worth applying" (GovAI representative). So far, I haven't been accepted to any early-career AI safety opportunities, and I've mostly been rejected in the first round.
    ABOUT ME
    I'll keep this section somewhat vague to protect my anonymity. I'm mostly applying to AI safety-related jobs and internships.
    I am graduating from a top university with honors and a perfect GPA. I have 3 stellar letters of recommendation, 3 research internships in different areas, and part-time work at a research lab; I lead two relevant student clubs and have also worked part-time at 3 other non-research (though still academic) jobs. I can show very high interest and engagement for the programs I am applying to. I've co-authored several conference papers and have done independent research.
    I've done a couple of "cool" things that show potential (but mentioning them here might compromise my anonymity). I've also gotten my resume reviewed by two hiring professionals who said it looked great. Most of this research and leadership experience is very relevant to the jobs I am applying to.
    One potentially big thing working against me is that I'm neither a CS nor public policy/IR person (or something super policy-relevant like that).
    JOBS/INTERNSHIPS/FUNDING I'VE APPLIED TO
    Rejections
    Horizon Junior Fellowship - Rejected on round 3/4
    GovAI summer fellowship - Rejected first round
    ERA>Krueger Lab - Rejected first round
    fp21 internship - Never heard back
    BERI (full-time job) - Rejected first round
    MIT FutureTech (part-time) - Job filled before interview
    PIBBSS Fellowship - Rejected first round
    Berkeley Risk and Security Lab - Never heard back
    CLR Fellowship - Rejected first round
    ERA Fellowship - Rejected first round
    CHAI Internship - Rejected first round
    UChicago XLab - Rejected first round
    EA LTFF research grant - Rejected
    Open Phil research grant - Rejected
    Acceptances
    None yet!
    Note: I've also applied to jobs that align with my principles but are not at EA orgs. I'm also still applying to jobs, so this is not (yet) a pity party.
    MY EXPECTATIONS
    Although I expected these to be quite competitive, I was surprised to be eliminated during the first round for so many of them. That's because most of these are specifically meant for early-career people and I'd say I have a great resume/credentials/demonstrated skills for an early career person.
    RECOMMENDATIONS TO EA ORGS
    As someone who's spent a lot of time doing EA org applications, below are some tentative thoughts on how to (probably) improve them. Please let me know what you think in the comments.
    Increase the required time-commitment as the application progresses.
    By this I mean, start out with

    • 8 min

Customer Reviews

4.6 out of 5
7 Ratings


Top Podcasts In Education

The Mel Robbins Podcast
Mel Robbins
The Jordan B. Peterson Podcast
Dr. Jordan B. Peterson
TED Talks Daily
TED
The Rich Roll Podcast
Rich Roll
Do The Work
Do The Work
Mick Unplugged
Mick Hunt

You Might Also Like

Dwarkesh Podcast
Dwarkesh Patel
Conversations with Tyler
Mercatus Center at George Mason University
Making Sense with Sam Harris
Sam Harris
Lex Fridman Podcast
Lex Fridman
LessWrong (30+ Karma)
LessWrong
The Ezra Klein Show
New York Times Opinion