The Nonlinear Library
The Nonlinear Fund

    • Education
    • 4.6 • 7 Ratings
    • 1,999 episodes

The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org

    EA - Animals in Cost-Benefit Analysis by Vasco Grilo

    Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Animals in Cost-Benefit Analysis, published by Vasco Grilo on April 25, 2024 on The Effective Altruism Forum.
    This is a linkpost for Animals in Cost-Benefit Analysis by Andrew Stawasz. The article is forthcoming in the University of Michigan Journal of Law Reform.
    Abstract
    Federal agencies' cost-benefit analyses do not capture nonhuman animals' ("animals'") interests. This omission matters. Cost-benefit analysis drives many regulatory decisions that substantially affect many billions of animals. That omission creates a regulatory blind spot that is untenable as a matter of morality and of policy.
    This Article advances two claims related to valuing animals in cost-benefit analyses. The Weak Claim argues that agencies typically may do so. No legal prohibitions usually exist, and such valuation is within agencies' legitimate discretion. The Strong Claim argues that agencies often must do so if a policy would substantially affect animals. Cost-benefit analysis is concerned with improving welfare, and no argument for entirely omitting animals' welfare holds water.
    Agencies have several options to implement this vision. These options include, most preferably, human-derived valuations (albeit in limited circumstances), interspecies comparisons, direct estimates of animals' preferences, and, at a minimum, breakeven analysis. Agencies could deal with uncertainty by conducting sensitivity analyses or combining methods.
    For any method, agencies should consider what happens when a policy would save animals from some bad outcomes and what form a mandate to value animals should take.
    Valuing animals could have mattered for many cost-benefit analyses, including those for pet-food safety regulations and a rear backup camera mandate. As a sort of "proof of concept," this Article shows that even a simple breakeven analysis from affected animals' perspective paints even the thoroughly investigated policy decision at issue in Entergy Corp. v. Riverkeeper, Inc. in an informative new light.
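    Since the Article treats breakeven analysis as the minimal method, a toy numerical sketch may help: a breakeven analysis asks how much an affected animal's interest would have to be worth for a rule's benefits to cover its costs. All numbers below are hypothetical illustrations, not figures from the Article.

```python
# Toy breakeven analysis (hypothetical numbers, not from the Article).
# Question: at what per-animal value do the rule's benefits cover its costs?
net_human_cost = 50_000_000.0  # annual cost of the rule net of human benefits, in $
animals_helped = 25_000_000    # animals spared the bad outcome per year

breakeven_value = net_human_cost / animals_helped
print(f"The rule breaks even if sparing one animal is worth at least ${breakeven_value:.2f}")
# -> $2.00 per animal; valuations above that imply the rule passes cost-benefit.
```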
    Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

    • 2 min
    LW - WSJ: Inside Amazon's Secret Operation to Gather Intel on Rivals by trevor

    Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: WSJ: Inside Amazon's Secret Operation to Gather Intel on Rivals, published by trevor on April 25, 2024 on LessWrong.
    The operation, called Big River Services International, sells around $1 million a year of goods through e-commerce marketplaces including eBay, Shopify, Walmart and Amazon.com under brand names such as Rapid Cascade and Svea Bliss. "We are entrepreneurs, thinkers, marketers and creators," Big River says on its website. "We have a passion for customers and aren't afraid to experiment."
    What the website doesn't say is that Big River is an arm of Amazon that surreptitiously gathers intelligence on the tech giant's competitors.
    Born out of a 2015 plan code named "Project Curiosity," Big River uses its sales across multiple countries to obtain pricing data, logistics information and other details about rival e-commerce marketplaces, logistics operations and payments services, according to people familiar with Big River and corporate documents viewed by The Wall Street Journal. The team then shared that information with Amazon to incorporate into decisions about its own business.
    ...
    The story of Big River offers new insight into Amazon's elaborate efforts to stay ahead of rivals. Team members attended their rivals' seller conferences and met with competitors identifying themselves only as employees of Big River Services, instead of disclosing that they worked for Amazon.
    They were given non-Amazon email addresses to use externally - in emails with people at Amazon, they used Amazon email addresses - and took other extraordinary measures to keep the project secret. They disseminated their reports to Amazon executives using printed, numbered copies rather than email. Those who worked on the project weren't even supposed to discuss the relationship internally with most teams at Amazon.
    An internal crisis-management paper gave advice on what to say if discovered. The response to questions should be: "We make a variety of products available to customers through a number of subsidiaries and online channels." In conversations, in the event of a leak they were told to focus on the group being formed to improve the seller experience on Amazon, and say that such research is normal, according to people familiar with the discussions.
    Senior Amazon executives, including Doug Herrington, Amazon's current CEO of Worldwide Amazon Stores, were regularly briefed on the Project Curiosity team's work, according to one of the people familiar with Big River.
    ...
    Virtually all companies research their competitors, reading public documents for information, buying their products or shopping their stores. Lawyers say there is a difference between such corporate intelligence gathering of publicly available information, and what is known as corporate or industrial espionage.
    Companies can get into legal trouble for actions such as hiring a rival's former employee to obtain trade secrets or hacking a rival. Misrepresenting themselves to competitors to gain proprietary information can lead to suits on trade secret misappropriation, said Elizabeth Rowe, a professor at the University of Virginia School of Law who specializes in trade secret law.
    ...
    The benchmarking team pitched "Project Curiosity" to senior management and got the approval to buy inventory, use a shell company and find warehouses in the U.S., Germany, England, India and Japan so they could pose as sellers on competitors' websites.
    ...
    Once launched, the focus of the project quickly started shifting to gathering information about rivals, the people said.
    ...
    The team presented its findings from being part of the FedEx program to senior Amazon logistics leaders. They used the code name "OnTime Inc." to refer to FedEx. Amazon made changes to its Fulfillment by Amazon service to

    • 8 min
    LW - "Why I Write" by George Orwell (1946) by Arjun Panickssery

    LW - "Why I Write" by George Orwell (1946) by Arjun Panickssery

    Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: "Why I Write" by George Orwell (1946), published by Arjun Panickssery on April 25, 2024 on LessWrong.
    People have been posting great essays so that they're "fed through the standard LessWrong algorithm." This essay is in the public domain in the UK but not the US.
    From a very early age, perhaps the age of five or six, I knew that when I grew up I should be a writer. Between the ages of about seventeen and twenty-four I tried to abandon this idea, but I did so with the consciousness that I was outraging my true nature and that sooner or later I should have to settle down and write books.
    I was the middle child of three, but there was a gap of five years on either side, and I barely saw my father before I was eight. For this and other reasons I was somewhat lonely, and I soon developed disagreeable mannerisms which made me unpopular throughout my schooldays. I had the lonely child's habit of making up stories and holding conversations with imaginary persons, and I think from the very start my literary ambitions were mixed up with the feeling of being isolated and undervalued.
    I knew that I had a facility with words and a power of facing unpleasant facts, and I felt that this created a sort of private world in which I could get my own back for my failure in everyday life. Nevertheless the volume of serious - i.e. seriously intended - writing which I produced all through my childhood and boyhood would not amount to half a dozen pages. I wrote my first poem at the age of four or five, my mother taking it down to dictation.
    I cannot remember anything about it except that it was about a tiger and the tiger had 'chair-like teeth' - a good enough phrase, but I fancy the poem was a plagiarism of Blake's 'Tiger, Tiger'. At eleven, when the war of 1914-18 broke out, I wrote a patriotic poem which was printed in the local newspaper, as was another, two years later, on the death of Kitchener. From time to time, when I was a bit older, I wrote bad and usually unfinished 'nature poems' in the Georgian style.
    I also, about twice, attempted a short story which was a ghastly failure. That was the total of the would-be serious work that I actually set down on paper during all those years.
    However, throughout this time I did in a sense engage in literary activities. To begin with there was the made-to-order stuff which I produced quickly, easily and without much pleasure to myself. Apart from school work, I wrote vers d'occasion, semi-comic poems which I could turn out at what now seems to me astonishing speed - at fourteen I wrote a whole rhyming play, in imitation of Aristophanes, in about a week - and helped to edit school magazines, both printed and in manuscript.
    These magazines were the most pitiful burlesque stuff that you could imagine, and I took far less trouble with them than I now would with the cheapest journalism. But side by side with all this, for fifteen years or more, I was carrying out a literary exercise of a quite different kind: this was the making up of a continuous "story" about myself, a sort of diary existing only in the mind. I believe this is a common habit of children and adolescents.
    As a very small child I used to imagine that I was, say, Robin Hood, and picture myself as the hero of thrilling adventures, but quite soon my "story" ceased to be narcissistic in a crude way and became more and more a mere description of what I was doing and the things I saw. For minutes at a time this kind of thing would be running through my head: 'He pushed the door open and entered the room.
    A yellow beam of sunlight, filtering through the muslin curtains, slanted on to the table, where a matchbox, half-open, lay beside the inkpot. With his right hand in his pocket he moved across to the window. Down in the street a tortoiseshell cat was chasing a dead leaf,' etc., etc. Thi

    • 13 min
    AF - AXRP Episode 29 - Science of Deep Learning with Vikrant Varma by DanielFilan

    Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AXRP Episode 29 - Science of Deep Learning with Vikrant Varma, published by DanielFilan on April 25, 2024 on The AI Alignment Forum.
    In 2022, it was announced that a fairly simple method can be used to extract the true beliefs of a language model on any given topic, without having to actually understand the topic at hand.
    Earlier, in 2021, it was announced that neural networks sometimes 'grok': that is, when training them on certain tasks, they initially memorize their training data (achieving their training goal in a way that doesn't generalize), but then suddenly switch to understanding the 'real' solution in a way that generalizes.
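    For a concrete picture, here is a minimal sketch of the kind of toy experiment where grokking was originally observed: a small network trained on modular addition with strong weight decay, where training accuracy saturates long before test accuracy jumps. The architecture and every hyperparameter here are illustrative assumptions, not the setup from the original announcement or from Vikrant's paper.

```python
# Toy grokking experiment (illustrative, assumed setup): learn (a + b) mod P
# from a small fraction of all pairs, with heavy weight decay.
import torch
import torch.nn as nn

P = 97
pairs = torch.cartesian_prod(torch.arange(P), torch.arange(P))  # all (a, b) pairs
labels = (pairs[:, 0] + pairs[:, 1]) % P

perm = torch.randperm(len(pairs))
n_train = int(0.3 * len(pairs))  # small training set: easy to memorize
train_idx, test_idx = perm[:n_train], perm[n_train:]

class ModAddMLP(nn.Module):
    def __init__(self, p: int, d: int = 128):
        super().__init__()
        self.embed = nn.Embedding(p, d)
        self.net = nn.Sequential(nn.Linear(2 * d, 256), nn.ReLU(), nn.Linear(256, p))

    def forward(self, ab: torch.Tensor) -> torch.Tensor:
        return self.net(self.embed(ab).flatten(1))  # (batch, p) logits

model = ModAddMLP(P)
# Weight decay is the knob the circuit-efficiency explanation emphasizes.
opt = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1.0)
loss_fn = nn.CrossEntropyLoss()

for step in range(20_000):
    opt.zero_grad()
    loss_fn(model(pairs[train_idx]), labels[train_idx]).backward()
    opt.step()
    if step % 1_000 == 0:
        with torch.no_grad():
            tr = (model(pairs[train_idx]).argmax(-1) == labels[train_idx]).float().mean()
            te = (model(pairs[test_idx]).argmax(-1) == labels[test_idx]).float().mean()
        print(f"step {step}: train acc {tr:.2f}, test acc {te:.2f}")
```

    In runs of this kind, train accuracy typically hits 1.0 early while test accuracy stays near chance for many steps before climbing, which is the memorize-then-generalize pattern described above.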
    What's going on with these discoveries? Are they all they're cracked up to be, and if so, how are they working? In this episode, I talk to Vikrant Varma about his research getting to the bottom of these questions.
    Topics we discuss:
    Challenges with unsupervised LLM knowledge discovery, aka contra CCS
    What is CCS?
    Consistent and contrastive features other than model beliefs
    Understanding the banana/shed mystery
    Future CCS-like approaches
    CCS as principal component analysis
    Explaining grokking through circuit efficiency
    Why research science of deep learning?
    Summary of the paper's hypothesis
    What are 'circuits'?
    The role of complexity
    Many kinds of circuits
    How circuits are learned
    Semi-grokking and ungrokking
    Generalizing the results
    Vikrant's research approach
    The DeepMind alignment team
    Follow-up work
    Daniel Filan: Hello, everybody. In this episode I'll be speaking with Vikrant Varma, a research engineer at Google DeepMind, and the technical lead of their sparse autoencoders effort. Today, we'll be talking about his research on problems with contrast-consistent search, and also explaining grokking through circuit efficiency. For links to what we're discussing, you can check the description of this episode and you can read the transcript at axrp.net.
    All right, well, welcome to the podcast.
    Vikrant Varma: Thanks, Daniel. Thanks for having me.
    Challenges with unsupervised LLM knowledge discovery, aka contra CCS
    What is CCS?
    Daniel Filan: Yeah. So first, I'd like to talk about this paper. It is called Challenges with Unsupervised LLM Knowledge Discovery, and the authors are Sebastian Farquhar, you, Zachary Kenton, Johannes Gasteiger, Vladimir Mikulik, and Rohin Shah. This is basically about this thing called CCS. Can you tell us: what does CCS stand for and what is it?
    Vikrant Varma: Yeah, CCS stands for contrast-consistent search. I think to explain what it's about, let me start from a more fundamental problem that we have with advanced AI systems. One of the problems is that when we train AI systems, we're training them to produce outputs that look good to us, and so this is the supervision that we're able to give to the system. We currently don't really have a good idea of how an AI system or how a neural network is computing those outputs.
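    For readers who want the mechanics: CCS fits a probe over hidden states for "contrast pairs" (a statement and its negation), trained with no labels. The sketch below is a minimal reconstruction of the objective from Burns et al.'s published description, not from this episode; the probe shape, the stand-in data, and all training details are illustrative assumptions.

```python
# Sketch of the CCS objective (reconstructed from Burns et al., 2022).
import torch
import torch.nn as nn

def ccs_loss(p_pos: torch.Tensor, p_neg: torch.Tensor) -> torch.Tensor:
    # Consistency: a statement and its negation should get probabilities summing to 1.
    consistency = (p_pos - (1.0 - p_neg)) ** 2
    # Confidence: rule out the degenerate solution p_pos = p_neg = 0.5.
    confidence = torch.minimum(p_pos, p_neg) ** 2
    return (consistency + confidence).mean()

d_model = 512  # hidden size (illustrative)
probe = nn.Sequential(nn.Linear(d_model, 1), nn.Sigmoid())
opt = torch.optim.Adam(probe.parameters(), lr=1e-3)

# In the real method these are (normalized) LLM hidden states for the two
# halves of each contrast pair; random tensors stand in so the sketch runs.
h_pos, h_neg = torch.randn(256, d_model), torch.randn(256, d_model)
for _ in range(1_000):
    opt.zero_grad()
    ccs_loss(probe(h_pos).squeeze(-1), probe(h_neg).squeeze(-1)).backward()
    opt.step()
```

    As the topic list above suggests, the paper's critique concerns what besides the model's beliefs can satisfy an objective like this.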
    And in particular, we're worried about the situation in the future when the amount of supervision we're able to give it causes it to achieve a superhuman level of performance at that task. By looking at the network, we can't know how this is going to behave in a new situation.
    And so the Alignment Research Center put out a report recently about this problem. They named a potential part of this problem as "eliciting latent knowledge". What this means is if your model is, for example, really, really good at figuring out what's going to happen next in a video, as in it's able to predict the next frame of a video really well given a prefix of the video, this must mean that it has some sort of model of what's going on in the world.
    Instead of using the outputs of the model, if you could directly look at what it understands about the world, then potentially, you could use that information in a much safer

    • 1 hr 32 min
    AF - Improving Dictionary Learning with Gated Sparse Autoencoders by Neel Nanda

    Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Improving Dictionary Learning with Gated Sparse Autoencoders, published by Neel Nanda on April 25, 2024 on The AI Alignment Forum.
    Authors: Senthooran Rajamanoharan*, Arthur Conmy*, Lewis Smith, Tom Lieberum, Vikrant Varma, János Kramár, Rohin Shah, Neel Nanda
    A new paper from the Google DeepMind mech interp team: Improving Dictionary Learning with Gated Sparse Autoencoders!
    Gated SAEs are a new Sparse Autoencoder architecture that seems to be a significant Pareto-improvement over normal SAEs, verified on models up to Gemma 7B. They are now our team's preferred way to train sparse autoencoders, and we'd love to see them adopted by the community! (Or to be convinced that it would be a bad idea for them to be adopted by the community!)
    They achieve similar reconstruction with about half as many firing features, and while being either comparably or more interpretable (confidence interval for the increase is 0%-13%).
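    For a sense of the architecture, here is a rough sketch of the gated encoder as described in the paper: a binary gate decides which features fire, a tied magnitude path decides how strongly, and the L1 sparsity penalty applies to the gate path. Dimensions are illustrative and the paper's auxiliary reconstruction loss term is omitted; treat this as an assumed reading of the paper, not its reference implementation.

```python
# Rough sketch of a Gated SAE (assumed from the paper's description).
import torch
import torch.nn as nn

class GatedSAE(nn.Module):
    def __init__(self, d_model: int, d_sae: int):
        super().__init__()
        self.W_gate = nn.Parameter(torch.randn(d_sae, d_model) * 0.01)
        self.r_mag = nn.Parameter(torch.zeros(d_sae))  # ties W_mag = exp(r_mag) * W_gate
        self.b_gate = nn.Parameter(torch.zeros(d_sae))
        self.b_mag = nn.Parameter(torch.zeros(d_sae))
        self.W_dec = nn.Parameter(torch.randn(d_model, d_sae) * 0.01)
        self.b_dec = nn.Parameter(torch.zeros(d_model))

    def forward(self, x: torch.Tensor):
        x_cent = x - self.b_dec
        pi_gate = x_cent @ self.W_gate.T + self.b_gate          # which features fire
        W_mag = torch.exp(self.r_mag)[:, None] * self.W_gate
        pi_mag = x_cent @ W_mag.T + self.b_mag                  # how strongly they fire
        f = (pi_gate > 0).float() * torch.relu(pi_mag)
        return f @ self.W_dec.T + self.b_dec, f, pi_gate

sae = GatedSAE(d_model=512, d_sae=4096)
x = torch.randn(32, 512)  # stand-in activations
x_hat, f, pi_gate = sae(x)
# Sparsity is penalized on the gate path; the paper also adds an auxiliary
# reconstruction term (omitted here) so the gate weights get useful gradients.
loss = ((x - x_hat) ** 2).sum(-1).mean() + 1e-3 * torch.relu(pi_gate).sum(-1).mean()
```

    Decoupling "does this feature fire" from "how strongly does it fire" is what lets the gated architecture avoid the shrinkage that an L1 penalty induces in standard SAEs.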
    See Sen's Twitter summary, my Twitter summary, and the paper!
    Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

    • 1 min
    EA - Today is World Malaria Day (April 25) by tobytrem

    Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Today is World Malaria Day (April 25), published by tobytrem on April 25, 2024 on The Effective Altruism Forum.
    Malaria is massive. Our World in Data writes: "Over half a million people died from the disease each year in the 2010s. Most were children, and the disease is one of the leading causes of child mortality." Or, as Rob Mather, CEO of the Against Malaria Foundation (AMF) phrases it: the equivalent of seven jumbo jets full of children die of malaria each day.
    But I don't see malaria in the news that much. This is partly because it was eradicated from Western countries over the course of the 20th century, both because of intentional interventions such as insecticide, and because of the draining of swamp lands and building of better housing. But it's also because malaria is a slow catastrophe, like poverty, and climate change.
    We've dealt with it to varying degrees throughout history, and though it is an emergency to anyone affected by it, to the rest of us, it's a tropical disease which has been around forever. It can be hard to generate urgency when a problem has existed for so long.
    But there is a lot that we can do. Highly effective charities work on malaria; the Against Malaria Foundation (AMF) distributes insecticide-treated bed-nets, and a Malaria Consortium program offers seasonal malaria chemoprevention treatment - both are GiveWell Top Charities. Two malaria vaccines, RTS,S and the cheaper R21[1], have been developed in recent years[2]. Malaria is preventable.
    Though malaria control and eradication is funded by international bodies such as The Global Fund, there isn't nearly enough money being spent on it. AMF has an immediate funding gap of $185.78m. That's money for nets they know are needed. And though vaccines are being rolled out, progress has been slower than it could be, and the agencies distributing them have been criticised for lacking urgency.
    Malaria is important, malaria is neglected, malaria is tractable.
    If you want to do something about malaria today, consider donating to GiveWell's recommendations: AMF, or the Malaria Consortium.
    Related links I recommend
    Why we didn't get a malaria vaccine sooner; an article in Works in Progress.
    WHO's World Malaria Day 2024 announcement.
    The Our World in Data page on malaria.
    Audio AMA, with Rob Mather, CEO of AMF (transcript).
    From SoGive, an EA Forum discussion of the cost-effectiveness of malaria vaccines, with cameos from 1DaySooner and GiveWell.
    For more info, see GiveWell's page on malaria vaccines.
    The story of Tu Youyou, a researcher who helped develop an anti-malarial drug in Mao's China.
    What is an Emergency? The Case for Rapid Malaria Vaccination, from Marginal Revolution.
    More content on the Forum's Malaria tag.
    [1] R21 offers up to 75% reduction of symptomatic malaria cases when delivered at the right schedule.
    [2] Supported by Open Philanthropy and GiveWell.
    Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

    • 3 min

Customer Reviews

4.6 out of 5
7 Ratings

You Might Also Like

Dwarkesh Podcast
Dwarkesh Patel
Last Week in AI
Skynet Today
Making Sense with Sam Harris
Sam Harris
Conversations with Tyler
Mercatus Center at George Mason University
The Ezra Klein Show
New York Times Opinion
Lex Fridman Podcast
Lex Fridman