The Nonlinear Library: LessWrong Top Posts, by The Nonlinear Fund
493 episodes

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.


    Eight Short Studies On Excuses by Scott Alexander

    This is: Eight Short Studies On Excuses, published by Scott Alexander on LessWrong.
    The Clumsy Game-Player
    You and a partner are playing an Iterated Prisoner's Dilemma. Both of you have publicly pre-committed to the tit-for-tat strategy. By iteration 5, you're going happily along, raking up the bonuses of cooperation, when your partner unexpectedly presses the "defect" button.
    "Uh, sorry," says your partner. "My finger slipped."
    "I still have to punish you just in case," you say. "I'm going to defect next turn, and we'll see how you like it."
    "Well," said your partner, "knowing that, I guess I'll defect next turn too, and we'll both lose out. But hey, it was just a slipped finger. By not trusting me, you're costing us both the benefits of one turn of cooperation."
    "True", you respond "but if I don't do it, you'll feel free to defect whenever you feel like it, using the 'finger slipped' excuse."
    "How about this?" proposes your partner. "I promise to take extra care that my finger won't slip again. You promise that if my finger does slip again, you will punish me terribly, defecting for a bunch of turns. That way, we trust each other again, and we can still get the benefits of cooperation next turn."
    You don't believe that your partner's finger really slipped, not for an instant. But the plan still seems like a good one. You accept the deal, and you continue cooperating until the experimenter ends the game.
    After the game, you wonder what went wrong, and whether you could have played better. You decide that there was no better way to deal with your partner's "finger-slip" - after all, the plan you enacted gave you maximum possible utility under the circumstances. But you wish that you'd pre-committed, at the beginning, to saying "and I will punish finger slips equally to deliberate defections, so make sure you're careful."
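    As a minimal sketch of the dynamic above (not from the original post; the payoff numbers and slip probability are illustrative assumptions), here is what both players running tit-for-tat looks like in Python when a slipped finger is punished exactly like a deliberate defection, i.e. when tit-for-tat is applied without exceptions:

        import random

        # Illustrative one-round payoffs (assumed, not from the post):
        # (my move, partner's move) -> my payoff.
        PAYOFFS = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

        def play(rounds=10, slip_prob=0.1, seed=0):
            """Both players intend tit-for-tat; the partner's finger sometimes slips.
            Tit-for-tat applied without exceptions already punishes a slip with one
            defection on the following turn, which is the pre-commitment wished for above."""
            rng = random.Random(seed)
            my_last, partner_last = "C", "C"
            total = 0
            for _ in range(rounds):
                my_move = partner_last                                        # copy partner's last move
                intended = my_last                                            # partner copies my last move
                partner_move = "D" if rng.random() < slip_prob else intended  # unless a finger slips
                total += PAYOFFS[(my_move, partner_move)]
                my_last, partner_last = my_move, partner_move
            return total

        print(play())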
    The Lazy Student
    You are a perfectly utilitarian school teacher, who attaches exactly the same weight to others' welfare as to your own. You have to have the reports of all fifty students in your class ready by the time midterm grades go out on January 1st. You don't want to have to work during Christmas vacation, so you set a deadline that all reports must be in by December 15th or you won't grade them and the students will fail the class. Oh, and your class is Economics 101, and as part of a class project all your students have to behave as selfish utility-maximizing agents for the year.
    It costs your students 0 utility to turn in the report on time, but they gain +1 utility by turning it in late (they enjoy procrastinating). It costs you 0 utility to grade a report turned in before December 15th, but -30 utility to grade one after December 15th. And students get 0 utility from having their reports graded on time, but get -100 utility from having a report marked incomplete and failing the class.
    If you say "There's no penalty for turning in your report after deadline," then the students will procrastinate and turn in their reports late, for a total of +50 utility (1 per student times fifty students). You will have to grade all fifty reports during Christmas break, for a total of - 1500 utility (-30 per report times fifty reports). Total utility is -1450.
    So instead you say "If you don't turn in your report on time, I won't grade it." All students calculate the cost of being late, which is +1 utility from procrastinating and -100 from failing the class, and turn in their reports on time. You get all reports graded before Christmas, no students fail the class, and total utility loss is zero. Yay!
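    The arithmetic of the two policies is easy to verify; here is a short Python sketch (not from the post) using the utilities given above:

        N_STUDENTS = 50
        PROCRASTINATE_GAIN = 1    # a student gains +1 utility by turning the report in late
        LATE_GRADING_COST = -30   # the teacher loses 30 utility per report graded after Dec 15
        FAIL_COST = -100          # a student loses 100 utility if the report is marked incomplete

        # "No penalty" policy: every student procrastinates, every report is graded late.
        no_penalty = N_STUDENTS * (PROCRASTINATE_GAIN + LATE_GRADING_COST)

        # "I won't grade late reports" policy: since FAIL_COST far outweighs PROCRASTINATE_GAIN,
        # every student turns the report in on time and no one incurs any cost.
        strict_deadline = 0

        print(no_penalty, strict_deadline)   # -1450 0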
    Or else - one student comes to you the day after deadline and says "Sorry, I was really tired yesterday, so I really didn't want to come all the way here to hand in my report. I expect you'll grade my report anyway, because I know you to be a perfect utilitarian, an

    • 15 min

    Making Vaccine by johnswentworth

    This is: Making Vaccine, published by johnswentworth on LessWrong.
    Back in December, I asked how hard it would be to make a vaccine for oneself. Several people pointed to radvac. It was a best-case scenario: an open-source vaccine design, made for self-experimenters, dead simple to make with readily-available materials, well-explained reasoning about the design, and with the name of one of the world’s more competent biologists (who I already knew of beforehand) stamped on the whitepaper. My girlfriend and I made a batch a week ago and took our first booster yesterday.
    This post talks a bit about the process, a bit about our plan, and a bit about motivations. Bear in mind that we may have made mistakes - if something seems off, leave a comment.
    The Process
    All of the materials and equipment to make the vaccine cost us about $1000. We did not need any special licenses or anything like that. I do have a little wetlab experience from my undergrad days, but the skills required were pretty minimal.
    One vial of custom peptide - that little pile of white powder at the bottom.
    The large majority of the cost (about $850) was the peptides. These are the main active ingredients of the vaccine: short segments of proteins from the COVID virus. They’re all 25 amino acids, so far too small to have any likely function as proteins (for comparison, COVID’s spike protein has 1273 amino acids). They’re just meant to be recognized by the immune system: the immune system learns to recognize these sequences, and that’s what provides immunity.
    Each of six peptides came in two vials of 4.5 mg each. These are the half we haven't dissolved; we keep them in the freezer as backups.
    The peptides were custom synthesized. There are companies which synthesize any (short) peptide sequence you want - you can find dozens of them online. The cheapest options suffice for the vaccine - the peptides don’t need to be “purified” (this just means removing partial sequences), they don’t need any special modifications, and very small amounts suffice. The minimum order size from the company we used would have been sufficient for around 250 doses. We bought twice that much (9 mg of each peptide), because it only cost ~$50 extra to get 2x the peptides and extras are nice in case of mistakes.
    The only unusual hiccup was an email about customs restrictions on COVID-related peptides. Apparently the company was not allowed to send us 9 mg in one vial, but could send us two vials of 4.5 mg each for each peptide. This didn’t require any effort on my part, other than saying “yes, two vials is fine, thank you”. Kudos to their customer service for handling it.
    Equipment - stir plate, beakers, microcentrifuge tubes, 10 and 50 mL vials, pipette (0.1-1 mL range), and pipette tips. It's all available on Amazon.
    Other materials - these are sold as supplements. We also need such rare and costly ingredients as vinegar and deionized water. Also all available on Amazon.
    Besides the peptides, all the other materials and equipment were on Amazon, food grade, in quantities far larger than we are ever likely to use. Peptide synthesis and delivery was the slowest; everything else showed up within ~3 days of ordering (it’s Amazon, after all).
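    Read together, those numbers imply a very low marginal cost per dose. A rough back-of-the-envelope sketch in Python (the dose count is an assumption derived from the "enough for around 250 doses per minimum order, bought twice over" figures above, not something the post states here):

        # Figures taken from the text above; the dose estimate is an assumption.
        total_cost = 1000                       # ~$1000 for all materials and equipment
        peptide_cost = 850                      # ~$850 of that was the custom peptides
        doses_per_min_order = 250               # minimum peptide order ~ enough for ~250 doses
        doses_bought = 2 * doses_per_min_order  # they ordered twice the minimum (9 mg of each peptide)

        print(f"peptides as share of total cost: {peptide_cost / total_cost:.0%}")     # 85%
        print(f"approximate upfront cost per dose: ${total_cost / doses_bought:.2f}")  # $2.00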
    The actual preparation process involves three main high-level steps:
    Prepare solutions of each component - basically dissolve everything separately, then stick it in the freezer until it’s needed.
    Circularize two of the peptides. Concretely, this means adding a few grains of activated charcoal to the tube and gently shaking it for three hours. Then, back in the freezer.
    When it’s time for a batch, take everything out of the freezer and mix it together.
    Prepping a batch mostly just involves pipetting things into a beaker on a stir plate, sometimes drop-by-drop.
    Finally, a dose goes

    • 9 min

    The Best Textbooks on Every Subject by lukeprog

    This is: The Best Textbooks on Every Subject, published by lukeprog on LessWrong.
    For years, my self-education was stupid and wasteful. I learned by consuming blog posts, Wikipedia articles, classic texts, podcast episodes, popular books, video lectures, peer-reviewed papers, Teaching Company courses, and Cliff's Notes. How inefficient!
    I've since discovered that textbooks are usually the quickest and best way to learn new material. That's what they are designed to be, after all. Less Wrong has often recommended the "read textbooks!" method. Make progress by accumulation, not random walks.
    But textbooks vary widely in quality. I was forced to read some awful textbooks in college. The ones on American history and sociology were memorably bad, in my case. Other textbooks are exciting, accurate, fair, well-paced, and immediately useful.
    What if we could compile a list of the best textbooks on every subject? That would be extremely useful.
    Let's do it.
    There have been other pages of recommended reading on Less Wrong before (and elsewhere), but this post is unique. Here are the rules:
    Post the title of your favorite textbook on a given subject.
    You must have read at least two other textbooks on that same subject.
    You must briefly name the other books you've read on the subject and explain why you think your chosen textbook is superior to them.
    Rules #2 and #3 are to protect against recommending a bad book that only seems impressive because it's the only book you've read on the subject. Once, a popular author on Less Wrong recommended Bertrand Russell's A History of Western Philosophy to me, but when I noted that it was more polemical and inaccurate than the other major histories of philosophy, he admitted he hadn't really done much other reading in the field, and only liked the book because it was exciting.
    I'll start the list with three of my own recommendations...
    Subject: History of Western Philosophy
    Recommendation: The Great Conversation, 6th edition, by Norman Melchert
    Reason: The most popular history of western philosophy is Bertrand Russell's A History of Western Philosophy, which is exciting but also polemical and inaccurate. More accurate but dry and dull is Frederick Copleston's 11-volume A History of Philosophy. Anthony Kenny's recent 4-volume history, collected into one book as A New History of Western Philosophy, is both exciting and accurate, but perhaps too long (1000 pages) and technical for a first read on the history of philosophy. Melchert's textbook, The Great Conversation, is accurate but also the easiest to read, and has the clearest explanations of the important positions and debates, though of course it has its weaknesses (it spends too many pages on ancient Greek mythology but barely mentions Gottlob Frege, the father of analytic philosophy and of the philosophy of language). Melchert's history is also the only one to seriously cover the dominant mode of Anglophone philosophy done today: naturalism (what Melchert calls "physical realism"). Be sure to get the 6th edition, which has major improvements over the 5th edition.
    Subject: Cognitive Science
    Recommendation: Cognitive Science, by Jose Luis Bermudez
    Reason: Jose Luis Bermudez's Cognitive Science: An Introduction to the Science of Mind does an excellent job setting the historical and conceptual context for cognitive science, and draws fairly from all the fields involved in this heavily interdisciplinary science. Bermudez does a good job of making himself invisible, and the explanations here are some of the clearest available. In contrast, Paul Thagard's Mind: Introduction to Cognitive Science skips the context and jumps right into a systematic comparison (by explanatory merit) of the leading theories of mental representation: logic, rules, concepts, analogies, images, and neural networks. The book is o

    • 15 min

    Preface by Eliezer Yudkowsky

    This is: Preface, published by Eliezer Yudkowsky on LessWrong.
    You hold in your hands a compilation of two years of daily blog posts. In retrospect, I look back on that project and see a large number of things I did completely wrong. I’m fine with that. Looking back and not seeing a huge number of things I did wrong would mean that neither my writing nor my understanding had improved since 2009. Oops is the sound we make when we improve our beliefs and strategies; so to look back at a time and not see anything you did wrong means that you haven’t learned anything or changed your mind since then.
    It was a mistake that I didn’t write my two years of blog posts with the intention of helping people do better in their everyday lives. I wrote it with the intention of helping people solve big, difficult, important problems, and I chose impressive-sounding, abstract problems as my examples.
    In retrospect, this was the second-largest mistake in my approach. It ties in to the first-largest mistake in my writing, which was that I didn’t realize that the big problem in learning this valuable way of thinking was figuring out how to practice it, not knowing the theory. I didn’t realize that part was the priority; and regarding this I can only say “Oops” and “Duh.”
    Yes, sometimes those big issues really are big and really are important; but that doesn’t change the basic truth that to master skills you need to practice them and it’s harder to practice on things that are further away. (Today the Center for Applied Rationality is working on repairing this huge mistake of mine in a more systematic fashion.)
    A third huge mistake I made was to focus too much on rational belief, too little on rational action.
    The fourth-largest mistake I made was that I should have better organized the content I was presenting in the sequences. In particular, I should have created a wiki much earlier, and made it easier to read the posts in sequence.
    That mistake at least is correctable. In the present work Rob Bensinger has reordered the posts and reorganized them as much as he can without trying to rewrite all the actual material (though he’s rewritten a bit of it).
    My fifth huge mistake was that I—as I saw it—tried to speak plainly about the stupidity of what appeared to me to be stupid ideas. I did try to avoid the fallacy known as Bulverism, which is where you open your discussion by talking about how stupid people are for believing something; I would always discuss the issue first, and only afterwards say, “And so this is stupid.” But in 2009 it was an open question in my mind whether it might be important to have some people around who expressed contempt for homeopathy. I thought, and still do think, that there is an unfortunate problem wherein treating ideas courteously is processed by many people on some level as “Nothing bad will happen to me if I say I believe this; I won’t lose status if I say I believe in homeopathy,” and that derisive laughter by comedians can help people wake up from the dream.
    Today I would write more courteously, I think. The discourtesy did serve a function, and I think there were people who were helped by reading it; but I now take more seriously the risk of building communities where the normal and expected reaction to low-status outsider views is open mockery and contempt.
    Despite my mistake, I am happy to say that my readership has so far been amazingly good about not using my rhetoric as an excuse to bully or belittle others. (I want to single out Scott Alexander in particular here, who is a nicer person than I am and an increasingly amazing writer on these topics, and may deserve part of the credit for making the culture of Less Wrong a healthy one.)
    To be able to look backwards and say that you’ve “failed” implies that you had goals

    • 5 min

    Rationalism before the Sequences by Eric Raymond

    This is: Rationalism before the Sequences, published by Eric Raymond on LessWrong.
    I'm here to tell you a story about what it was like to be a rationalist decades before the Sequences and the formation of the modern rationalist community. It is not the only story that could be told, but it is one that runs parallel to and has important connections to Eliezer Yudkowsky's and how his ideas developed.
    My goal in writing this essay is to give the LW community a sense of the prehistory of their movement. It is not intended to be "where Eliezer got his ideas"; that would be stupidly reductive. I aim more to exhibit where the drive and spirit of the Yudkowskian reform came from, and the interesting ways in which Eliezer's formative experiences were not unique.
    My standing to write this essay begins with the fact that I am roughly 20 years older than Eliezer and read many of his sources before he was old enough to read. I was acquainted with him over an email list before he wrote the Sequences, though I somehow managed to forget those interactions afterwards and only rediscovered them while researching for this essay. In 2005 he had even sent me a book manuscript to review that covered some of the Sequences topics.
    My reaction on reading "The Twelve Virtues of Rationality" a few years later was dual. It was a different kind of writing than the book manuscript - stronger, more individual, taking some serious risks. On the one hand, I was deeply impressed by its clarity and courage. On the other hand, much of it seemed very familiar, full of hints and callbacks and allusions to books I knew very well.
    Today it is probably more difficult to back-read Eliezer's sources than it was in 2006, because the body of more recent work within his reformation of rationalism tends to get in the way. I'm going to attempt to draw aside that veil by talking about four specific topics: General Semantics, analytic philosophy, science fiction, and Zen Buddhism.
    Before I get to those specifics, I want to try to convey that sense of what it was like. I was a bright geeky kid in the 1960s and 1970s, immersed in a lot of obscure topics often with an implicit common theme: intelligence can save us! Learning how to think more clearly can make us better! But at the beginning I was groping as if in a dense fog, unclear about how to turn that belief into actionable advice.
    Sometimes I would get a flash of light through the fog, or at least a sense that there were other people on the same lonely quest. A bit of that sense sometimes drifted over USENET, an early precursor of today's Internet fora. More often than not, though, the clue would be fictional; somebody's imagination about what it would be like to increase intelligence, to burn away error and think more clearly.
    When I found non-fiction sources on rationality and intelligence increase I devoured them. Alas, most were useless junk. But in a few places I found gold. Not by coincidence, the places I found real value were sources Eliezer would later draw on. I'm not guessing about this, I was able to confirm it first from Eliezer's explicit reports of what influenced him and then via an email conversation.
    Eliezer and I were not unique. We know directly of a few others with experiences like ours. There were likely dozens of others we didn't know - possibly hundreds - on parallel paths, all hungrily seeking clarity of thought, all finding largely overlapping subsets of clues and techniques because there simply wasn't that much out there to be mined.
    One piece of evidence for this parallelism besides Eliezer's reports is that I bounced a draft of this essay off Nancy Lebovitz, a former LW moderator who I've known personally since the 1970s. Her instant reaction? "Full of stuff I knew already."
    Around the time Nancy and I first met, some years before Eliezer Yudk

    • 18 min

    Schelling fences on slippery slopes by Scott Alexander

    This is: Schelling fences on slippery slopes, published by Scott Alexander on LessWrong.
    Slippery slopes are themselves a slippery concept. Imagine trying to explain them to an alien:
    "Well, we right-thinking people are quite sure that the Holocaust happened, so banning Holocaust denial would shut up some crackpots and improve the discourse. But it's one step on the road to things like banning unpopular political positions or religions, and we right-thinking people oppose that, so we won't ban Holocaust denial."
    And the alien might well respond: "But you could just ban Holocaust denial, but not ban unpopular political positions or religions. Then you right-thinking people get the thing you want, but not the thing you don't want."
    This post is about some of the replies you might give the alien.
    Abandoning the Power of Choice
    This is the boring one without any philosophical insight that gets mentioned only for completeness' sake. In this reply, giving up a certain point risks losing the ability to decide whether or not to give up other points.
    For example, if people gave up the right to privacy and allowed the government to monitor all phone calls, online communications, and public places, then if someone launched a military coup, it would be very difficult to resist them because there would be no way to secretly organize a rebellion. This is also brought up in arguments about gun control a lot.
    I'm not sure this is properly thought of as a slippery slope argument at all. It seems to be a more straightforward "Don't give up useful tools for fighting tyranny" argument.
    The Legend of Murder-Gandhi
    Previously on Less Wrong's The Adventures of Murder-Gandhi: Gandhi is offered a pill that will turn him into an unstoppable murderer. He refuses to take it, because in his current incarnation as a pacifist, he doesn't want others to die, and he knows that would be a consequence of taking the pill. Even if we offered him $1 million to take the pill, his abhorrence of violence would lead him to refuse.
    But suppose we offered Gandhi $1 million to take a different pill: one which would decrease his reluctance to murder by 1%. This sounds like a pretty good deal. Even a person with 1% less reluctance to murder than Gandhi is still pretty pacifist and not likely to go killing anybody. And he could donate the money to his favorite charity and perhaps save some lives. Gandhi accepts the offer.
    Now we iterate the process: every time Gandhi takes the 1%-more-likely-to-murder-pill, we offer him another $1 million to take the same pill again.
    Maybe original Gandhi, upon sober contemplation, would decide to accept $5 million to become 5% less reluctant to murder. Maybe 95% of his original pacifism is the only level at which he can be absolutely sure that he will still pursue his pacifist ideals.
    Unfortunately, original Gandhi isn't the one making the choice of whether or not to take the 6th pill. 95%-Gandhi is. And 95%-Gandhi doesn't care quite as much about pacifism as original Gandhi did. He still doesn't want to become a murderer, but it wouldn't be a disaster if he were just 90% as reluctant as original Gandhi, that stuck-up goody-goody.
    What if there were a general principle that each Gandhi was comfortable with Gandhis 5% more murderous than himself, but no more? Original Gandhi would start taking the pills, hoping to get down to 95%, but 95%-Gandhi would start taking five more, hoping to get down to 90%, and so on until he's rampaging through the streets of Delhi, killing everything in sight.
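    That cascade is easy to simulate. Here is a short Python sketch (not from the post) of the rule just described, where each Gandhi will let his reluctance slide at most five percentage points below wherever it currently stands, and each pill shaves off one point:

        def murder_gandhi_cascade(start=100, tolerance=5, step=1):
            """Each Gandhi at reluctance level r is comfortable sliding down to r - tolerance,
            but the Gandhi who arrives there applies the same rule again from his new level."""
            reluctance = start
            pills = 0
            while reluctance > 0:
                floor = max(reluctance - tolerance, 0)   # how far this Gandhi will let himself go
                while reluctance > floor:
                    reluctance -= step                   # take a pill (and pocket $1 million)
                    pills += 1
            return pills

        print(murder_gandhi_cascade())   # 100 -- the reluctance ratchets all the way to zero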
    Now we're tempted to say Gandhi shouldn't even take the first pill. But this also seems odd. Are we really saying Gandhi shouldn't take what's basically a free million dollars to turn himself into 99%-Gandhi, who might well be nearly indistinguishable in his actions from the original?
    Maybe Gandhi's best

    • 9 min
