LessWrong (30+ Karma)

Audio narrations of LessWrong posts.

  1. 15H AGO

    “Genomic emancipation contra eugenics” by TsviBT

    PDF version. berkeleygenomics.org. This is a linkpost for "Genomic emancipation contra eugenics"; a few of the initial sections are reproduced here. Section links may not work.

    Introduction

    Reprogenetics refers to biotechnological tools used to affect the genes of a future child. How can society develop and use reprogenetic technologies in a way that ends up going well? This essay investigates the history and nature of historical eugenic ideologies. I'll extract some lessons about how society can think about reprogenetics differently from the eugenicists, so that we don't trend towards the sort of abuses that were historically justified by eugenics.

    (This essay is written largely as I thought and investigated, except that I wrote the synopsis last. So the ideas are presented approximately in order of development, rather than logically. If you'd like a short thing to read, read the synopsis.)

    Synopsis

    Some technologies are being developed that will make it possible to affect what genes a future child receives. These technologies include polygenic embryo selection, embryo editing, and other more advanced technologies [1]. Regarding these technologies, we ask: Can we decide to not abuse these tools? And: How [...]

    Outline:
    (00:25) Introduction
    (01:12) Synopsis

    The original text contained 3 footnotes which were omitted from this narration.

    First published: February 18th, 2026
    Source: https://www.lesswrong.com/posts/yH9FtLgPJxbimamKg/genomic-emancipation-contra-eugenics
    Narrated by TYPE III AUDIO.

    10 min
  2. 15H AGO

    “Why we should expect ruthless sociopath ASI” by Steven Byrnes

    The conversation begins

    (Fictional)

    Optimist: So you expect future artificial superintelligence (ASI) “by default”, i.e. in the absence of yet-to-be-invented techniques, to be a ruthless sociopath, happy to lie, cheat, and steal, whenever doing so is selfishly beneficial, and with callous indifference to whether anyone (including its own programmers and users) lives or dies?

    Me: Yup! (Alas.)

    Optimist: …Despite all the evidence right in front of our eyes from humans and LLMs.

    Me: Yup!

    Optimist: OK, well, I’m here to tell you: that is a very specific and strange thing to expect, especially in the absence of any concrete evidence whatsoever. There's no reason to expect it. If you think that ruthless sociopathy is the “true core nature of intelligence” or whatever, then you should really look at yourself in a mirror and ask yourself where your life went horribly wrong.

    Me: Hmm, I think the “true core nature of intelligence” is above my pay grade. We should probably just talk about the issue at hand, namely future AI algorithms and their properties. …But I actually agree with you that ruthless sociopathy is a very specific and strange thing for me to expect.

    Optimist: Wait, you—what??

    Me: Yes! Like [...]

    Outline:
    (00:11) The conversation begins
    (03:54) Are people worried about LLMs causing doom?
    (06:23) Positive argument that brain-like RL-agent ASI would be a ruthless sociopath
    (11:28) Circling back LLMs: imitative learning vs ASI

    The original text contained 5 footnotes which were omitted from this narration.

    First published: February 18th, 2026
    Source: https://www.lesswrong.com/posts/ZJZZEuPFKeEdkrRyf/why-we-should-expect-ruthless-sociopath-asi
    Narrated by TYPE III AUDIO.

    Images from the article: Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.

    16 min
  3. 18H AGO

    “Irrationality is Socially Strategic” by Valentine

    It seems to me that the Hamming problem for developing a formidable art of rationality is, what to do about problems that systematically fight being solved. And in particular, how to handle bad reasoning that resists being corrected.

    I propose that each such stubborn problem is nearly always, in practice, part of a solution to some social problem. In other words, having the problem is socially strategic. If this conjecture is right, then rationality must include a process of finding solutions to those underlying social problems that don’t rely on creating and maintaining some second-order problem. Particularly problems that convolute conscious reasoning and truth-seeking.

    The rest of this post will be me fleshing out what I mean, sketching why I think it's true, and proposing some initial steps toward a solution to this Hamming problem.

    Truth-seeking vs. embeddedness

    I’ll assume you’re familiar with Scott & Abram's distinction between Cartesian vs. embedded agency. If not, I suggest reading their post's comic, stopping when it mentions Marcus Hutter and AIXI. (In short: a Cartesian agent is clearly distinguishable from the space in which the problems it's solving exist, whereas an embedded agent is not. Contrast an entomologist studying an ant colony (Cartesian) [...]

    Outline:
    (00:59) Truth-seeking vs. embeddedness
    (02:57) Protected problems
    (06:42) Dissolving protected problems
    (08:11) Develop inner privacy
    (11:54) Look for the social payoff
    (14:51) Change your social incentives
    (18:52) The right social scene would help a lot
    (20:59) Summary

    First published: February 18th, 2026
    Source: https://www.lesswrong.com/posts/tynBnHYiGhyfBbztq/irrationality-is-socially-strategic
    Narrated by TYPE III AUDIO.

    23 min
  4. 1D AGO

    “You’re an AI Expert – Not an Influencer” by Max Winga

    Your hot takes are killing your credibility.

    Prior to my last year at ControlAI, I was a physicist working on technical AI safety research. Like many of those warning about the dangers of AI, I don’t come from a background in public communications, but I’ve quickly learned some important rules. The #1 rule that I’ve seen far too many others in this field break is that You’re an AI Expert - Not an Influencer.

    When communicating to an audience, your persona is one of two broad categories: Influencer or Professional.

    Influencers are individuals who build an audience around themselves as a person. Their currency is popularity and their audience values them for who they are and what they believe, not just what they know.

    Professionals are individuals who appear in the public eye as representatives of their expertise or organization. Their currency is credibility and their audience values them for what they know and what they represent, not who they are.

    So… let's say you’re trying to be a public figure making a difference about AI risk. You’ve been on a podcast or two, maybe even on The News. You might work at an AI policy organization, or [...]

    Outline:
    (00:11) Your hot takes are killing your credibility.
    (02:10) STOP - What Would Media Training Steve do?
    (05:22) Don't Feed Your Enemies
    (07:07) The Luxury of Not Being a Politician
    (09:33) So How Do You Deal With Politics?
    (10:58) Conclusion

    First published: February 17th, 2026
    Source: https://www.lesswrong.com/posts/hCtm7rxeXaWDvrh4j/you-re-an-ai-expert-not-an-influencer
    Narrated by TYPE III AUDIO.

    12 min
  5. 1D AGO

    “The brain is a machine that runs an algorithm” by Steven Byrnes

    Some people say “the brain is a computer”. Other people say “well, the brain is not really a computer, because, like, what's the hardware versus the software?” I agree: “the brain is a computer” is kinda missing the mark. I prefer: “the brain is a machine that runs an algorithm”.

    Here's a mechanical adder: What's “hardware versus software” for a mechanical adder? The question is nonsense. A mechanical adder is not “a computer”, analogous to a MacBook. Rather, it's a machine that runs an algorithm. (Namely, the binary addition algorithm.) And the brain is likewise a machine that runs a (much more complicated) algorithm.

    “A machine??”, you say. Yeah, you heard me. A machine. An extraordinarily complex machine, but a machine all the same. If you could zoom in enough to really see it, it would just be obvious! You should pause here to marvel at some molecular simulations of cell biology in action: DNA replication, more crazy DNA stuff, kinesin, and so on.

    …And what else would you expect? We live in a universe that follows orderly laws of physics.[1] And the laws apply to our bodies and brains just like everything else. After all, scientists have been [...]

    The original text contained 4 footnotes which were omitted from this narration.

    First published: February 17th, 2026
    Source: https://www.lesswrong.com/posts/eKGjwRSQD3BLxmBcu/the-brain-is-a-machine-that-runs-an-algorithm
    Narrated by TYPE III AUDIO.
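    The mechanical adder in this episode "runs the binary addition algorithm". As a minimal sketch of what that algorithm is (not from the post; the function name is hypothetical), here is a ripple-carry adder in Python, with one full-adder stage per bit, exactly the logic a chain of mechanical full-adder stages implements:

```python
def ripple_carry_add(a_bits, b_bits):
    """Add two equal-length bit lists (least-significant bit first),
    the way a mechanical or hardware ripple-carry adder does."""
    carry = 0
    out = []
    for a, b in zip(a_bits, b_bits):
        out.append(a ^ b ^ carry)                 # sum bit of this stage
        carry = (a & b) | (carry & (a ^ b))       # carry into the next stage
    out.append(carry)                             # final carry-out bit
    return out

# 6 + 3 = 9: 6 is [0,1,1] LSB-first, 3 is [1,1,0] LSB-first
print(ripple_carry_add([0, 1, 1], [1, 1, 0]))    # [1, 0, 0, 1], i.e. 9
```

    The same input-output behavior could be realized by gears, relays, or neurons; the algorithm is the invariant thing, which is the post's point.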

    9 min
