LessWrong (30+ Karma)

LessWrong

Audio narrations of LessWrong posts.

  1. 1 hour ago

    “Myopia Mythology” by abramdemski

    It's been a while since I wrote about myopia! My previous posts about myopia were "a little crazy", because it's not this solid well-defined thing; it's a cluster of things which we're trying to form into a research program. This post will be "more crazy".

    The Good/Evil/Good Spectrum

    "Good" means something along the lines of "helpful to all". There is a spectrum from extremely myopic to extremely non-myopic. Arranging all "thinking beings" on that spectrum, I claim that you get Good at both ends, with Evil sitting in-between.

    Deva (divine): At the extreme non-myopic end, you get things like updateless reasoning, acausal trade, multiverse-wide coordination, and so on. With enough of this stuff, agents naturally merge together into a collective which forwards the common good. (Not necessarily human good, but I'll ignore this for the sake of mythology.) All Deva are, at the same time as being their individual selves, aspects of a single Deva.

    Asura (farsighted): Get a little more myopic, and you have merely far-sighted agents, who cooperate in iterated prisoner's dilemmas, but who defect in single-shot cases. These agents will often act Good, but have the capacity to be extremely harmful; their farsighted thinking allows them [...]

    Outline:
    (00:25) The Good/Evil/Good Spectrum
    (03:16) The Cosmic Order

    First published: November 8th, 2025
    Source: https://www.lesswrong.com/posts/J35nY9ZpJoBtHoSNr/myopia-mythology

    Narrated by TYPE III AUDIO.

    6 minutes
  2. 2 hours ago

    “Three Kinds Of Ontological Foundations” by johnswentworth

    Why does a water bottle seem like a natural chunk of physical stuff to think of as “A Thing”, while the left half of the water bottle seems like a less natural chunk of physical stuff to think of as “A Thing”? More abstractly: why do real-world agents favor some ontologies over others? At various stages of rigor, an answer to that question looks like a story, an argument, or a mathematical proof. Regardless of the form, I’ll call such an answer an ontological foundation. Broadly speaking, the ontological foundations I know of fall into three main clusters.

    Translatability Guarantees

    Suppose an agent wants to structure its world model around internal representations which can translate well into other world models. An agent might want translatable representations for two main reasons:

    Language: in order for language to work at all, most words need to point to internal representations which approximately “match” (in some sense) across the two agents communicating.

    Correspondence Principle: it's useful for an agent to structure its world model and goals around representations which will continue to “work” even as the agent learns more and its world model evolves.

    Guarantees of translatability are the type of ontological [...]

    Outline:
    (00:45) Translatability Guarantees
    (02:24) Environment Structure
    (03:41) Mind Structure
    (04:38) All Of The Above?

    First published: November 10th, 2025
    Source: https://www.lesswrong.com/posts/JdwSvrJhHX8XT46Mc/three-kinds-of-ontological-foundations

    Narrated by TYPE III AUDIO.

    5 minutes
  3. 4 hours ago

    “Learning information which is full of spiders” by Screwtape

    This essay contains an examination of handling information which is unpleasant to learn. Also, more references to spiders than most people want. CW: Pictures of spiders.

    1. Litanies and Aspirations

    If the box contains a diamond, I desire to believe that the box contains a diamond;
    If the box does not contain a diamond, I desire to believe that the box does not contain a diamond;
    Let me not become attached to beliefs I may not want.
    -Litany of Tarski

    I read these words when I was around eighteen. They left a strong impression on me. While it's not frequent that I do so aloud, sometimes I respond to learning things have gone wrong by muttering them quietly. I've used them as words of comfort when people are anxiously awaiting important news.

    The litany of Tarski is aspirational for me. It isn't always literally true. If my mother's biopsy contains breast cancer, I really want the biopsy to contain no breast cancer. I am attached to that belief. Of course, the exact text of Tarski's litany is "I desire to believe that the box contains a diamond" not "I desire that the box contained a diamond." Why [...]

    Outline:
    (00:23) 1. Litanies and Aspirations
    (01:44) 2. You've Got Mail Spiders
    (05:34) 3. Merrin and the Spider
    (13:49) 4. Do you desire to believe there is a spider in this essay?

    First published: November 9th, 2025
    Source: https://www.lesswrong.com/posts/fgGzSEsMzfLSm4Tx8/learning-information-which-is-full-of-spiders

    Narrated by TYPE III AUDIO.

    Images from the article: Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.

    16 minutes
  4. 7 hours ago

    [Linkpost] “Book Announcement: The Gentle Romance” by Richard_Ngo

    This is a link post. It's been eight months since I released my last story, so you could be forgiven for thinking that I’d given up on writing fiction. In fact, it's the opposite. I’m excited to announce that I’m releasing my first fiction collection—The Gentle Romance: Stories of AI and Humanity—with Encour Press in mid-December! It contains 22 stories, most of which are revised versions of the best stories I’ve posted online. The thread that connects them is the struggle to hold onto our identities in the face of radical technological change—the same thread that winds through many of our own lives.

    I’ve also written three new stories for the collection, which are some of my favorites:

    Lentando is set in a future inspired by Charles Stross’ masterpiece Accelerando. Through it we follow Liza, a zero-knowledge consultant whose soon-to-be-deleted copies struggle to hold the world together.

    The Biggest Short is the story of two traders who make a fortune buying and selling reputations while struggling to preserve their own.

    Kuhn's Ladder is about a simulated utopia that starts experiencing inexplicable glitches which seem designed to remain hidden.

    You can preorder the book here. Looking forward, it's hard to [...]

    First published: November 10th, 2025
    Source: https://www.lesswrong.com/posts/nmvygxFKfveK9wJ8j/book-announcement-the-gentle-romance
    Linkpost URL: https://www.narrativeark.xyz/p/book-announcement

    Narrated by TYPE III AUDIO.

    Images from the article: Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.

    3 minutes
  5. 17 hours ago

    “Condensation” by abramdemski

    Condensation: a theory of concepts is a model of concept-formation by Sam Eisenstat. Its goals and methods resemble John Wentworth's natural abstractions/natural latents research.[1] Both theories seek to provide a clear picture of how to posit latent variables, such that once someone has understood the theory, they'll say "yep, I see now, that's how latent variables work!". The goal of this post is to popularize Sam's theory and to give my own perspective on it; however, it will not be a full explanation of the math. For technical details, I suggest reading Sam's paper.

    Brief Summary

    Shannon's information theory focuses on the question of how to encode information when you have to encode everything. You get to design the coding scheme, but the information you'll have to encode is unknown (and you have some subjective probability distribution over what it will be). Your objective is to minimize the total expected code-length. Algorithmic information theory similarly focuses on minimizing the total code-length, but it uses a "more objective" distribution (a universal algorithmic distribution), and a fixed coding scheme (some programming language). This allows it to talk about the minimum code-length of specific data (talking about particulars rather than average [...]

    Outline:
    (00:45) Brief Summary
    (02:35) Shannon's Information Theory
    (07:21) Universal Codes
    (11:13) Condensation
    (12:52) Universal Data-Structure?
    (15:30) Well-Organized Notebooks
    (18:18) Random Variables
    (18:54) Givens
    (19:50) Underlying Space
    (20:33) Latents
    (21:21) Contributions
    (21:39) Top
    (22:24) Bottoms
    (22:55) Score
    (24:29) Perfect Condensation
    (25:52) Interpretability Solved?
    (26:38) Condensation isn't as tight an abstraction as information theory.
    (27:40) Condensation isn't a very good model of cognition.
    (29:46) Much work to be done!

    The original text contained 15 footnotes which were omitted from this narration.

    First published: November 9th, 2025
    Source: https://www.lesswrong.com/posts/BstHXPgQyfeNnLjjp/condensation

    Narrated by TYPE III AUDIO.

    (A minimal code sketch of the expected code-length objective described in this summary follows the episode list below.)

    31 minutes
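
The "Brief Summary" of the Condensation episode above describes Shannon's setting: you design a coding scheme against a subjective probability distribution over messages, and your objective is to minimize the expected code length. The sketch below is not taken from the post or from Sam Eisenstat's paper; it is a minimal illustration of that objective under stated assumptions, using standard Huffman coding on a made-up toy distribution and comparing the resulting expected code length to the Shannon entropy lower bound. The function name huffman_code_lengths and the example distribution are hypothetical choices for this sketch.

```python
import heapq
from math import log2

def huffman_code_lengths(probs):
    """Return {symbol: code length in bits} for a Huffman code over `probs`.

    `probs` maps symbols to probabilities. This is a standard construction,
    used here only to illustrate the "minimize expected code length" objective.
    """
    # Heap entries: (subtree probability, tiebreak id, {symbol: depth so far}).
    heap = [(p, i, {s: 0}) for i, (s, p) in enumerate(probs.items())]
    heapq.heapify(heap)
    next_id = len(heap)
    while len(heap) > 1:
        p1, _, d1 = heapq.heappop(heap)
        p2, _, d2 = heapq.heappop(heap)
        # Merging two subtrees pushes every symbol in them one level deeper.
        merged = {s: depth + 1 for s, depth in {**d1, **d2}.items()}
        heapq.heappush(heap, (p1 + p2, next_id, merged))
        next_id += 1
    return heap[0][2]

# A toy subjective distribution over the messages we might have to encode.
probs = {"a": 0.5, "b": 0.25, "c": 0.125, "d": 0.125}
lengths = huffman_code_lengths(probs)

expected_length = sum(probs[s] * lengths[s] for s in probs)  # the quantity being minimized
entropy = -sum(p * log2(p) for p in probs.values())          # Shannon's lower bound

print(lengths)          # {'a': 1, 'b': 2, 'c': 3, 'd': 3} (up to tie-breaking)
print(expected_length)  # 1.75 bits per symbol
print(entropy)          # 1.75 bits; this dyadic distribution is coded optimally
```

For this toy dyadic distribution the optimal expected code length coincides with the entropy (1.75 bits per symbol); in general a Huffman code's expected length falls within one bit of the entropy, which is the sense in which "minimize expected code length" has a clean information-theoretic answer.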
