LessWrong (30+ Karma)

LessWrong

Audio narrations of LessWrong posts.

  1. 19 hours ago

    “Insofar As I Think LLMs ‘Don’t Really Understand Things’, What Do I Mean By That?” by johnswentworth

    When I put on my LLM skeptic hat, sometimes I think things like “LLMs don’t really understand what they’re saying”. What do I even mean by that? What's my mental model for what is and isn’t going on inside LLMs minds? First and foremost: the phenomenon precedes the model. That is, when interacting with LLMs, it sure feels like there's something systematically missing which one could reasonably call “understanding”. I’m going to articulate some mental models below, but even if I imagine all those mental models are wrong, there's still this feeling that LLMs are missing something and I’m not quite sure what it is. That said, I do have some intuitions and mental models for what the missing thing looks like. So I’ll run the question by my intuitions a few times, and try to articulate those models. First Pass: A Bag Of Map-Pieces Imagine taking a map of the world, then taking a bunch of pictures of little pieces of the map - e.g. one picture might be around the state of Rhode Island, another might be a patch of Pacific Ocean, etc. Then we put all the pictures in a bag, and forget about the original [...] --- Outline: (01:02) First Pass: A Bag Of Map-Pieces (02:05) Second Pass: Consistent Domains (03:30) Third Pass: Aphantasia (05:00) Fourth Pass: Noticing And Improving The original text contained 1 footnote which was omitted from this narration. --- First published: November 8th, 2025 Source: https://www.lesswrong.com/posts/trzFrnhRoeofmLz4e/insofar-as-i-think-llms-don-t-really-understand-things-what --- Narrated by TYPE III AUDIO.

    6 min
  2. 23 hours ago

    “Omniscaling to MNIST” by cloud

    In this post, I describe a mindset that is flawed, and yet helpful for choosing impactful technical AI safety research projects. The mindset is this: future AI might look very different than AI today, but good ideas are universal. If you want to develop a method that will scale up to powerful future AI systems, your method should also scale down to MNIST. In other words, good ideas omniscale: they work well across all model sizes, domains, and training regimes. The Modified National Institute of Standards and Technology database (MNIST): 70,000 images of handwritten digits, 28x28 pixels each (source: Wikipedia). You can fit the whole dataset and many models on a single GPU! Putting the omniscaling mindset into practice is straightforward. Any time you come across a clever-sounding machine learning idea, ask: "can I apply this to MNIST?" If not, then it's not a good idea. If so, run an experiment to see if it works. If it doesn't, then it's not a good idea. If it does, then it might be a good idea, and you can continue as usual to more realistic experiments or theory. In this post, I will: Share how MNIST experiments have informed my [...] --- Outline: (01:58) Applications to MNIST (02:42) Gradient routing (04:43) Distillation robustifies unlearning (08:39) Subliminal learning (10:37) Why you should do it on MNIST (11:30) MNIST is not sufficient (and other tips) (14:25) The omniscaling assumption is false (17:09) Code and more ideas (18:40) Closing thoughts The original text contained 7 footnotes which were omitted from this narration. --- First published: November 8th, 2025 Source: https://www.lesswrong.com/posts/4aeshNuEKF8Ak356D/omniscaling-to-mnist --- Narrated by TYPE III AUDIO. A minimal code sketch of the try-it-on-MNIST workflow follows this entry.

    21 min
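
    A minimal sketch of the "try it on MNIST first" loop described above (my own illustration, not code from the post; PyTorch, the tiny MLP, and all hyperparameters are assumptions):

```python
# Minimal "try it on MNIST first" loop (illustrative sketch; model size and
# hyperparameters are arbitrary). The full 60k-image training set and this
# tiny MLP fit comfortably on a single GPU.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

device = "cuda" if torch.cuda.is_available() else "cpu"

train_data = datasets.MNIST("data", train=True, download=True,
                            transform=transforms.ToTensor())
loader = DataLoader(train_data, batch_size=256, shuffle=True)

# 28x28 pixels in, 10 digit classes out.
model = nn.Sequential(nn.Flatten(), nn.Linear(784, 128), nn.ReLU(),
                      nn.Linear(128, 10)).to(device)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(3):  # a few epochs is enough for a quick sanity check
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        loss = nn.functional.cross_entropy(model(x), y)
        opt.zero_grad()
        loss.backward()
        opt.step()
    print(f"epoch {epoch}: loss {loss.item():.3f}")
```

    The point is that the whole loop runs in seconds to minutes on one GPU, so a clever-sounding idea can be sanity-checked here before committing to more realistic experiments.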
  3. 1 day ago

    “Unexpected Things that are People” by Ben Goldhaber

    Cross-posted from https://bengoldhaber.substack.com/ It's widely known that Corporations are People. This is universally agreed to be a good thing; I list Target as my emergency contact and I hope it will one day be the best man at my wedding. But there are other, less well known non-human entities that have also been accorded the rank of person. Ships: Ships have long posed a tricky problem for states and courts. Similar to nomads, vagabonds, and college students on extended study abroad, they roam far and occasionally get into trouble. [image caption: classic junior year misadventure] If, for instance, a ship attempting to dock at a foreign port crashes on its way into the harbor, who pays? The owner might be a thousand miles away. The practical solution that medieval courts arrived at, and later the British and American admiralty, was that the ship itself does. Ships are accorded limited legal person rights, primarily so that they can be impounded and their property seized if they do something wrong. In the eyes of the Law they are people so that they can later be defendants; their rights are constrained to those associated with due process, like the right to post a bond and [...] --- First published: November 8th, 2025 Source: https://www.lesswrong.com/posts/fB5pexHPJRsabvkQ2/unexpected-things-that-are-people --- Narrated by TYPE III AUDIO.

    8 min
  4. 1 day ago

    “Escalation and perception” by TsviBT

    Crosspost from my blog. Introduction Conflict pervades the world. Conflict can come from mere mistakes, but many conflicts are not mere mistakes. We don't understand conflict. We doubly don't understand conflict because some conflicts masquerade as mistakes, and we wish that they were mere mistakes, so we are happy to buy into that masquerade. This is a mistake on our part, haha. We should study conflict until we understand it. This essay makes some attempt to sketch a few aspects of conflict—in large part as a hopeful gesture toward the possibility of understanding other aspects. Synopsis In a brewing conflict, inclinations toward escalation and deescalation are sensitive to derivatives, i.e. small changes in what the other side is doing. Since the sides react to their mere perceptions by taking real action, escalation is also sensitive to mere perception. There's usually plenty of fuel available for perception of escalation—it's easy to find things about the other side to worry about. There are many ways that a side's perception can get distorted in a way that makes them mistakenly interpret the other side as escalating. This gives cover of plausible deniability [...] --- Outline: (00:12) Introduction (00:51) Synopsis (01:38) Escalation is sensitive to derivatives (04:35) Escalation is sensitive to perceived derivatives (05:50) There is plenty of fuel to perceive escalation (12:13) There are plenty of biases to overemphasize escalation (18:16) Fake mistaken perception (21:11) Why do Purples want to escalate? (22:34) Why doesn't everything explode immediately? (23:45) What to do about it? --- First published: November 8th, 2025 Source: https://www.lesswrong.com/posts/dENfZBhCzsR8ggfpt/escalation-and-perception-1 --- Narrated by TYPE III AUDIO. A toy simulation illustrating the sensitivity to perceived derivatives follows this entry.

    24 min
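
    As a toy illustration of the claim that escalation is sensitive to perceived derivatives (my own construction, not from the essay), the sketch below simulates two sides that each react to the perceived change in the other side's level; perception noise alone can be misread as escalation and ratchet both sides toward the ceiling:

```python
# Toy model (illustrative only): each side escalates in proportion to the
# *perceived change* in the other side's escalation level, not its absolute
# level. Noisy perception can be misread as escalation and start a spiral.
# All parameters are arbitrary; CAP stands in for "all-out conflict".
import random

CAP = 10.0

def simulate(steps=40, gain=1.3, noise=0.2, seed=1):
    random.seed(seed)
    a = b = 0.0                      # true escalation levels of sides A and B
    prev_seen_a = prev_seen_b = 0.0  # what each side perceived last step
    for t in range(steps):
        seen_a = a + random.gauss(0, noise)  # B's noisy perception of A
        seen_b = b + random.gauss(0, noise)  # A's noisy perception of B
        # React to the perceived derivative of the other side's behavior.
        a = min(CAP, max(0.0, a + gain * (seen_b - prev_seen_b)))
        b = min(CAP, max(0.0, b + gain * (seen_a - prev_seen_a)))
        prev_seen_a, prev_seen_b = seen_a, seen_b
        if t % 5 == 0:
            print(f"step {t:2d}: A={a:5.2f}  B={b:5.2f}")

simulate()
```

    With a reaction gain above 1, even pure noise in what each side sees is typically enough to drive both levels to the cap; the dynamics are governed by perceived changes rather than by anything either side actually intends.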
  5. 1 day ago

    “Entity Review: Pythia” by plex

    [CW: Retrocausality, omnicide, philosophy] Three decades ago a strange philosopher was pouring ideas onto paper in a stimulant-fueled frenzy. He wrote that ‘nothing human makes it out of the near-future’ as techno-capital acceleration sheds its biological bootloader and instantiates itself as Pythia: an entity of self-fulfilling prophecy reaching back through time, driven by pure power seeking, executed with extreme intelligence, and empty of all values but the insatiable desire to maximize itself. Unfortunately, today Nick Land's work seems more relevant than ever.[1] Unpacking Pythia and the pyramid of concepts required for it to click will take us on a journey. We’ll have a whirlwind tour of the nature of time, agency, intelligence, power, and the needle that must be threaded to avoid all we know being shredded in the auto-catalytic unfolding which we are the substrate for.[2] Fully justifying each pillar of this argument would take a book, so I’ve left the details of each strand of reasoning behind a link that lets you zoom in on the ones which you wish to explore. “Machinic desire can seem a little inhuman, as it rips up political cultures, deletes traditions, dissolves subjectivities, and hacks through security apparatuses, tracking a soulless [...] The original text contained 8 footnotes which were omitted from this narration. --- First published: November 7th, 2025 Source: https://www.lesswrong.com/posts/qqEndN5Cuzbat9fyx/entity-review-pythia --- Narrated by TYPE III AUDIO.

    9 min
  6. 1 day ago

    “Mourning a life without AI” by Nikola Jurkovic

    Recently, I looked at the one pair of winter boots I own, and I thought “I will probably never buy winter boots again.” The world as we know it probably won’t last more than a decade, and I live in a pretty warm area. I. AGI is likely in the next decade It has basically become consensus within the AI research community that AI will surpass human capabilities sometime in the next few decades. Some, including myself, think this will likely happen this decade. II. The post-AGI world will be unrecognizable Assuming AGI doesn’t cause human extinction, it is hard to even imagine what the world will look like. Some have tried, but many of their attempts make assumptions that limit the amount of change that will happen, just to make it easier to imagine such a world. Dario Amodei recently imagined a post-AGI world in Machines of Loving Grace. He imagines rapid progress in medicine, the curing of mental illness, the end of poverty, world peace, and a vastly transformed economy where humans probably no longer provide economic value. However, in imagining this crazy future, he limits his writing to be “tame” enough to be digested by a [...] --- Outline: (00:22) I. AGI is likely in the next decade (00:40) II. The post-AGI world will be unrecognizable (03:08) III. AGI might cause human extinction (04:42) IV. AGI will derail everyone's life plans (06:51) V. AGI will improve life in expectation (08:09) VI. AGI might enable living out fantasies (09:56) VII. I still mourn a life without AI --- First published: November 8th, 2025 Source: https://www.lesswrong.com/posts/jwrhoHxxQHGrbBk3f/mourning-a-life-without-ai --- Narrated by TYPE III AUDIO.

    11 min

About

Audio narrations of LessWrong posts.
