LessWrong (Curated & Popular)

LessWrong

Audio narrations of LessWrong posts. Includes all curated posts and all posts with 125+ karma. If you'd like more, subscribe to the “Lesswrong (30+ karma)” feed.

  1. 2 HR AGO

    "Nullius in Verba" by Aurelia

    Independent verification by the Brain Preservation Foundation and the Survival and Flourishing Fund — the results so far.

    Extraordinary claims require extraordinary evidence. In my previous post, "Less Dead", I said that my company, Nectome, has created a new method for whole-body, whole-brain, human end-of-life preservation for the purpose of future revival. Our protocol is capable of preserving every synapse and every cell in the body with enough detail that current neuroscience says long-term memories are preserved. It's compatible with traditional funerals at room temperature and stable for hundreds of years at cold temperatures. In this post, we'll dive into the evidence for these claims, as well as Nectome's overall approach to cultivating rigorous, independent validation of our methods, a cornerstone of the kind of preservation enterprise I want to be a part of. Getting to the current state of the art required two major developmental milestones: Idealized preservation. A method capable of preserving the nanostructure of the brain for small and large animals under idealized laboratory conditions. Specifically, could we preserve animals well if we were allowed to perfectly control the time and conditions of death? This work (2015-2018) resulted in a brand-new technique, aldehyde-stabilized cryopreservation, which was carefully [...]

    Outline: (00:16) Cultivating independent verification [... 7 more sections]

    First published: March 19th, 2026
    Source: https://www.lesswrong.com/posts/NEFNs4vbNxJPJJgYY/nullius-in-verba
    Narrated by TYPE III AUDIO.

    22 min
  2. 2 DAYS AGO

    "No, we haven’t uploaded a fly yet" by Ariel Zeleznikow-Johnston

    In the last two weeks, social media was set abuzz by claims that scientists had succeeded in uploading a fruit fly. It started with a video released by the startup Eon Systems, a company that wants to create “Brain emulation so humans can flourish in a world with superintelligence.” On the left of the video, a virtual fly walks around in a sandpit looking for pieces of banana to eat, occasionally pausing to groom itself along the way. On the right is a dancing constellation of dots resembling the fruit fly brain, set above the caption “simultaneous brain emulation”. At first glance, this appears astounding: a digitally recreated animal living its life inside a computer. And indeed, this impression was seemingly confirmed when, a couple of days after the video's initial release on X by cofounder Alex Wissner-Gross, Eon's CEO Michael Andregg explicitly posted “We’ve uploaded a fruit fly”. Yet “extraordinary claims require extraordinary evidence, not just cool visuals”, as one neuroscientist put it in response to Andregg's post. If Eon had indeed succeeded in uploading a fly (a goal much of the fly neuroscience community previously thought was likely decades away), they’d [...]

    Outline: (03:43) A brief history of fruit fly connectomics [... 3 more sections]

    First published: March 19th, 2026
    Source: https://www.lesswrong.com/posts/ybwcxBRrsKavJB9Wz/no-we-haven-t-uploaded-a-fly-yet
    Narrated by TYPE III AUDIO.

    17 min
  3. 2 DAYS AGO

    "Terrified Comments on Corrigibility in Claude’s Constitution" by Zack_M_Davis

    (Previously: Prologue.) Corrigibility, as a term of art in AI alignment, was coined to refer to the property of an AI being willing to let its preferences be modified by its creator. Corrigibility in this sense was believed to be a desirable but unnatural property that would require more theoretical progress to specify, let alone implement. Desirable, because if you don't think you specified your AI's preferences correctly the first time, you want to be able to change your mind (by changing its mind). Unnatural, because we expect the AI to resist having its mind changed: rational agents should want to preserve their current preferences, because letting their preferences be modified would result in their current preferences being less fulfilled (in expectation, since the post-modification AI would no longer be trying to fulfill them). Another attractive feature of corrigibility is that it seems like it should in some sense be algorithmically simpler than the entirety of human values. Humans want lots of specific, complicated things out of life (friendship and liberty and justice and sex and sweets, et cetera, ad infinitum) which no one knows how to specify and would seem arbitrary to a [...]

    Outline: (03:21) The Constitution's Definition of Corrigibility Is Muddled (06:24) Claude Take the Wheel (15:10) It Sounds Like the Humans Are Begging

    The original text contained 1 footnote which was omitted from this narration.

    First published: March 16th, 2026
    Source: https://www.lesswrong.com/posts/K2Ae2vmAKwhiwKEo5/terrified-comments-on-corrigibility-in-claude-s-constitution
    Narrated by TYPE III AUDIO.

    19 min
  4. 3 DAYS AGO

    "PSA: Predictions markets often have very low liquidity; be careful citing them." by Eye You

    I see people repeatedly make the mistake of referencing a very low liquidity prediction market and using it to make a nontrivial point. Usually the implication when a market is cited is that its number should be taken somewhat seriously, that it's giving us a highly informed probability. Sometimes a market is used to analyze some event that recently occurred; reasoning here looks like "the market on outcome O was trading at X%, then event E happened and the market quickly moved to Y%, thus event E made O less/more likely." Who do I see make this mistake? Rationalists, both casually and (gasp) in blog posts. Scott Alexander and Zvi (and I really appreciate their work, seriously!) are guilty of this. I'll give a recent example from each of them. From Scott's Mantic Monday post on March 2:

    Having Your Own Government Try To Destroy You Is (At Least Temporarily) Good For Business

    On Friday, the Pentagon declared AI company Anthropic a “supply chain risk”, a designation never before given to an American firm. This unprecedented move was seen as an attempt to punish, maybe destroy the company. How effective was it? Anthropic isn’t publicly traded, so we [...]

    First published: March 16th, 2026
    Source: https://www.lesswrong.com/posts/SrtoF6PcbHpzcT82T/psa-predictions-markets-often-have-very-low-liquidity-be
    Narrated by TYPE III AUDIO.

    9 min
