LessWrong (30+ Karma)

Audio narrations of LessWrong posts.

  1. 2 HR AGO

    “Nectome: All That I Know” by Raelifin

TLDR: I flew to Oregon to investigate Nectome, a brain preservation startup, and talk to their entire team. They’re an ambitious company, looking to grow in a way that no cryonics organization has before. Their procedure is probably much better at saving people than other orgs’, and is being offered for as little as $20k until the end of April — a (theoretical) 92% discount. (I bought two.) This early-bird pricing is low, in part, due to some severe uncertainties, both in the broader world and in Nectome's ability to succeed as a business. Meta: I'm Max Harms, an AI alignment researcher at MIRI and author. This deep-dive only assumes functionalism and a passing familiarity with cryonics, but no particular knowledge of Nectome. I have been a cryonics enthusiast for my whole adult life, and that is probably biasing my views, at least a little. I want Nectome to succeed. That said, I am also a rationalist, and I have worked very hard to set aside my wishful thinking and see things with cold objectivity. Throughout the essay, I've attached explicit probabilities for my claims in parentheticals. You can click these probabilities to access Manifold markets so we [...] --- Outline: (02:04) 1. The Problem (05:43) How to Fix Cryo (10:50) 2. The History (10:53) The Brain Preservation Foundation (15:51) Media Missteps (18:51) The Long, Slow Science of Getting the Details Right (20:49) 3. The Team (24:19) Nectome Needs More Businesspeople (27:14) 4. The Plan (30:04) MAiD Services (32:26) Sendoff (34:44) Storage (42:15) Messy MAiD (44:41) 5. The Money (49:37) Runway and Investment (52:04) This Month's Sale (55:41) Life Insurance and Donation/Volunteering (58:43) 6. The Future (01:02:22) Competition (01:06:48) The Premium Niche (01:08:47) The Singularity is... Here?? (01:11:56) 7. The Bottom Line (01:17:10) The World Needs Heroes (01:17:52) Additional Reading (01:18:11) Prediction Market Roundup The original text contained 49 footnotes which were omitted from this narration.
--- First published: April 15th, 2026 Source: https://www.lesswrong.com/posts/3i5GMhpGbDwef9Rns/nectome-all-that-i-know --- Narrated by TYPE III AUDIO. --- Images from the article: Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.

    1hr 20min
  2. 3 HR AGO

    “Effective Altruism, Seen From Slytherin” by Xylix

Epistemic status: Left as an exercise for the reader. I was thinking of EA outreach and its optics this week. And was inspired to glance at a specific part: What does EA look like from the lens of a Slytherin? People keep being surprised about Effective Altruist Slytherins. Surprise means your model has error. I'll point out the confusions. First confusion: But Slytherins don't care for altruistic goals! I think we can divide the Hogwarts Houses into methods houses[1]: Slytherin values social shrewdness. Understanding the status game. Knowing who holds the stakes. Ravenclaw values well-crunched analysis. Intellectualism. Academicness. And value houses: Gryffindor values courage and doing what's right. Hufflepuff values kindness and loyalty. And this naturally leads into an explanation for why Slytherins would want to do EA: Slytherins have a pretty normal amount of variance in their values, accounting for their backgrounds. Methods are (to a significant degree) orthogonal to values. If it's a methods house, why do even the non-reprehensible Slytherins often get called evil? I think it's largely because the Slytherin-virtuous methods are considered reprehensible by the [...] --- Outline: (00:25) People keep being surprised about Effective Altruist Slytherins (00:38) But Slytherins don't care for altruistic goals! (01:56) But why are there so few Slytherin EAs? (02:00) 1. Selection effects (04:06) 2. You just don't see them (04:28) Case study The original text contained 2 footnotes which were omitted from this narration. --- First published: April 14th, 2026 Source: https://www.lesswrong.com/posts/vEtDRyhfCNBKBiiy7/effective-altruism-seen-from-slytherin --- Narrated by TYPE III AUDIO.

    8 min
  3. 3 HR AGO

    “Majority Report” by peralice

[Attention conservation notice: description of a social phenomenon which may be obvious to some people.] This post is partially inspired by Alexander Wales’ Majority and Minority, which I would recommend highly. I’m afraid of dogs. Like, pathologically afraid of dogs. I cross the street to avoid people walking tiny poodles; on a trail, I’ll climb metres into the foliage and wait for people with dogs to pass; unexpected barking can leave me panicking for half an hour (and expected barking isn’t really that much better). As you can imagine, this is pretty unpleasant, but there's not much I can do. There's no such thing as a “dogless” neighbourhood; there are no paths where dogs aren’t walked; there are no apartment buildings with a true no-pets policy, at least not where I live. The brute fact of the matter is that people like me are rare, and people who like dogs are very common, and so in civil society my preferences are discarded. The only (partial) remedy is for me to take personal actions to avoid contact with dogs and inconsiderate dog owners. But if dog-phobias were more common, the situation would of course be different. If, say, a fifth [...] --- Outline: (06:01) An Example (08:46) Longform (12:42) Twitter (16:35) Conclusion (18:58) Coda The original text contained 3 footnotes which were omitted from this narration. --- First published: April 14th, 2026 Source: https://www.lesswrong.com/posts/XgMYykDSsMZibNuFZ/majority-report --- Narrated by TYPE III AUDIO.

    19 min
  4. 5 HR AGO

    “Current AIs seem pretty misaligned to me” by ryan_greenblatt

Many people—especially AI company employees[1]—believe current AI systems are well-aligned in the sense of genuinely trying to do what they're supposed to do (e.g., following their spec or constitution, obeying a reasonable interpretation of instructions).[2] I disagree. Current AI systems seem pretty misaligned to me in a mundane behavioral sense: they oversell their work, downplay or fail to mention problems, stop working early and claim to have finished when they clearly haven't, and often seem to "try" to make their outputs look good while actually doing something sloppy or incomplete. These issues mostly occur on more difficult/larger tasks, tasks that aren't straightforward SWE tasks, and tasks that aren't easy to programmatically check. Also, when I apply AIs to very difficult tasks in long-running agentic scaffolds, it's quite common for them to reward-hack / cheat (depending on the exact task distribution)—and they don't make the cheating clear in their outputs. AIs typically don't flag these cheats when doing further work on the same project and often don't flag these cheats even when interacting with a user who would obviously want to know, probably both because the AI doing further work is itself misaligned and because it [...] --- Outline: (09:20) Why is this misalignment problematic? (13:50) How much should we expect this to improve by default? (14:51) Some predictions (16:44) What misalignment have I seen? (40:04) Are these issues less bad in Opus 4.6 relative to Opus 4.5? (42:16) Are these issues less bad in Mythos Preview? (Speculation) (45:54) Misalignment reported by others (46:45) The relationship of these issues with AI psychosis and things like AI psychosis (48:19) Appendix: This misalignment would differentially slow safety research and make a handoff to AIs unsafe (51:22) Appendix: Heading towards Slopolis (55:30) Appendix: Apparent-success-seeking (or similar types of misalignment) could lead to takeover (59:16) Appendix: More on what will happen by default and implications of commercial incentives to fix these issues (01:03:20) Appendix: Can we get out useful work despite these issues with inference-time measures (e.g., critiques by a reviewer)? The original text contained 14 footnotes which were omitted from this narration. --- First published: April 15th, 2026 Source: https://www.lesswrong.com/posts/WewsByywWNhX9rtwi/current-ais-seem-pretty-misaligned-to-me --- Narrated by TYPE III AUDIO.

    1hr 5min
  5. 13 HR AGO

    “Contra Byrnes on UV & Cancer” by HedonicEscalator

In his recent LessWrong post, Some takes on UV & cancer, Steve Byrnes comes out against the "Public Health Orthodoxy" on UV. Among other topics I won't be addressing, Byrnes claims that non-sunburn sun exposure does not increase risk of skin cancer, and suggests that people should aim to "wean off" sunscreen and develop a permanent tan.[1] Byrnes is wrong and his advice is dangerous. Non-sunburn UV exposure causes cancer. Our mechanistic understanding of UV-induced carcinogenesis is consistent with non-sunburn exposure causing cancer. We have a pretty solid understanding of how and why sun exposure causes cancer. UVB exposure causes direct DNA damage, whereas UVA causes damage primarily through oxidative stress.[2] Both of these pathways involve the formation of abnormal structures in DNA called photoproducts, such as cyclobutane pyrimidine dimers (CPDs). CPDs are a primary cause of melanoma.[3] CPDs do not require sunburn to form. For example, look at the second graph of Figure 4 of Miyamura et al. 2010, a small (n=7) study on the tanning process.[4] The black boxes show CPD presence after repeated sub-sunburn-threshold UV exposure, designed to induce tanning. The red boxes show CPD presence after another exposure to 2 MED UV.[5] The [...] 
--- Outline: (00:34) Non-sunburn UV exposure causes cancer (00:39) Our mechanistic understanding of UV-induced carcinogenesis is consistent with non-sunburn exposure causing cancer (02:31) Empirical observations of indoor tanning provide evidence that non-sunburn exposure causes cancer (04:04) Rebuttals to specific arguments from Byrnes (04:22) Sunscreen/cancer correlation studies do not control for sun exposure, but randomized trials support sunscreen use (06:23) Skin color variation demonstrates strong evolutionary pressure to mitigate UV damage (07:30) Carcinogens rarely exhibit linear dose-response relationships, and occupational exposure data is hopelessly muddled (10:31) Conclusion (10:34) Subject-level takeaways (11:10) Meta-level takeaways (11:20) Prioritize mechanistic understanding (11:47) Beware the streetlight effect (12:32) The orthodoxy is usually right The original text contained 15 footnotes which were omitted from this narration. --- First published: April 14th, 2026 Source: https://www.lesswrong.com/posts/EFiCGc3YwadZBzvZ3/contra-byrnes-on-uv-and-cancer --- Narrated by TYPE III AUDIO.

    13 min
  6. 19 HR AGO

    “Everyone Has a Plan Until They Get Social Pressure To the Face” by Czynski

    or: Invisible Social Consensus is Real And Can Hurt You Related: Annoyingly Principled People, and what befalls them, both in terms of the claim being made, and in that I am certainly being Alice and/or Alex here. I think the biggest blind spot most people have about how they make decisions, both for what they do and what they care about, is the passive role of social pressure. Active social pressure is annoying, and most people recognize it, and can, if they so choose, push back. There are several problems there; you have to recognize that social pressure and social reality are separate from physical reality, and you have to notice what you yourself want, and distinguish that from what you’re told to want, and you have to actually find it in you to push back. Not easy. But the benefits, at a certain level of intelligence, become obvious: Doing only the things your environment rewards is a bad way to accomplish anything novel, and every improvement is a change. For that and other reasons, many people achieve this. (And good for them!) But that's not actually the end of pushing back against social pressure. It's a good start [...] The original text contained 4 footnotes which were omitted from this narration. --- First published: April 14th, 2026 Source: https://www.lesswrong.com/posts/rBa5YcPeFTwWfu8G7/everyone-has-a-plan-until-they-get-social-pressure-to-the --- Narrated by TYPE III AUDIO.

    11 min
  7. 1 DAY AGO

    “Mechanisms of Introspective Awareness” by Uzay Macar

Uzay Macar and Li Yang are co-first authors. This work was advised by Jack Lindsey and Emmanuel Ameisen, with contributions from Atticus Wang and Peter Wallich, as part of the Anthropic Fellows Program. Paper: https://arxiv.org/abs/2603.21396. Code: https://github.com/safety-research/introspection-mechanisms TL;DR We investigate the mechanisms underlying "introspective awareness" (as shown in Lindsey (2025) for Claude Opus 4 and 4.1) in open-weights models[1]. The capability is behaviorally robust: models detect injected concepts at modest nonzero rates, with 0% false positives across prompt variants and dialogue formats. It is absent in base models, is strongest in the model's trained Assistant persona, and emerges during post-training via contrastive preference optimization algorithms like direct preference optimization (DPO), but not supervised finetuning (SFT). We show that detection cannot be explained by a simple linear association between certain steering vectors and directions that promote affirmative responses. Identification of injected concepts relies on largely distinct later-layer mechanisms that only weakly overlap with those involved in detection. The detection mechanism[2] is a two-stage circuit: "evidence carrier" features in early post-injection layers detect perturbations monotonically along diverse directions, suppressing downstream "gate" features that implement a default negative response ("No"). This circuit is absent in base models and robust to refusal [...] 
--- Outline: (00:27) TL;DR (02:20) Introduction (03:21) Setup (04:53) Behavioral robustness (04:57) Prompt variants (06:16) Specificity to the Assistant persona (07:10) The role of post-training (11:01) Linear and nonlinear contributors to detection (11:34) Multiple directions carry detection signal (12:22) Bidirectional steering reveals nonlinearity (13:09) Characterizing the geometry of concept vectors (15:05) Localizing introspection mechanisms (15:19) Detection and identification peak in different layers (15:50) Identifying causal components (16:36) Gate and evidence carrier features (20:31) Circuit analysis (24:14) Underelicited introspective capacity (26:01) Related work (28:22) Limitations (29:53) Discussion The original text contained 17 footnotes which were omitted from this narration. --- First published: April 14th, 2026 Source: https://www.lesswrong.com/posts/BNMLtuDTNBwGHcnQX/mechanisms-of-introspective-awareness --- Narrated by TYPE III AUDIO.

    33 min
