LessWrong (Curated & Popular)

LessWrong

Audio narrations of LessWrong posts. Includes all curated posts and all posts with 125+ karma. If you'd like more, subscribe to the “LessWrong (30+ karma)” feed.

  1. 4 hours ago

    “Ethical Design Patterns” by AnnaSalamon

    Related to: Commonsense Good, Creative Good (and my comment); Ethical Injunctions.

    Epistemic status: I’m fairly sure “ethics” does useful work in building human structures that work. My current explanations of how are wordy and not maximally coherent; I hope you guys help me with that.

    Introduction

    It is intractable to write large, good software applications via spaghetti code – but it's comparatively tractable using design patterns (plus coding style, attention to good/bad code smell, etc.). I’ll argue it is similarly intractable to have predictably positive effects on large-scale human stuff if you try it via straight consequentialism – but it is comparatively tractable if you use ethical heuristics, which I’ll call “ethical design patterns,” to create situations that are easier to reason about. Many of these heuristics are honed by long tradition (e.g. “tell the truth”; “be kind”), but sometimes people successfully craft new “ethical design patterns” fitted to a [...]

    Outline:
    (00:31) Introduction
    (01:32) Intuitions and ground truth in math, physics, coding
    (02:08) We revise our intuitions to match the world. Via deliberate work.
    (03:08) We design our built world to be intuitively accessible
    (04:22) Intuitions and ground truth in ethics
    (04:52) We revise our ethical intuitions to predict which actions we'll be glad of, long-term
    (06:27) Ethics helps us build navigable human contexts
    (09:30) We use ethical design patterns to create institutions that can stay true to a purpose
    (12:17) Ethics as a pattern language for aligning mesaoptimizers
    (13:08) Examples: several successfully crafted ethical heuristics, and several gaps
    (13:15) Example of a well-crafted ethical heuristic: Don't drink and drive
    (14:45) Example of a well-crafted ethical heuristic: Earning to give
    (15:10) A partial example: YIMBY
    (16:24) A historical example of a gap in folks' ethical heuristics: Handwashing and childbed fever
    (19:46) A contemporary example of inadequate ethical heuristics: Public discussion of group differences
    (25:04) Gaps in our current ethical heuristics around AI development
    (26:30) Existing progress
    (28:30) Where we still need progress
    (32:21) Can we just ignore the less-important heuristics, in favor of 'don't die'?
    (35:02) These gaps are in principle bridgeable
    (36:29) Related, easier work

    The original text contained 12 footnotes which were omitted from this narration.

    First published: September 30th, 2025
    Source: https://www.lesswrong.com/posts/E9CyhJWBjzoXritRJ/ethical-design-patterns-1
    Narrated by TYPE III AUDIO.

    39 minutes
  2. 3 days ago

    “Reasons to sell frontier lab equity to donate now rather than later” by Daniel_Eth, Ethan Perez

    Tl;dr: We believe shareholders in frontier labs who plan to donate some portion of their equity to reduce AI risk should consider liquidating and donating a majority of that equity now.

    Epistemic status: We’re somewhat confident in the main conclusions of this piece. We’re more confident in many of the supporting claims, and we’re likewise confident that these claims push in the direction of our conclusions. This piece is admittedly pretty one-sided; we expect most relevant members of our audience are already aware of the main arguments pointing in the other direction, and we expect there's less awareness of the sorts of arguments we lay out here. This piece is for educational purposes only and not financial advice. Talk to your financial advisor before acting on any information in this piece.

    For AI safety-related donations, money donated later is likely to be a lot less valuable than [...]

    Outline:
    (03:54) 1. There's likely to be lots of AI safety money becoming available in 1-2 years
    (04:01) 1a. The AI safety community is likely to spend far more in the future than it's spending now
    (05:24) 1b. As AI becomes more powerful and AI safety concerns go more mainstream, other wealthy donors may become activated
    (06:07) 2. Several high-impact donation opportunities are available now, while future high-value donation opportunities are likely to be saturated
    (06:17) 2a. Anecdotally, the bar for funding at this point is pretty high
    (07:29) 2b. Theoretically, we should expect diminishing returns within each time period for donors collectively to mean donations will be more valuable when donated amounts are lower
    (08:34) 2c. Efforts to influence AI policy are particularly underfunded
    (10:21) 2d. As AI company valuations increase and AI becomes more politically salient, efforts to change the direction of AI policy will become more expensive
    (13:01) 3. Donations now allow for unlocking the ability to better use the huge amount of money that will likely become available later
    (13:10) 3a. Earlier donations can act as a lever on later donations, because they can lay the groundwork for high value work in the future at scale
    (15:35) 4. Reasons to diversify away from frontier labs, specifically
    (15:42) 4a. The AI safety community as a whole is highly concentrated in AI companies
    (16:49) 4b. Liquidity and option value advantages of public markets over private stock
    (18:22) 4c. Large frontier AI returns correlate with short timelines
    (18:48) 4d. A lack of asset diversification is personally risky
    (19:39) Conclusion
    (20:22) Some specific donation opportunities

    First published: September 26th, 2025
    Source: https://www.lesswrong.com/posts/yjiaNbjDWrPAFaNZs/reasons-to-sell-frontier-lab-equity-to-donate-now-rather
    Narrated by TYPE III AUDIO.

    24 minutes
  3. 4 days ago

    “CFAR update, and New CFAR workshops” by AnnaSalamon

    Hi all! After about five years of hibernation and quietly getting our bearings,[1] CFAR will soon be running two pilot mainline workshops, and may run many more, depending on how these go.

    First, a minor name change request

    We would like now to be called “A Center for Applied Rationality,” not “the Center for Applied Rationality.” Because we’d like to be visibly not trying to be the one canonical locus.

    Second, pilot workshops!

    We have two, and are currently accepting applications / sign-ups: Nov 5–9, in California; Jan 21–25, near Austin, TX. Apply here.

    Third, a bit about what to expect if you come

    The workshops will have a familiar form factor: 4.5 days (arrive Wednesday evening; depart Sunday night or Monday morning). ~25 participants, plus a few volunteers. 5 instructors. Immersive, on-site, with lots of conversation over meals and into the evenings. I like this form factor [...]

    Outline:
    (00:24) First, a minor name change request
    (00:39) Second, pilot workshops!
    (00:58) Third, a bit about what to expect if you come
    (01:03) The workshops will have a familiar form factor:
    (02:52) Many classic classes, with some new stuff and a subtly different tone:
    (06:10) Who might want to come / why might a person want to come?
    (06:43) Who probably shouldn't come?
    (08:23) Cost:
    (09:26) Why this cost:
    (10:23) How did we prepare these workshops? And the workshops' epistemic status.
    (11:19) What alternatives are there to coming to a workshop?
    (12:37) Some unsolved puzzles, in case you have helpful comments:
    (12:43) Puzzle: How to get enough grounding data, as people tinker with their own mental patterns
    (13:37) Puzzle: How to help people become, or at least stay, intact, in several ways
    (14:50) Puzzle: What data to collect, or how to otherwise see more of what's happening

    The original text contained 2 footnotes which were omitted from this narration.

    First published: September 25th, 2025
    Source: https://www.lesswrong.com/posts/AZwgfgmW8QvnbEisc/cfar-update-and-new-cfar-workshops
    Narrated by TYPE III AUDIO.

    16 minutes
  4. September 21

    “The title is reasonable” by Raemon

    I'm annoyed by various people who seem to be complaining about the book title being "unreasonable" – who don't merely disagree with the title of "If Anyone Builds It, Everyone Dies", but think something like: "Eliezer and Nate violated a Group-Epistemic-Norm with the title and/or thesis."

    I think the title is reasonable. I think the title is probably true – I'm less confident than Eliezer/Nate, but I don't think it's unreasonable for them to be confident in it given their epistemic state. (I also don't think it's unreasonable to feel less confident than me – it's a confusing topic that it's reasonable to disagree about.)

    So I want to defend several decisions about the book I think were: A) actually pretty reasonable from a meta-group-epistemics/comms perspective, and B) very important to do. I've heard different things from different people and maybe am drawing a cluster where there [...]

    Outline:
    (03:08) 1. Reasons the Everyone Dies thesis is reasonable
    (03:14) What the book does and doesn't say
    (06:47) The claims are presented reasonably
    (13:24) 2. Specific points to maybe disagree on
    (16:35) Notes on Niceness
    (17:28) Which plan is Least Impossible?
    (22:34) 3. Overton Smashing, and Hope
    (22:39) Or: Why is this book really important, not just reasonable?

    The original text contained 2 footnotes which were omitted from this narration.

    First published: September 20th, 2025
    Source: https://www.lesswrong.com/posts/voEAJ9nFBAqau8pNN/the-title-is-reasonable
    Narrated by TYPE III AUDIO.

    29 minutes
