LessWrong (Curated & Popular)

LessWrong

Audio narrations of LessWrong posts. Includes all curated posts and all posts with 125+ karma. If you'd like more, subscribe to the “Lesswrong (30+ karma)” feed.

  1. 3 days ago

    “Reasons to sell frontier lab equity to donate now rather than later” by Daniel_Eth, Ethan Perez

    Tl;dr: We believe shareholders in frontier labs who plan to donate some portion of their equity to reduce AI risk should consider liquidating and donating a majority of that equity now.

    Epistemic status: We’re somewhat confident in the main conclusions of this piece. We’re more confident in many of the supporting claims, and we’re likewise confident that these claims push in the direction of our conclusions. This piece is admittedly pretty one-sided; we expect most relevant members of our audience are already aware of the main arguments pointing in the other direction, and we expect there's less awareness of the sorts of arguments we lay out here. This piece is for educational purposes only and not financial advice. Talk to your financial advisor before acting on any information in this piece. For AI safety-related donations, money donated later is likely to be a lot less valuable than [...]

    Outline:
    (03:54) 1. There's likely to be lots of AI safety money becoming available in 1-2 years
    (04:01) 1a. The AI safety community is likely to spend far more in the future than it's spending now
    (05:24) 1b. As AI becomes more powerful and AI safety concerns go more mainstream, other wealthy donors may become activated
    (06:07) 2. Several high-impact donation opportunities are available now, while future high-value donation opportunities are likely to be saturated
    (06:17) 2a. Anecdotally, the bar for funding at this point is pretty high
    (07:29) 2b. Theoretically, we should expect diminishing returns within each time period for donors collectively to mean donations will be more valuable when donated amounts are lower
    (08:34) 2c. Efforts to influence AI policy are particularly underfunded
    (10:21) 2d. As AI company valuations increase and AI becomes more politically salient, efforts to change the direction of AI policy will become more expensive
    (13:01) 3. Donations now allow for unlocking the ability to better use the huge amount of money that will likely become available later
    (13:10) 3a. Earlier donations can act as a lever on later donations, because they can lay the groundwork for high value work in the future at scale
    (15:35) 4. Reasons to diversify away from frontier labs, specifically
    (15:42) 4a. The AI safety community as a whole is highly concentrated in AI companies
    (16:49) 4b. Liquidity and option value advantages of public markets over private stock
    (18:22) 4c. Large frontier AI returns correlate with short timelines
    (18:48) 4d. A lack of asset diversification is personally risky
    (19:39) Conclusion
    (20:22) Some specific donation opportunities

    First published: September 26th, 2025
    Source: https://www.lesswrong.com/posts/yjiaNbjDWrPAFaNZs/reasons-to-sell-frontier-lab-equity-to-donate-now-rather

    Narrated by TYPE III AUDIO.

    24 minutes
  2. 4 days ago

    “CFAR update, and New CFAR workshops” by AnnaSalamon

    Hi all! After about five years of hibernation and quietly getting our bearings,[1] CFAR will soon be running two pilot mainline workshops, and may run many more, depending how these go.

    First, a minor name change request
    We would like now to be called “A Center for Applied Rationality,” not “the Center for Applied Rationality.” Because we’d like to be visibly not trying to be the one canonical locus.

    Second, pilot workshops!
    We have two, and are currently accepting applications / sign-ups: Nov 5–9, in California; Jan 21–25, near Austin, TX. Apply here.

    Third, a bit about what to expect if you come
    The workshops will have a familiar form factor: 4.5 days (arrive Wednesday evening; depart Sunday night or Monday morning). ~25 participants, plus a few volunteers. 5 instructors. Immersive, on-site, with lots of conversation over meals and into the evenings. I like this form factor [...]

    Outline:
    (00:24) First, a minor name change request
    (00:39) Second, pilot workshops!
    (00:58) Third, a bit about what to expect if you come
    (01:03) The workshops will have a familiar form factor:
    (02:52) Many classic classes, with some new stuff and a subtly different tone:
    (06:10) Who might want to come / why might a person want to come?
    (06:43) Who probably shouldn't come?
    (08:23) Cost:
    (09:26) Why this cost:
    (10:23) How did we prepare these workshops? And the workshops' epistemic status.
    (11:19) What alternatives are there to coming to a workshop?
    (12:37) Some unsolved puzzles, in case you have helpful comments:
    (12:43) Puzzle: How to get enough grounding data, as people tinker with their own mental patterns
    (13:37) Puzzle: How to help people become, or at least stay, intact, in several ways
    (14:50) Puzzle: What data to collect, or how to otherwise see more of what's happening

    The original text contained 2 footnotes which were omitted from this narration.

    First published: September 25th, 2025
    Source: https://www.lesswrong.com/posts/AZwgfgmW8QvnbEisc/cfar-update-and-new-cfar-workshops

    Narrated by TYPE III AUDIO.

    16 minutes
  3. September 21

    “The title is reasonable” by Raemon

    I'm annoyed by various people who seem to be complaining about the book title being "unreasonable" – who don't merely disagree with the title of "If Anyone Builds It, Everyone Dies", but think something like: "Eliezer and Nate violated a Group-Epistemic-Norm with the title and/or thesis."

    I think the title is reasonable. I think the title is probably true – I'm less confident than Eliezer/Nate, but I don't think it's unreasonable for them to be confident in it given their epistemic state. (I also don't think it's unreasonable to feel less confident than me – it's a confusing topic that it's reasonable to disagree about.) So I want to defend several decisions about the book I think were:
    A) actually pretty reasonable from a meta-group-epistemics/comms perspective, and
    B) very important to do.
    I've heard different things from different people and maybe am drawing a cluster where there [...]

    Outline:
    (03:08) 1. Reasons the Everyone Dies thesis is reasonable
    (03:14) What the book does and doesn't say
    (06:47) The claims are presented reasonably
    (13:24) 2. Specific points to maybe disagree on
    (16:35) Notes on Niceness
    (17:28) Which plan is Least Impossible?
    (22:34) 3. Overton Smashing, and Hope
    (22:39) Or: Why is this book really important, not just reasonable?

    The original text contained 2 footnotes which were omitted from this narration.

    First published: September 20th, 2025
    Source: https://www.lesswrong.com/posts/voEAJ9nFBAqau8pNN/the-title-is-reasonable

    Narrated by TYPE III AUDIO.

    29 minutes
  4. September 21

    “The Problem with Defining an ‘AGI Ban’ by Outcome (a lawyer’s take).” by Katalina Hernandez

    TL;DR:
    Most “AGI ban” proposals define AGI by outcome: whatever potentially leads to human extinction. That's legally insufficient: regulation has to act before harm occurs, not after.
    Strict liability is essential. High-stakes domains (health & safety, product liability, export controls) already impose liability for risky precursor states, not outcomes or intent. AGI regulation must do the same.
    Fuzzy definitions won't work here. Courts can tolerate ambiguity in ordinary crimes because errors aren't civilisation-ending and penalties bite. An AGI ban will likely follow the EU AI Act model (civil fines, ex post enforcement), which companies can Goodhart around. We cannot afford an “80% avoided” ban.
    Define crisp thresholds. Nuclear treaties succeeded by banning concrete precursors (zero-yield tests, 8kg plutonium, 25kg HEU, 500kg/300km delivery systems), not by banning “extinction-risk weapons.” AGI bans need analogous thresholds: capabilities like autonomous replication, scalable resource acquisition, and systematic deception.
    Bring lawyers in. If this [...]

    Outline:
    (00:12) TL;DR
    (02:07) Why outcome-based AGI ban proposals don't work
    (03:52) The luxury of defining the thing ex post
    (05:43) Actually defining the thing we want to ban
    (08:06) Credible bans depend on bright lines
    (08:44) Learning from nuclear treaties

    The original text contained 2 footnotes which were omitted from this narration.

    First published: September 20th, 2025
    Source: https://www.lesswrong.com/posts/agBMC6BfCbQ29qABF/the-problem-with-defining-an-agi-ban-by-outcome-a-lawyer-s

    Narrated by TYPE III AUDIO.

    11 minutes
