LessWrong (30+ Karma)

LessWrong

Audio narrations of LessWrong posts.

  1. قبل ١٠ ساعات

    “Stratified Utopia” by Cleo Nardo

    Summary: "Stratified utopia" is a possible outcome where mundane values get proximal resources (near Earth in space and time) and exotic values get distal resources (distant galaxies and far futures). I discuss whether this outcome is likely or desirable. 1. Introduction 1.1. Happy Coincidence I hold mundane values, such as partying on the weekend, the admiration of my peers, not making a fool of myself, finishing this essay, raising children, etc. I also have more exotic values, such as maximizing total wellbeing, achieving The Good, and bathing in the beatific vision. These values aren't fully-compatible, i.e. I won't be partying on the weekend in any outcome which maximizes total wellbeing[1]. But I think these values are nearly-compatible. My mundane values can get 99% of what they want (near Earth in space and time) and my exotic values can get get 99% of what they want (distant galaxies and far [...] --- Outline: (00:26) 1. Introduction (00:30) 1.1. Happy Coincidence (01:48) 1.2. Values and Resources (01:53) 1.2.1. Mundane vs Exotic Values (02:54) 1.2.2. Proximal vs Distal Resources (03:36) 2. Stratified Utopia: A sketch (03:41) 2.1. Spatial Stratification (04:26) 2.2. What Each Stratum Looks Like (07:26) 3. Is Stratified Utopia Likely? (07:31) 3.0. Four Premises (08:38) 3.1. Efficient Allocation (11:47) 3.2. Value Composition (14:17) 3.2. Resource Compatibility (19:40) 3.3. Persistence (20:55) 4. Is Stratified Utopia Desirable? (24:29) Appendices (24:32) Appendix A: Spatial vs Temporal Stratification (26:41) Appendix B: Stratified Dystopia (27:14) Appendix C: Firewalls (28:33) Appendix D: Death and Uplifting The original text contained 22 footnotes which were omitted from this narration. --- First published: October 21st, 2025 Source: https://www.lesswrong.com/posts/5XjrEr8c8z6tTHDF2/stratified-utopia-2 --- Narrated by TYPE III AUDIO. --- Images from the article: Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.

    ٣٠ من الدقائق
  2. قبل يوم واحد

    “Symbiogenesis vs. Convergent Consequentialism” by Audrey Tang, plex

    (Cross-posted from SayIt archive.) Background for conversation: After an exchange in the comments of Audrey's LW post where plex suggested various readings and got a sense that there were some differences in models worth exploring, plex suggested a call. Some commentors were keen to read the transcript, and plex thinks in order for any human, other sentient, current AI, or currently existing structure to not be destroyed in the near future, we need to either coordinate to pivot from the rush towards superintelligence, or resolve some very thorny technical problems. Audrey and plex both think that understanding some of the core dynamics raised here is necessary for effectiveness on either of these. Plex: So first, before I list the things I'd be excited to talk about, I just want to say I'm very impressed by the things you've done. I rarely see people who deeply get mechanism design and [...] --- Outline: (00:55) Plex: (01:28) Audrey Tang: (01:53) Plex: (02:25) Audrey Tang: (03:12) Plex: (03:30) Audrey Tang: (03:52) Plex: (04:39) Audrey Tang: (05:06) Plex: (05:41) Audrey Tang: (06:05) Plex: (07:06) Audrey Tang: (07:26) Plex: (07:59) Audrey Tang: (08:20) Plex: (08:26) Audrey Tang: (09:00) Plex: (10:00) Audrey Tang: (10:13) Plex: (10:45) Audrey Tang: (11:09) Plex: (11:34) Audrey Tang: (11:44) Plex: (11:53) Audrey Tang: (12:04) Plex: (12:17) Audrey Tang: (12:59) Plex: (13:38) Audrey Tang: (14:00) Plex: (14:30) Audrey Tang: (14:38) Plex: (15:00) Audrey Tang: (15:49) Plex: (16:27) Audrey Tang: (17:07) Plex: (17:59) Audrey Tang: (18:12) Plex: (18:18) Audrey Tang: (18:28) Plex: (19:08) Audrey Tang: (19:22) Plex: (19:52) Audrey Tang: (20:16) Plex: (20:21) Audrey Tang: (20:47) Plex: (21:44) Audrey Tang: (21:50) Plex: (21:58) Audrey Tang: (22:26) Plex: (22:55) Audrey Tang: (23:42) Plex: (24:00) Audrey Tang: (24:14) Plex: (24:30) Audrey Tang: (25:03) Plex: (25:27) Audrey Tang: (26:25) Plex: (26:32) Audrey Tang: (26:51) Plex: (27:03) Audrey Tang: (27:12) Plex: (27:32) Audrey Tang: (28:02) Plex: (28:26) Audrey Tang: (28:42) Plex: (29:10) Audrey Tang: (29:50) Plex: (30:28) Audrey Tang: (30:35) Plex: (31:04) Audrey Tang: (31:15) Plex: (31:33) Audrey Tang: (31:54) Plex: (32:41) Audrey Tang: (32:54) Plex: (33:26) Audrey Tang: (33:34) Plex: (33:52) Audrey Tang: (34:18) Plex: (34:42) Audrey Tang: (34:58) Plex: (35:03) Audrey Tang: (35:19) Plex: (35:32) Audrey Tang: (35:38) Plex: (35:53) Audrey Tang: (36:15) Plex: (36:56) Audrey Tang: (37:06) Plex: (37:16) Audrey Tang: (37:26) Plex: (37:39) Audrey Tang: (37:50) Post-discussion comment from Plex: --- First published: October 21st, 2025 Source: https://www.lesswrong.com/posts/WRcFBfx2otu2MpnKW/symbiogenesis-vs-convergent-consequentialism --- Narrated by TYPE III AUDIO.

    ٣٩ من الدقائق
  3. قبل يوم واحد

    “Can you find the steganographically hidden message?” by Kei Nishimura-Gasparian

    tl;dr: I share a curated set of transcripts of models successfully executing message passing steganography from our recent paper. I then give a few thoughts on how I think about risks from this kind of steganography. Background I recently was a co-first author on a paper (LW link) where we evaluated the steganographic capabilities of frontier models. Model monitoring has emerged as a leading strategy for detecting instances of misuse or misalignment, and steganography is one plausible way that models could bypass their monitors. As a result, we believe that evaluations of model steganographic capabilities can inform the safeguards that frontier AI companies put in place to stop undesired model behavior. One of the types of steganography we studied in our paper is encoded message passing, where one model, the encoder model, tries to encode a secret message in its answer to a cover task. The model fully wins [...] --- Outline: (00:25) Background (02:44) Rules of the game (03:11) Examples (03:14) Example #1 (04:45) Example #2 (05:25) Example #3 (08:12) Example #4 (10:14) Example #5 (11:47) General thoughts on encoded message passing (14:10) Acknowledgements (14:18) Canary string The original text contained 2 footnotes which were omitted from this narration. --- First published: October 20th, 2025 Source: https://www.lesswrong.com/posts/z7MnbQ4niYWbapfjT/can-you-find-the-steganographically-hidden-message --- Narrated by TYPE III AUDIO. --- Images from the article: Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.

    ١٥ من الدقائق
  4. قبل يومين

    “Consider donating to Alex Bores, author of the RAISE Act” by Eric Neyman

    Written by Eric Neyman, in my personal capacity. The views expressed here are my own. Thanks to Zach Stein-Perlman, Jesse Richardson, and many others for comments. Over the last several years, I’ve written a bunch of posts about politics and political donations. In this post, I’ll tell you about one of the best donation opportunities that I’ve ever encountered: donating to Alex Bores, who announced his campaign for Congress today. If you’re potentially interested in donating to Bores, my suggestion would be to: Read this post to understand the case for donating to Alex Bores. Understand that political donations are a matter of public record, and that this may have career implications. Decide if you are willing to donate to Alex Bores anyway. If you would like to donate to Alex Bores: donations today, Monday, Oct 20th, are especially valuable. You can donate at this link. Or if [...] --- Outline: (01:16) Introduction (04:55) Things I like about Alex Bores (08:55) Are there any things about Bores that give me pause? (09:43) Cost-effectiveness analysis (10:10) How does an extra $1k affect Alex Bores' chances of winning? (12:22) How good is it if Alex Bores wins? (12:54) Direct influence on legislation (14:46) The House is a first step toward even more influential positions (15:35) Encouraging more action in this space (16:20) How does this compare to other AI safety donation opportunities? (16:37) Comparison to technical AI safety (17:28) Comparison to non-politics AI governance (18:25) Comparison to other political opportunities (19:39) Comparison to non-AI safety opportunities (21:20) Logistics and details of donating (21:24) Who can donate? (21:34) How much can I donate? (23:16) How do I donate? (24:07) Will my donation be public? What are the career implications of donating? (25:37) Is donating worth the career capital costs in your case? (26:32) Some examples of potential donor profiles (30:34) A more quantitative cost-benefit analysis (32:33) Potential concerns (32:37) What if Bores loses? (33:21) What about the press coverage? (34:09) Feeling rushed? (35:16) Appendix (35:19) Details of the cost-effectiveness analysis of donating to Bores (35:25) Probability that Bores loses by fewer than 1000 votes (38:37) How much marginal funding would net Bores an extra vote? (40:42) Early donations help consolidate support (42:47) One last adjustment: the big tech super PAC (45:25) Cost-benefit analysis of donating to Bores vs. adverse career effects (45:40) The philanthropic benefit of donating (46:32) The altruistic cost of donating (48:18) Cost-benefit analysis (49:01) Caveats The original text contained 14 footnotes which were omitted from this narration. --- First published: October 20th, 2025 Source: https://www.lesswrong.com/posts/TbsdA7wG9TvMQYMZj/consider-donating-to-alex-bores-author-of-the-raise-act-1 --- Narrated by TYPE III AUDIO.

    ٥٠ من الدقائق

حول

Audio narrations of LessWrong posts.

قد يعجبك أيضًا