LessWrong (30+ Karma)

LessWrong

Audio narrations of LessWrong posts.

  1. 1 hour ago

    “Making legible that many experts think we are not on track for a good future, barring some international cooperation” by Mateusz Bagiński, Ishual

    [Context: This post is aimed at all readers[1] who broadly agree that the current race toward superintelligence is bad, that stopping would be good, and that the technical pathways to a solution are too unpromising and hard to coordinate on to justify going ahead.]

    TL;DR: We address the objections made to a statement supporting a ban on superintelligence by people who agree that a ban on superintelligence would be desirable. Quoting Lucius Bushnaq: I support some form of global ban or pause on AGI/ASI development. I think the current AI R&D regime is completely insane, and if it continues as it is, we will probably create an unaligned superintelligence that kills everyone. We have been circulating a statement expressing ~this view, targeted at people who have done AI alignment/technical AI x-safety research (mostly outside frontier labs). Some people declined to sign, even if they agreed with the [...]

    Outline:
    (01:25) The reasons we would like you to sign the statement expressing support for banning superintelligence
    (05:00) A positive vision
    (08:07) Reasons given for not signing despite agreeing with the statement
    (08:26) I already am taking a public stance, why endorse a single-sentence summary?
    (08:52) I am not already taking a public stance, so why endorse a one-sentence summary?
    (09:19) The statement uses an ambiguous term X
    (09:53) I would prefer a different (e.g., more accurate, epistemically rigorous, better at stimulating good thinking) way of stating my position on this issue
    (11:12) The statement does not accurately capture my views, even though I strongly agree with its core
    (12:05) I'd be on board if it also mentioned My Thing
    (12:50) Taking a position on policy stuff is a different realm, and it takes more deliberation than just stating my opinion on facts
    (13:21) I wouldn't support a permanent ban
    (13:56) The statement doesn't include a clear mechanism to lift the ban
    (15:52) Superintelligence might be too good to pass up
    (17:41) I don't want to put myself out there
    (18:12) I am not really an expert
    (18:42) The safety community has limited political capital
    (21:12) We must wait until a catastrophe before spending limited political capital
    (22:17) Any other objections we missed? (and a hope for a better world)

    The original text contained 24 footnotes, which were omitted from this narration.

    First published: October 13th, 2025
    Source: https://www.lesswrong.com/posts/4xQ6k39iMybR2CgYH/making-legible-that-many-experts-think-we-are-not-on-track
    Narrated by TYPE III AUDIO.

    24 minutes
  2. 5 hours ago

    “OpenAI #15: More on OpenAI’s Paranoid Lawfare Against Advocates of SB 53” by Zvi

    A little over a month ago, I documented how OpenAI had descended into paranoia and bad-faith lobbying surrounding California's SB 53. This included sending a deeply bad-faith letter to Governor Newsom, which sadly is par for the course at this point. It also included lawfare attacks against bill advocates, including Nathan Calvin and others, using Elon Musk's unrelated lawsuits and vendetta against OpenAI as a pretext, accusing them of being in cahoots with Elon Musk. Previous reporting of this did not reflect well on OpenAI, but it sounded like the demand was limited in scope to a supposed link with Elon Musk or Meta CEO Mark Zuckerberg, links which very clearly never existed. Accusing essentially everyone who has ever done anything OpenAI dislikes of having united in a hallucinated ‘vast conspiracy’ is all classic behavior for OpenAI's Chief Global Affairs Officer Chris Lehane [...]

    Outline:
    (02:35) What OpenAI Tried To Do To Nathan Calvin
    (07:22) It Doesn't Look Good
    (10:17) OpenAI's Jason Kwon Responds
    (19:14) A Brief Amateur Legal Analysis Of The Request
    (21:33) What OpenAI Tried To Do To Tyler Johnston
    (25:50) Nathan Compiles Responses to Kwon
    (29:52) The First Thing We Do
    (36:12) OpenAI Head of Mission Alignment Joshua Achiam Speaks Out
    (40:16) It Could Be Worse
    (41:31) Chris Lehane Is Who We Thought He Was
    (42:50) A Matter of Distrust

    First published: October 13th, 2025
    Source: https://www.lesswrong.com/posts/txTKHL2dCqnC7QsEX/openai-15-more-on-openai-s-paranoid-lawfare-against
    Narrated by TYPE III AUDIO.

    46 minutes
  3. 5 hours ago

    “Sublinear Utility in Population and other Uncommon Utilitarianism” by Alice Blair

    Content warning: Anthropics, Moral Philosophy, and Shrimp

    This post isn't trying to be self-contained, since I have so many disparate thoughts about this. Instead, I'm trying to put a representative set of ideas forward, and I hope that if people are interested we can discuss this more in the comments. I also plan to turn this into a (probably small) sequence at some point. I've had a number of conversations about moral philosophy where I make some claim like "Utility is bounded and asymptotically sublinear in number of human lives, but superlinear or ~linear in the ranges we will ever have to care about." Common reactions to this include: "Wait, what?" "Why would that be the case?" "This doesn't make any sense relative to my existing conceptions of classical utilitarianism, what is going on here?" So I have gotten the impression that this is a decently [...]

    Outline:
    (02:11) Aside: Utilitarianism
    (02:28) The More Mathy Pointer
    (03:15) Duplicate Simulations
    (05:50) Slightly Different Simulations
    (07:31) Utility Variation with Population
    (07:51) More is Better
    (08:06) In Some Domains, More is Superlinearly Better
    (08:41) But Value Space is Limited
    (09:21) What About The Other Animals?
    (10:55) What Does this Mean About Classical EA?
    (11:58) Other Curiosities

    The original text contained 3 footnotes, which were omitted from this narration.

    First published: October 13th, 2025
    Source: https://www.lesswrong.com/posts/NRxn6R2tesRzzTBKG/sublinear-utility-in-population-and-other-uncommon
    Narrated by TYPE III AUDIO.
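    (A hedged illustration, not taken from the article: one curve with the shape the claim describes, bounded and asymptotically sublinear in population size n while still growing superlinearly at small scales, is a logistic, with hypothetical parameters $U_{\max}$, $n_0$, and $s$.)

    $$ U(n) \;=\; \frac{U_{\max}}{1 + e^{-(n - n_0)/s}} $$

    (For $n$ well below the inflection point $n_0$ the curve is convex, so additional lives add accelerating value; past $n_0$ it saturates toward the bound $U_{\max}$, giving the sublinear, bounded tail the title refers to.)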

    13 minutes
