LessWrong (30+ Karma)

LessWrong

Audio narrations of LessWrong posts.

  1. 5 H AGO

    “Post title: Why I Transitioned: A Case Study” by Fiora Sunshine

    An Overture

    Famously, trans people tend not to have great introspective clarity into their own motivations for transition. Intuitively, they tend to be quite aware of what they do and don't like about inhabiting their chosen bodies and gender roles. But when it comes to explaining the origins and intensity of those preferences, they almost universally come up short. I've even seen several smart, thoughtful trans people, such as Natalie Wynn, making statements to the effect that it's impossible to develop a satisfying theory of aberrant gender identities. (She may have been exaggerating for effect, but it was clear she'd given up on solving the puzzle herself.) I'm trans myself, but even I can admit that this lack of introspective clarity is a reason to be wary of transgenderism as a phenomenon. After all, there are two main explanations for trans people's failure to thoroughly explain their own existence. One is that transgenderism is the result of an obscenely complex and arcane neuro-psychological phenomenon, which we have no hope of unraveling through normal introspective methods. The other is that trans people are lying about something, including to themselves. Now, a priori, both of these do seem like real [...]

    ---

    Outline:
    (00:12) An Overture
    (04:55) In the Case of Fiora Starlight
    (16:51) Was it worth it?

    The original text contained 3 footnotes which were omitted from this narration.

    ---

    First published: November 1st, 2025

    Source: https://www.lesswrong.com/posts/gEETjfjm3eCkJKesz/post-title-why-i-transitioned-a-case-study

    ---

    Narrated by TYPE III AUDIO.

    17 min
  2. 14 H AGO

    “LLM-generated text is not testimony” by TsviBT

    Crosspost from my blog.

    Synopsis

    When we share words with each other, we don't only care about the words themselves. We care also—even primarily—about the mental elements of the human mind/agency that produced the words. What we want to engage with is those mental elements. As of 2025, LLM text does not have those elements behind it. Therefore LLM text categorically does not serve the role for communication that is served by real text. Therefore the norm should be that you don't share LLM text as if someone wrote it. And, it is inadvisable to read LLM text that someone else shares as though someone wrote it.

    Introduction

    One might think that text screens off thought. Suppose two people follow different thought processes, but then they produce and publish identical texts. Then you read those texts. How could it possibly matter what the thought processes were? All you interact with is the text, so logically, if the two texts are the same then their effects on you are the same. But, a bit similarly to how high-level actions don't screen off intent, text does not screen off thought. How [...]

    ---

    Outline:
    (00:13) Synopsis
    (00:57) Introduction
    (02:51) Elaborations
    (02:54) Communication is for hearing from minds
    (05:21) Communication is for hearing assertions
    (12:36) Assertions live in dialogue

    ---

    First published: November 1st, 2025

    Source: https://www.lesswrong.com/posts/DDG2Tf2sqc8rTWRk3/llm-generated-text-is-not-testimony

    ---

    Narrated by TYPE III AUDIO.

    20 min
  3. 1 DAY AGO

    “Anthropic’s Pilot Sabotage Risk Report” by dmz

    As practice for potential future Responsible Scaling Policy obligations, we're releasing a report on misalignment risk posed by our deployed models as of Summer 2025. We conclude that there is very low, but not fully negligible, risk of misaligned autonomous actions that substantially contribute to later catastrophic outcomes. We also release two reviews of this report: an internal review and an independent review by METR. Our Responsible Scaling Policy has so far come into force primarily to address risks related to high-stakes human misuse. However, it also includes future commitments addressing misalignment-related risks that originate from the model's own behavior. For future models that pass a capability threshold we have not yet reached, it commits us to developing an affirmative case that (1) identifies the most immediate and relevant risks from models pursuing misaligned goals and (2) explains how we have mitigated these risks to acceptable levels. The affirmative case will describe, as relevant, evidence on model capabilities; evidence on AI alignment; mitigations (such as monitoring and other safeguards); and our overall reasoning. Affirmative cases of this kind are uncharted territory for the field: While we have published sketches of arguments we might use, we have never prepared a [...]

    ---

    First published: October 30th, 2025

    Source: https://www.lesswrong.com/posts/omRf5fNyQdvRuMDqQ/anthropic-s-pilot-sabotage-risk-report-2

    ---

    Narrated by TYPE III AUDIO.

    6 min
  4. 1 DAY AGO

    “OpenAI Moves To Complete Potentially The Largest Theft In Human History” by Zvi

    OpenAI is now set to become a Public Benefit Corporation, with its investors entitled to uncapped profit shares. Its nonprofit foundation will retain some measure of control and a 26% financial stake, in sharp contrast to its previous stronger control and much, much larger effective financial stake. The value transfer is in the hundreds of billions, thus potentially the largest theft in human history. I say potentially largest because I realized one could argue that the events surrounding the dissolution of the USSR involved a larger theft. Unless you really want to stretch the definition of what counts, this seems to be in the top two. I am in no way surprised by OpenAI moving forward on this, but I am deeply disgusted and disappointed they are being allowed (for now) to do so, including this statement of no action by Delaware and this Memorandum of Understanding with California. Many media and public sources are calling this a win for the nonprofit, such as this from the San Francisco Chronicle. This is mostly them being fooled. They're anchoring on OpenAI's previous plan to far more fully sideline the nonprofit. This is indeed a big win for [...]

    ---

    Outline:
    (01:38) OpenAI Calls It Completing Their Recapitalization
    (03:05) How Much Was Stolen?
    (07:02) The Nonprofit Still Has Lots of Equity After The Theft
    (10:41) The Theft Was Unnecessary For Further Fundraising
    (11:45) How Much Control Will The Nonprofit Retain?
    (23:13) Will These Control Rights Survive And Do Anything?
    (26:17) What About OpenAI's Deal With Microsoft?
    (31:10) What Will OpenAI's Nonprofit Do Now?
    (36:33) Is The Deal Done?

    ---

    First published: October 31st, 2025

    Source: https://www.lesswrong.com/posts/wCc7XDbD8LdaHwbYg/openai-moves-to-complete-potentially-the-largest-theft-in

    ---

    Narrated by TYPE III AUDIO.

    38 min
