LessWrong (Curated & Popular)

LessWrong

Audio narrations of LessWrong posts. Includes all curated posts and all posts with 125+ karma. If you'd like more, subscribe to the "Lesswrong (30+ karma)" feed.

  1. 4 hours ago

    "LessWrong Shows You Social Signals Before the Comment" by TurnTrout

    When reading comments, what you see first is what other people think, before you read the comment itself. As shown in an RCT, that information anchors your opinion, reducing your ability to form your own opinion and making the site's karma rankings less related to a comment's true value. I think the problem is fixable and float some ideas for consideration.

    The LessWrong interface prioritizes social information. You read a comment. What information is presented, and in what order? The order of information:

    - Who wrote the comment (in bold);
    - How much other people like this comment (as shown by the karma indicator);
    - How much other people agree with this comment (as shown by the agreement score);
    - The actual content.

    This is unwise design for a website which emphasizes truth-seeking. You don't have a chance to read the comment and form your own opinion first. However, you can opt in to hiding usernames (until moused over) via your account settings page.

    A 2013 RCT supports the upvote-anchoring concern. From Social Influence Bias: A Randomized Experiment (Muchnik et al., 2013):[1] "We therefore designed and analyzed a large-scale randomized experiment on a social news aggregation Web site to investigate whether knowledge of such aggregates [...]"

    ---
    Outline:
    (00:30) The LessWrong interface prioritizes social information
    [... 6 more sections]
    ---
    First published: April 27th, 2026
    Source: https://www.lesswrong.com/posts/YSsp9x8qrBucLoiWT/lesswrong-shows-you-social-signals-before-the-comment
    ---
    Narrated by TYPE III AUDIO.

    9 min
  2. 1 day ago

    "Update on the Alex Bores campaign" by Eric Neyman

    In October, I wrote a post arguing that donating to Alex Bores's campaign for Congress was among the most cost-effective opportunities that I'd ever encountered. (A bit of context: Bores is a state legislator in New York who championed the RAISE Act, which was signed into law last December.[1] He's now running for Congress in New York's 12th Congressional district, which runs from about 17th Street to 100th Street in Manhattan. If elected to Congress, I think he'd be a strong champion for AI safety legislation, with a focus on catastrophic and existential risk.)

    It's been six months since then, and the election is just two months away (June 23rd), so I thought I'd revisit that post and give an update on my view of how things are going.

    How is Alex Bores doing? When I wrote my post, I expected Bores to talk little about AI during the campaign, just because it wasn't a high-salience issue to voters. But that changed in November, when Leading the Future (the AI accelerationist super PAC) declared Bores their #1 target. Since then, they've spent about $2.5 million on attack ads against him. LTF's theory of change isn't actually to [...]

    ---
    Outline:
    (00:54) How is Alex Bores doing?
    (04:02) How to help
    (06:02) A quick note about other opportunities
    The original text contained 9 footnotes which were omitted from this narration.
    ---
    First published: April 27th, 2026
    Source: https://www.lesswrong.com/posts/pjSKdcBjfvjGexr6A/update-on-the-alex-bores-campaign
    ---
    Narrated by TYPE III AUDIO.

    7 min
  3. 1 day ago

    "Community misconduct disputes are not about facts" by mingyuan

    In criminal law, the prosecution and the defense each try to establish a timeline — what happened, where, when, who was involved — and thereby determine whether the defendant is actually guilty of a crime.[1] Community misconduct disputes are nothing like this. There is only rarely disagreement over facts, and even when there is, it is not the crux of the matter. Community disputes are not for litigating facts. What they are for[2] is litigating three things:

    1. The character of the accused
    2. The character of the accuser
    3. The importance of the accusation, in light of points 1 & 2

    I think basically all the terrible things that happen in community disputes are a result of this. When what's being ruled on is a person — their place in their community, their continued access to resources, their worth as a human being — the situation feels all-or-nothing, and often escalates out of control. This dynamic discourages people from speaking out about their experiences, both because they may be reluctant to 'ruin the person's life' over something non-catastrophic, and because they know that they will be opening themselves up to a punishing level of scrutiny and criticism, and may [...]

    The original text contained 2 footnotes which were omitted from this narration.
    ---
    First published: April 22nd, 2026
    Source: https://www.lesswrong.com/posts/cekDpXqjugt5Q3JnC/community-misconduct-disputes-are-not-about-facts
    ---
    Narrated by TYPE III AUDIO.

    3 min
  4. 1 day ago

    "The paper that killed deep learning theory" by LawrenceC

    Around 10 years ago, a paper came out that arguably killed classical deep learning theory: Zhang et al.'s aptly titled Understanding deep learning requires rethinking generalization. Of course, this is a bit of an exaggeration. No single paper ever kills a field of research on its own, and deep learning theory was not exactly the most productive and healthy field at the time this was published. But if I had to point to a single paper that shattered the feeling of optimism at the time, it would be Zhang et al. 2016.[1]

    Caption: believe it or not, this unassuming table rocked the field of deep learning theory back in 2016, despite probably involving fewer computational resources than what Claude 4.7 Opus consumed when I clicked the "Claude" button embedded into the LessWrong editor.

    Let's start by answering a question: what, exactly, do I mean by deep learning theory? At least in 2016, the answer was: "extending statistical learning theory to deep neural networks trained with SGD, in order to derive generalization bounds that would explain their behavior in practice". Since its conception in the mid-1980s, statistical learning theory had been the dominant approach for [...]

    The original text contained 2 footnotes which were omitted from this narration.
    ---
    First published: April 25th, 2026
    Source: https://www.lesswrong.com/posts/ZvQfcLbcNHYqmvWyo/the-paper-that-killed-deep-learning-theory
    ---
    Narrated by TYPE III AUDIO.

    11 min
  5. 4 days ago

    "Your Supplies Probably Won’t Be Stolen in a Disaster" by jefftk

    When I write about things like storing food or medication in case of disaster, one common response I get is that it doesn't matter: society will break down, and people who are stronger than you will take your stuff. This seemed plausible at first, but it's actually way off. Looking at past disasters, people mostly fall somewhere on a "kind and supportive" to "keep to themselves" spectrum. When there is looting, it's typically directed at stores, not homes, and violence is mostly in the streets. Having supplies at home lets you stay out of the way.

    One distinction worth making is between short (hurricane, earthquake) and long (siege, economic collapse, famine) disasters. Having what you need at home is really helpful in both cases, but differently so. In short disasters (the 1917 Halifax explosion, the London Blitz, the 1985 Mexico City earthquake, and the 2011 Japanese earthquake and tsunami) you typically see sharing and mutual aid. Stored supplies mean you're not competing for scarce resources, have slack to help others, and are more comfortable. Stories of looting in situations like this are often exaggerated or cherry-picked. I had heard post-Katrina New Orleans had [...]

    ---
    First published: April 23rd, 2026
    Source: https://www.lesswrong.com/posts/cNnRmwzQgz4bmd5i9/your-supplies-probably-won-t-be-stolen-in-a-disaster
    ---
    Narrated by TYPE III AUDIO.
    ---
    Images from the article: Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.

    4 min
  6. 5 days ago

    "10 posts I don’t have time to write" by habryka

    I am a busy man and will die knowing I have not said all I wanted to say. But maybe I can at least leave some IOUs behind.

    1) Blatant conflicts are the best kind. Ben Hoffman's "Blatant Lies are the Best Kind!" is maybe the best post title followed by the least clarifying post I have ever encountered. The title is honestly amazing, but the text of the post, instead of the straightforward argument the title promises, is an extremely dense and almost meta-fictional dialogue about the title. I think we probably should prosecute good lying more than bad lying, though of course that's tricky. I'd argue the same is true for other forms of conflict: passive aggression is worse than overt aggression, maybe, probably. I haven't written the post yet to figure it out, but it seems important to know.

    2) Fire codes are the root of all evil. Fire accidents seem to have the unique combination of producing extremely strong emotional responses from people in a local community, while also often being traceable to an O-ring-like failure that you can over-index on. Also, fire marshals are the closest [...]

    ---
    Outline:
    (00:18) 1) Blatant conflicts are the best kind
    (01:11) 2) Fire codes are the root of all evil
    (02:14) 3) It is extremely easy to get people to vouch for you, this makes public character references not very helpful
    (02:54) 4) Public criticism need not pass the ITT of the people critiqued
    (03:41) 5) Courts are amazing
    [... 5 more sections]
    ---
    First published: April 21st, 2026
    Source: https://www.lesswrong.com/posts/MqgwHJ93pJpaeHXs6/10-posts-i-don-t-have-time-to-write
    ---
    Narrated by TYPE III AUDIO.

    9 min
  7. 6 days ago

    "$50 million a year for a 10% chance to ban ASI" by Andrea_Miotti, Alex Amadori, Gabriel Alfour

    ControlAI's mission is to avert the extinction risks posed by superintelligent AI. We believe that in order to do this, we must secure an international prohibition on its development. We're working to make this happen through what we believe is the most natural and promising approach: helping decision-makers in governments and the public understand the risks and take action.

    We believe that ControlAI can achieve an international prohibition on ASI development if scaled sufficiently. We estimate that it would take approximately $50 million per year in funding to give us a concrete chance at achieving this in the next few years. To be more precise: conditional on receiving this funding in the next few months, we feel we would have ~10% probability of success. In this post, we lay out some of the reasoning behind this estimate, and explain how additional funding past that threshold would continue to significantly improve our chances of success, with $500 million a year producing an estimated ~30% probability of success.[1]

    Preventing ASI 101. Negotiating, implementing and enforcing an international prohibition on ASI is, in and of itself, not the work of a single non-profit. You [...]

    ---
    Outline:
    (01:17) Preventing ASI 101
    (05:44) Awareness is the bottleneck
    (09:38) An asymmetric war
    (12:08) Scalable processes
    (17:32) What we'd do with $50 million or more per year
    (18:45) US policy advocacy
    (21:22) Policy advocacy in the rest of the world
    (23:37) Public awareness
    (31:15) Grassroots mobilization
    (32:31) Policy work
    (33:59) Thought-leader advocacy
    (36:05) Attracting and retaining the best talent
    (37:18) Conclusion
    The original text contained 28 footnotes which were omitted from this narration.
    ---
    First published: April 21st, 2026
    Source: https://www.lesswrong.com/posts/TnAR5Sf5hphfnzNTr/usd50-million-a-year-for-a-10-chance-to-ban-asi-1
    ---
    Narrated by TYPE III AUDIO.

    40 min

Ratings & Reviews

4.8
out of 5
12 Ratings

About

Audio narrations of LessWrong posts. Includes all curated posts and all posts with 125+ karma. If you'd like more, subscribe to the "Lesswrong (30+ karma)" feed.
