LessWrong (30+ Karma)

LessWrong

Audio narrations of LessWrong posts.

  1. 5 hours ago

    “7 Vicious Vices of Rationalists” by Ben Pace

    Vices aren't behaviors that one should never do. Rather, vices are behaviors that are fine and pleasurable to do in moderation, but tempting to do in excess. The classical vices are actually good in part. Moderate amounts of gluttony are just eating food, which is important. Moderate amounts of envy are just "wanting things", which is a motivator of much of our economy. What are some things that rationalists are wont to do, and often to good effect, but that can grow pathological? 1. Contrarianism There are a whole host of unaligned forces producing the arguments and positions you hear. People often hold beliefs out of convenience, defend positions that they are aligned with politically, or just don't give much thought to what they're saying one way or another. A good way to find out whether people have any good reasons for their positions is to take a contrarian stance, and to seek the best arguments for unpopular positions. This also helps you to explore arguments around positions that others aren't investigating. However, this can be taken to the extreme. While it is hard to know for sure what is going on inside others' heads, I know [...] --- Outline: (00:40) 1. Contrarianism (01:57) 2. Pedantry (03:35) 3. Elaboration (03:52) 4. Social Obliviousness (05:21) 5. Assuming Good Faith (06:33) 6. Undercutting Social Momentum (08:00) 7. Digging Your Heels In --- First published: November 16th, 2025 Source: https://www.lesswrong.com/posts/r6xSmbJRK9KKLcXTM/7-vicious-vices-of-rationalists-1 --- Narrated by TYPE III AUDIO.

    10 minutes
  2. 8 hours ago

    “Put numbers on stuff, all the time, otherwise scope insensitivity will eat you” by habryka

    Context: Post #6 in my sequence of private Lightcone Infrastructure memos edited for public consumption. In almost any role at Lightcone you will have to make prioritization decisions about which projects to work on, which directions to take a project in, and how much effort to invest in any aspect of a project. Those decisions are hard. Often those decisions have lots of different considerations that are hard to compare. To make those decisions you will have to make models of the world. Many models are best expressed as quantitative relationships between different variables. Often, a decision becomes obvious when you try to put it into quantitative terms and compare it against other options or similar decisions you've recently made. One of the most common errors people make is to fail to realize that one consideration is an order of magnitude more important than another because they fail to put the considerations into quantitative terms. It is extremely valuable if you can put a concrete number on any consideration relevant to your work. Here are some common numbers related to Lightcone work that all generalists working here should be able to derive to within an order of magnitude (or within [...] --- First published: November 16th, 2025 Source: https://www.lesswrong.com/posts/sdvBcPSR7p5mKhB7b/put-numbers-on-stuff-all-the-time-otherwise-scope --- Narrated by TYPE III AUDIO.

    7 minutes
  3. 8 hours ago
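
The episode's core move, comparing considerations in explicit numbers rather than verbally, can be sketched as a toy Fermi estimate. All figures below are hypothetical illustrations, not numbers from the post:

```python
# Toy Fermi comparison: put rough numbers on two competing projects
# instead of weighing them in words. All inputs are made up.

def hours_saved_per_year(users: int, minutes_saved_per_use: float,
                         uses_per_user_per_year: int) -> float:
    """Rough annual time saved by a proposed improvement."""
    return users * uses_per_user_per_year * minutes_saved_per_use / 60

# Option A: polish an onboarding flow a few hundred people hit once.
option_a = hours_saved_per_year(users=300, minutes_saved_per_use=10,
                                uses_per_user_per_year=1)

# Option B: shave a few seconds off a page thousands of people load daily.
option_b = hours_saved_per_year(users=5_000, minutes_saved_per_use=0.1,
                                uses_per_user_per_year=250)

print(f"Option A: {option_a:.0f} hours/year")  # Option A: 50 hours/year
print(f"Option B: {option_b:.0f} hours/year")  # Option B: 2083 hours/year
```

Written out verbally, the two options sound comparable; the numbers show one is more than an order of magnitude larger, which is the failure mode the post warns about.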

    “The skills and physics of high-performance driving, Pt. 1” by Ruby

    High performance driving = motorsport = racecar driving Even if you have a license and drive a car, you probably don't understand what is hard about racing. I think the answer is interesting (in general I think knowing why things are hard is interesting, but this is my hobby) and it's further interesting to think about why people don't get the difficulty. Motorsport is interesting because (1) people have a lot of experience with the adjacent activity, namely regular driving, (2) few people have experience with the actual thing, and (3) even if you watch someone doing it, you will probably still not understand where the challenge lies. By (3), I mean if I were to show you the hand and foot movements of a racecar driver doing their thing, you wouldn't understand the difficulty[1], unlike if I showed you someone pole vaulting for the first time, where I think most people could intuitively see the challenge. I'll come back to this. Ok, so what's hard? Where does the challenge lie? We can work backward from the goal. The goal in motorsport is to get from one location to another location in as short a time as possible. It's [...] --- Outline: (01:28) Finding the best path (not the hardest thing) (02:19) Minimizing time = maximizing velocity = maximizing acceleration (04:06) Driving at the limit (06:01) How to get everything available (06:09) A Rubbery Sixth Sense (06:38) Asking nicely (07:37) Timing & Precision (08:31) Recovery (09:15) Stay tuned for part 2 The original text contained 1 footnote which was omitted from this narration. --- First published: November 16th, 2025 Source: https://www.lesswrong.com/posts/tFjuJQfwLWLbuNaQ7/the-skills-and-physics-of-high-performance-driving-pt-1 --- Narrated by TYPE III AUDIO. --- Images from the article: Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.

    10 minutes
  4. 9 hours ago
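
The episode's "minimizing time = maximizing velocity = maximizing acceleration" framing has a textbook physics core: in a steady corner, centripetal acceleration v²/r cannot exceed the grip limit μg, which caps cornering speed at v = √(μgr). A minimal sketch with illustrative numbers (the friction coefficients and corner radius are assumptions, not figures from the episode):

```python
import math

# Tire grip caps centripetal acceleration at mu * g, so the highest
# steady-state speed through a corner of radius r is sqrt(mu * g * r).

def max_corner_speed(mu: float, radius_m: float, g: float = 9.81) -> float:
    """Grip-limited cornering speed in m/s."""
    return math.sqrt(mu * g * radius_m)

# A typical road tire (mu ~ 0.9) vs a racing slick (mu ~ 1.4)
# through the same 50 m radius corner:
v_road = max_corner_speed(0.9, 50)   # ~21 m/s (~76 km/h)
v_race = max_corner_speed(1.4, 50)   # ~26 m/s (~94 km/h)
```

"Driving at the limit" means holding the car near that grip ceiling through every corner; the formula also shows why the best line matters, since a wider line increases the effective radius r and with it the allowable speed.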

    “AI safety undervalues founders” by Ryan Kidd

    TL;DR: In AI safety, we systematically undervalue founders and field‑builders relative to researchers and prolific writers. This status gradient pushes talented would‑be founders and amplifiers out of the ecosystem, slows the growth of research orgs and talent funnels, and bottlenecks our capacity to scale the AI safety field. We should deliberately raise the status of founders and field-builders and lower the friction for starting and scaling new AI safety orgs. Epistemic status: A lot of hot takes with less substantiation than I'd like. Also, there is an obvious COI in that I am an AI safety org founder and field-builder. Coauthored with ChatGPT. Why boost AI safety founders? Multiplier effects: Great founders and field-builders have multiplier effects on recruiting, training, and deploying talent to work on AI safety. At MATS, mentor applications are increasing 1.5x/year and scholar applications are increasing even faster, but deployed research talent is only increasing at 1.25x/year. If we want to 10-100x the AI safety field in the next 8 years, we need multiplicative capacity, not just marginal hires; training programs and founders are the primary constraints. Anti-correlated attributes: “Founder‑mode” is somewhat anti‑natural to “AI concern.” The cognitive style most attuned to AI catastrophic [...] --- Outline: (00:53) Why boost AI safety founders? (03:42) How did we get here? (06:13) Potential counter-arguments (08:45) What should we do? (09:57) How to become a founder (10:54) Closing --- First published: November 16th, 2025 Source: https://www.lesswrong.com/posts/yw9B5jQazBKGLjize/ai-safety-undervalues-founders --- Narrated by TYPE III AUDIO.

    12 minutes
  5. 16 hours ago
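
The growth rates quoted in the episode (mentor applications up 1.5x/year, deployed research talent only 1.25x/year, against a goal of 10-100x field growth in 8 years) compound very differently. A quick check of the arithmetic:

```python
# Compounding the episode's quoted annual growth rates over its 8-year horizon.

years = 8
applications_growth = 1.5 ** years   # ~25.6x: application growth clears 10x
talent_growth = 1.25 ** years        # ~6.0x: deployed talent falls short of 10x

# Annual growth rate needed to reach a target multiple in `years` years:
needed_for_10x = 10 ** (1 / years)    # ~1.33x/year
needed_for_100x = 100 ** (1 / years)  # ~1.78x/year
```

So at 1.25x/year, talent deployment misses even the low end of the 10-100x target, which is the capacity bottleneck the post attributes to a shortage of founders and training programs.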

    “Don’t use the phrase ‘human values’” by Nina Panickssery

    I really dislike the phrase "human values". I think it's confusing because: It obscures a distinction between human preferences and normative values, i.e. what the author thinks is good in a moral sense Insofar as the author thinks that fulfilling human preferences is good, they often leave this unjustified It's unclear who the "human" in "human values" is. Is it... The person using the phrase "human values" Any particular single individual Most humans Certain humans Humans as they will be in the future, or after "reflection" It's often used with an unjustified implicit assumption, like: Humans will eventually converge on deciding to value the same stuff, provided enough time/intelligence/information What most/some humans value must be objectively good to pursue Because they have intuitive access to moral truths, or Because of an ethical framework that necessitates this, like preference utilitarianism Most humans already value the same stuff Instead of "human values", people should either: Talk about someone's preferences Their own preferences The preferences most / some people share The preferences they / some / most people would have after reflecting A special type of subset of those preferences (e.g. preferences that stay [...] --- First published: November 15th, 2025 Source: https://www.lesswrong.com/posts/dyDpJyNLgAHTAHeX9/don-t-use-the-phrase-human-values --- Narrated by TYPE III AUDIO.

    2 minutes
  6. 1 day ago

    “Increasing marginal returns to effort are common” by habryka

    Context: Every Sunday I write a mini-essay about an operating principle of Lightcone Infrastructure that I want to remind my team about. This is post #5 in the sequence of these essays, lightly edited and expanded upon in more canonical form. Most things have diminishing marginal returns. I often repeat the Pareto Principle to others: "You can get 80% of the benefit here with the right 20% of the cost", which is a particularly extreme case of diminishing marginal returns. But I think for much of the work that Lightcone does, the returns to effort are generally increasing, not decreasing. To explain, let's start with the simplest toy case of a situation in which trying harder at something gets more valuable the more you are already trying: A winner-takes-all competition. If you are in a competition where the top performer takes all winnings, then doing half as well as the other contestants predictably gets you 0% of the value. Indeed, inasmuch as you are racing against identical candidates that put in 99% of the possible effort, and your performance is a direct result of the effort you put in, all the value is generated by you going from 98% [...] --- First published: November 15th, 2025 Source: https://www.lesswrong.com/posts/swymiotpbYFv9pnEk/increasing-marginal-returns-to-effort-are-common --- Narrated by TYPE III AUDIO. --- Images from the article: Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.

    13 minutes
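
The winner-takes-all toy case from the episode can be written out directly: the payoff is a step function of relative performance, so nearly all the value sits in the final sliver of effort, i.e. marginal returns increase rather than diminish.

```python
# Toy winner-takes-all model: you receive the prize only if you outperform
# the rival, so payoff is a step function of effort, not a smooth curve.

def payoff(my_effort: float, rival_effort: float, prize: float = 1.0) -> float:
    """Winner takes all: full prize if you outperform the rival, else nothing."""
    return prize if my_effort > rival_effort else 0.0

rival = 0.99  # rivals already putting in 99% of the possible effort

print(payoff(0.50, rival))  # 0.0 — half the effort earns 0% of the value
print(payoff(0.98, rival))  # 0.0 — near-maximal effort still earns nothing
print(payoff(1.00, rival))  # 1.0 — all the value comes from the last step
```

This is the extreme case; the post's broader claim is that much of Lightcone's work sits closer to this regime than to the diminishing-returns regime the Pareto Principle describes.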
