LessWrong (30+ Karma)

LessWrong

Audio narrations of LessWrong posts.

  1. -1 H

    “How Colds Spread” by RobertM

    It seems like a catastrophic civilizational failure that we don't have confident common knowledge of how colds spread. There have been a number of studies conducted over the years, but most of those were testing secondary endpoints, like how long viruses would survive on surfaces, or how likely they were to be transmitted to people's fingers after touching contaminated surfaces, etc. However, a few of them involved rounding up some brave volunteers, deliberately infecting some of them, and then arranging matters so as to test various routes of transmission to uninfected volunteers. My conclusions from reviewing these studies are:

    - You can definitely infect yourself if you take a sick person's snot and rub it into your eyeballs or nostrils.
    - This probably works even if you touched a surface that a sick person touched, rather than by handshake, at least for some surfaces.
    - There's some evidence that actual human infection is much less likely if the contaminated surface you touched is dry, but for most colds there'll often be quite a lot of virus detectable on even dry contaminated surfaces for most of a day.
    - I think you can probably infect yourself with fomites, but my guess is that [...]

    Outline:
    (01:49) Fomites
    (06:58) Aerosols
    (16:23) Other Factors
    (17:06) Review
    (18:33) Conclusion

    The original text contained 16 footnotes which were omitted from this narration.

    First published: November 18th, 2025
    Source: https://www.lesswrong.com/posts/92fkEn4aAjRutqbNF/how-colds-spread
    Narrated by TYPE III AUDIO.

    21 min
  2. -1 H

    “Middlemen Are Eating the World (And That’s Good, Actually)” by Linch

    I think many people have some intuition that work can be separated between “real work” (farming, say, or building trains) and “middlemen” (e.g. accounting, salespeople, lawyers, bureaucrats, DEI strategists). “Bullshit Jobs” by David Graeber is a more intellectualized framing of the same intuition. Many people believe that middlemen are entirely useless, and we can get rid of (almost) all middleman jobs, RETVRN to people doing real work, and society would be much better off.

    Source: https://www.reddit.com/r/oddlyspecific/comments/1fpmtt8/pig_wearing_clothes_in_a_childrens_book_doing/ (It's not clear to me why a pig society would have a pork-cutting job. Seems rather macabre!)

    Like many populist intuitions, this intuition is completely backwards. Middlemen are extremely important! I think the last 200 years have been a resounding victory of the superiority of the middleman model. Better models of coordination are just extremely important, much more so than direct improvements in “object-level”/“direct”/“real”/“authentic” work.

    What do Middlemen even do? The world is not, by default, arranged in ways that are particularly conducive to human flourishing, happiness, or productive capacity. Sometimes, individuals try to rearrange the world's atoms to be better for human goals. Whenever you have an endeavor that requires more than two to three such people, or if those two to [...]

    Outline:
    (01:52) What do Middlemen even do?
    (03:29) Some historical trends
    (03:54) Early Middlemen
    (05:08) The 20th century: The good middlemen are truly middlemen
    (05:57) The Information Age
    (07:12) Takeaways and Future Work

    First published: November 17th, 2025
    Source: https://www.lesswrong.com/posts/ppGtJqcSe82ncZQNM/middlemen-are-eating-the-world-and-that-s-good-actually
    Narrated by TYPE III AUDIO.

    9 min
  3. -4 H

    “Why is American mass-market tea so terrible?” by RobertM

    Note: definitely true, especially my aesthetic preferences, and the speculative historical synthesis.

    There are some hedonic treadmills which, even after I've climbed them, let me enjoy the entry-level experience. In the case of tea, the entry-level experience[1] is a cheap tea bag of black tea. Sadly, tea does not have one of the nicer treadmills. Having spent the last few years exploring high-quality Chinese tea of various stripes, I now relegate a cup of tea brewed from a mass-market tea bag to emergency situations where I'm unable to brew my own. My claim is that this is not driven by obvious economic factors, but is an accident of history that is quickly righting itself.

    Bottom-shelf tea bags produce low-quality black tea. (The proof is trivial and is left as an exercise to the reader.) Why so? CTC stands for "crush, tear, curl", and is a mechanical method of rapidly oxidizing[2] tea invented in 1930. The resulting brews tend toward the astringent, tannic, and unsubtle. Fine if you're adding milk to it, but sort of unpleasant otherwise. If you're drinking black tea from a mass-market brand, it's almost certainly CTC tea. Americans, today, are not buying CTC tea because it's cheaper. [...]

    The original text contained 12 footnotes which were omitted from this narration.

    First published: November 17th, 2025
    Source: https://www.lesswrong.com/posts/NvGGp3ASHXtt7xXdZ/why-is-american-mass-market-tea-so-terrible
    Narrated by TYPE III AUDIO.

    5 min
  4. -6 H

    “An Analogue Of Set Relationships For Distribution” by johnswentworth, David Lorell

    Audio note: this article contains 86 uses of LaTeX notation, so the narration may be difficult to follow. There's a link to the original text in the episode description.

    Here's a conceptual problem David and I have been lightly tossing around the past couple days. “A is a subset of B” we might visualize like this: [diagram omitted] If we want a fuzzy/probabilistic version of the same diagram, we might draw something like this: [diagram omitted] And we can easily come up with some ad-hoc operationalization of that “fuzzy subset” visual. But we’d like a principled operationalization. Here's one that I kinda like, based on maxent machinery.

    Background Concept 1: $E[-\log P[X]] \leq H_P(X)$ Encodes The Same Information About $X$ As $P$ Itself

    First, a background concept. Consider this maxent problem:

    $\max_{P'} -\sum_X P'[X] \log P'[X] \quad \text{s.t.} \quad -\sum_X P'[X] \log P[X] \leq -\sum_X P[X] \log P[X]$

    Or, more compactly:

    $\text{maxent}[X] \quad \text{s.t.} \quad E[-\log P[X]] \leq H_P(X)$

    In English: what is the maximum entropy distribution $P'$ for which (the average number of bits used to encode a sample from $P'$ using a code optimized for distribution $P$) is at most (the average number of bits used to encode a sample from $P$ using a code optimized for $P$)? The solution [...]

    Outline:
    (01:10) Background Concept 1: $E[-\log P[X]] \leq H_P(X)$ Encodes The Same Information About $X$ As $P$ Itself
    (02:35) Background Concept 2: ... So Let's Use Maxent To Fuse Distributions?
    (05:48) Something Like A Subset Relation?

    First published: November 18th, 2025
    Source: https://www.lesswrong.com/posts/wBpguFgkygpQEGSyX/an-analogue-of-set-relationships-for-distribution
    Narrated by TYPE III AUDIO.

    (A minimal numerical sketch of the maxent claim above follows this entry.)

    9 min
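
    The following is a minimal numerical sketch, not from the post, of the claim in "Background Concept 1" above: the maximum-entropy distribution P' satisfying E[-log P[X]] <= H_P(X) should be P itself (by Gibbs' inequality, H(P') <= E_{P'}[-log P[X]] <= H_P(X), with equality only at P' = P). The example distribution P and the choice of scipy's SLSQP solver are illustrative assumptions, not anything specified by the authors.

        # Sketch (assumes numpy and scipy are available): maximize entropy of q
        # subject to E_q[-log P] <= H_P(X), and check that the optimum recovers P.
        # The values of P below are arbitrary example numbers, not from the post.
        import numpy as np
        from scipy.optimize import minimize

        P = np.array([0.5, 0.3, 0.15, 0.05])         # arbitrary reference distribution P
        H_P = -np.sum(P * np.log(P))                 # entropy of P, the constraint bound

        def neg_entropy(q):
            q = np.clip(q, 1e-12, None)
            return np.sum(q * np.log(q))             # minimizing this maximizes entropy of q

        constraints = [
            {"type": "eq",   "fun": lambda q: np.sum(q) - 1.0},               # q sums to 1
            {"type": "ineq", "fun": lambda q: H_P + np.sum(q * np.log(P))},   # H_P - E_q[-log P] >= 0
        ]

        result = minimize(neg_entropy, x0=np.full(len(P), 1.0 / len(P)),
                          bounds=[(1e-9, 1.0)] * len(P),
                          constraints=constraints, method="SLSQP")

        print(np.round(result.x, 3))                 # expect roughly [0.5, 0.3, 0.15, 0.05], i.e. P

    Up to solver tolerance, the printed optimum should match P, which is the sense in which the constraint "encodes the same information about X as P itself."
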
  5. -7 H

    “AI 2025 - Last Shipmas” by Simon Lermen

    ACT I: CHRISTMAS EVE

    It all starts with a cryptic tweet from Jimmy Apples on X. The tweet by Jimmy Apples makes people at other AI labs quite nervous. It spurs a rush in the other AI labs to get their own automated R&D going. They announce fully automated AI R&D on Christmas Eve during the annual “12 days of Shipmas”. Initially, AI agents work on curating data, tuning parameters, and improving RL environments to try to hill-climb evaluations much like human researchers do. The main alignment effort at OpenAI at this stage consists of a new type of inoculation prompt that has recently been developed internally. Inoculation prompting is the practice of training the AI on examples where it misbehaves, but adding system prompts that instruct it to misbehave in those cases. The idea is that the model will then only misbehave when given that system message. xAI, not wanting to fall behind, rushes to match OpenAI's progress. Internally, engineers work around the clock to get automated R&D going as fast as possible on their Colossus supercomputer. The race is on.

    ACT II: THE RACE BEGINS

    Within days, all labs begin massive efforts to run their automated AI R&D. OpenAI [...]

    Outline:
    (00:11) ACT I: CHRISTMAS EVE
    (02:28) ACT II: THE RACE BEGINS
    (05:50) ACT III: THE ACCIDENTS
    (10:07) ACT IV: THE ESCAPE
    (12:29) ACT V: DAMAGE CONTROL
    (15:47) ACT VI: THE KILLING
    (18:35) ACT VII: THE END

    First published: November 17th, 2025
    Source: https://www.lesswrong.com/posts/PeW3Fa4gzQ5byZJxu/ai-2025-last-shipmas
    Narrated by TYPE III AUDIO.

    19 min
  6. -8 H

    “Varieties Of Doom” by jdp

    There has been a lot of talk about "p(doom)" over the last few years. This has always rubbed me the wrong way, because "p(doom)" didn't feel like it mapped to any specific belief in my head. In private conversations I'd sometimes give my p(doom) as 12%, with the caveat that "doom" seemed nebulous and conflated between several different concepts. At some point it was decided that a p(doom) over 10% makes you a "doomer", because it means what actions you should take with respect to AI are overdetermined. I did not and do not feel that is true. But any time I felt prompted to explain my position, I'd find I could explain a little bit of this or that, but not really convey the whole thing. As it turns out, doom has a lot of parts, and every part is entangled with every other part, so no matter which part you explain you always feel like you're leaving the crucial parts out. Doom is more like an onion than a single event: a distribution over AI outcomes that people frequently respond to with the force of the fear of death. Some of these outcomes are less than death and some [...]

    Outline:
    (03:46) 1. Existential Ennui
    (06:40) 2. Not Getting Immortalist Luxury Gay Space Communism
    (13:55) 3. Human Stock Expended As Cannon Fodder Faster Than Replacement
    (19:37) 4. Wiped Out By AI Successor Species
    (27:57) 5. The Paperclipper
    (42:56) Would AI Successors Be Conscious Beings?
    (44:58) Would AI Successors Care About Each Other?
    (49:51) Would AI Successors Want To Have Fun?
    (51:11) VNM Utility And Human Values
    (55:57) Would AI Successors Get Bored?
    (01:00:16) Would AI Successors Avoid Wireheading?
    (01:06:07) Would AI Successors Do Continual Active Learning?
    (01:06:35) Would AI Successors Have The Subjective Experience of Will?
    (01:12:00) Multiply
    (01:15:07) 6. Recipes For Ruin
    (01:18:02) Radiological and Nuclear
    (01:19:19) Cybersecurity
    (01:23:00) Biotech and Nanotech
    (01:26:35) 7. Large-Finite Damnation

    First published: November 17th, 2025
    Source: https://www.lesswrong.com/posts/apHWSGDiydv3ivmg6/varieties-of-doom
    Narrated by TYPE III AUDIO.

    1 h 39 min

About

Audio narrations of LessWrong posts.
