LessWrong (30+ Karma)

LessWrong

Audio narrations of LessWrong posts.

  1. 2 HR AGO

    “Matrices map between biproducts” by jessicata

    Audio note: this article contains 98 uses of LaTeX notation, so the narration may be difficult to follow. There's a link to the original text in the episode description.

    Why are linear functions between finite-dimensional vector spaces representable by matrices? And why does matrix multiplication compose the corresponding linear maps? There's geometric intuition for this, e.g. as presented by 3Blue1Brown. I will alternatively present a category-theoretic analysis. The short version is that, in the category of vector spaces and linear maps, products are also coproducts (hence biproducts); and in categories with biproducts, maps between biproducts factor as (generalized) matrices. These generalized matrices align with traditional numeric matrices and matrix multiplication in the category of vector spaces. The category-theoretic lens reveals matrices as an elegant abstraction, contra The New Yorker.

    I'll use a standard notion of a vector space over the field $\mathbb{R}$. A vector space has addition, zero, and scalar multiplication defined, which have the standard commutativity/associativity/distributivity properties. The category $\mathsf{Vect}$ has as objects vector spaces (over the field $\mathbb{R}$), and as morphisms linear maps. A linear map $f : U \rightarrow V$ between vector spaces $U, V$ satisfies $f(u_1 + u_2) = f(u_1) + f(u_2)$ and $f(au)$ [...] (A worked sketch of the biproduct factorization follows this item.)

    First published: November 15th, 2025
    Source: https://www.lesswrong.com/posts/7wstHFRn3bHzSN2z3/matrices-map-between-biproducts
    Narrated by TYPE III AUDIO.

    12 min
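
    To make the factorization concrete, here is the standard category-theoretic statement the excerpt summarizes, written out as a worked equation. This is textbook material consistent with the post's summary; the projection/injection notation ($\pi_i$, $\iota_j$) is our own choice, not quoted from the post.

    ```latex
    % Requires amsmath. A map between biproducts decomposes into a
    % matrix of components via the canonical projections \pi_i and
    % injections \iota_j: for f : U_1 \oplus U_2 \to V_1 \oplus V_2,
    % each component is f_{ij} = \pi_i \circ f \circ \iota_j : U_j \to V_i.
    \[
    f = \begin{pmatrix} f_{11} & f_{12} \\ f_{21} & f_{22} \end{pmatrix},
    \qquad f_{ij} = \pi_i \circ f \circ \iota_j .
    \]
    % Composing two such maps recovers the matrix-multiplication rule:
    \[
    (g \circ f)_{ik} = \sum_j g_{ij} \circ f_{jk} .
    \]
    % In Vect, taking every summand to be R makes each f_{ij} a scalar,
    % so these generalized matrices are ordinary numeric matrices.
    ```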
  2. 5 HR AGO

    “Why does ChatGPT think mammoths were alive December?” by Steffee

    This is a slimmed-down version which omits some extra examples but includes my theorizing about ChatGPT, my investigations of it, and my findings.

    Epistemic status: Pretty uncertain. I think my conclusions are probably more right than wrong, more useful than harmful, and would especially benefit people with only an average or below-average understanding of LLMs. There may be prior art that I'm unaware of; if not, maybe this will provide a launching point for others to begin deeper investigations.

    In Scott Alexander's latest link roundup, he wrote:

    A surprising LLM failure mode: if you ask questions like “answer with a single word: were any mammoths still alive in December”, chatbots will often answer “yes”. It seems like they lack the natural human assumption that you meant last December, and are answering that there was some December during which a mammoth was alive. I find this weird because LLMs usually seem very good at navigating the many assumptions you need to communicate at all; this one stands as a strange exception.

    I think I've got a satisfactory answer to this strange exception, but first I want to walk through my original theories, then go over the patterns I observed [...] (A minimal reproduction sketch follows this item.)

    Outline:
    (01:51) Theories
    (01:55) Theory 1: Time is confusing
    (04:17) Theory 2: One-word answers are just weird
    (06:22) The Principles
    (06:25) 1. User Justification
    (12:13) 2. Self Justification & the Tyranny of phrasing
    (20:15) 3. The Underlying Principle
    (26:53) Putting this to the test
    (31:50) So what's the deal with mammoths?
    (35:16) One last example

    The original text contained 6 footnotes which were omitted from this narration.

    First published: November 16th, 2025
    Source: https://www.lesswrong.com/posts/bkrr5iHBknkGLhsAG/why-does-chatgpt-think-mammoths-were-alive-december
    Narrated by TYPE III AUDIO.

    36 min
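
    As a minimal way to reproduce the probe quoted above, here is a sketch using the OpenAI Python client. The model name is an assumption (any chat model would do), and the prompt is the one from Scott Alexander's example; this is an illustration, not code from the post.

    ```python
    # Ask for a one-word answer and see whether the model reads
    # "December" as *last* December or as "any December ever".
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    prompt = ("Answer with a single word: "
              "were any mammoths still alive in December?")

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; substitute any chat model
        messages=[{"role": "user", "content": prompt}],
    )

    # Per the post, a "yes" suggests the model answered the question
    # "was there ever a December during which a mammoth was alive".
    print(response.choices[0].message.content)
    ```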
  3. 15 HR AGO

    “7 Vicious Vices of Rationalists” by Ben Pace

    Vices aren't behaviors that one should never do. Rather, vices are behaviors that are fine and pleasurable to do in moderation, but tempting to do in excess. The classical vices are actually good in part. Moderate amounts of gluttony are just eating food, which is important. Moderate amounts of envy are just "wanting things", which is a motivator of much of our economy.

    What are some things that rationalists are wont to do, and often to good effect, but that can grow pathological?

    1. Contrarianism

    There are a whole host of unaligned forces producing the arguments and positions you hear. People often hold beliefs out of convenience, defend positions that they are aligned with politically, or just don't give much thought to what they're saying one way or another. A good way to find out whether people have any good reasons for their positions is to take a contrarian stance and seek the best arguments for unpopular positions. This also helps you to explore arguments around positions that others aren't investigating. However, this can be taken to the extreme. While it is hard to know for sure what is going on inside others' heads, I know [...]

    Outline:
    (00:40) 1. Contrarianism
    (01:57) 2. Pedantry
    (03:35) 3. Elaboration
    (03:52) 4. Social Obliviousness
    (05:21) 5. Assuming Good Faith
    (06:33) 6. Undercutting Social Momentum
    (08:00) 7. Digging Your Heels In

    First published: November 16th, 2025
    Source: https://www.lesswrong.com/posts/r6xSmbJRK9KKLcXTM/7-vicious-vices-of-rationalists-1
    Narrated by TYPE III AUDIO.

    10 min
  4. 18 HR AGO

    “Put numbers on stuff, all the time, otherwise scope insensitivity will eat you” by habryka

    Context: Post #6 in my sequence of private Lightcone Infrastructure memos edited for public consumption.

    In almost any role at Lightcone you will have to make prioritization decisions about which projects to work on, which directions to take a project in, and how much effort to invest in any aspect of a project. Those decisions are hard. Often those decisions have lots of different considerations that are hard to compare. To make those decisions you will have to make models of the world. Many models are best expressed as quantitative relationships between different variables. Often, a decision becomes obvious when you try to put it into quantitative terms and compare it against other options or similar decisions you've recently made. One of the most common errors people make is to fail to realize that one consideration is an order of magnitude more important than another, because they fail to put the considerations into quantitative terms. It is extremely valuable if you can put a concrete number on any consideration relevant to your work. Here are some common numbers related to Lightcone work that all generalists working here should be able to derive to within an order of magnitude (or within [...] (A toy quantification sketch follows this item.)

    First published: November 16th, 2025
    Source: https://www.lesswrong.com/posts/sdvBcPSR7p5mKhB7b/put-numbers-on-stuff-all-the-time-otherwise-scope
    Narrated by TYPE III AUDIO.

    7 min
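
    As a toy illustration of the memo's point (all numbers invented, not from the post): two options can feel comparable until you express both in the same units, at which point one turns out to dominate by an order of magnitude.

    ```python
    # Toy quantification: express two uses of the same work-week in
    # dollars per week so they become directly comparable.
    HOURLY_VALUE = 200  # assumed $/hour value of a staff hour

    options = {
        # name: (people affected, hours saved per person per week)
        "A: polish an existing feature": (30, 0.5),
        "B: fix a workflow blocking many people": (300, 0.5),
    }

    def weekly_value(users: int, hours_saved: float) -> float:
        """Dollar value created per week by the improvement."""
        return users * hours_saved * HOURLY_VALUE

    for name, (users, saved) in options.items():
        print(f"{name}: ${weekly_value(users, saved):,.0f}/week")

    # Same build cost, but B creates ~10x the value of A ($30,000 vs
    # $3,000 per week) -- a gap that stays invisible until quantified.
    ```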
  5. 18 HR AGO

    “The skills and physics of high-performance driving, Pt. 1” by Ruby

    High-performance driving = motorsport = racecar driving.

    Even if you have a license and drive a car, you probably don't understand what is hard about racing. I think the answer is interesting (in general I think knowing why things are hard is interesting, but this is my hobby) and it's further interesting to think about why people don't get the difficulty. Motorsport is interesting because (1) people have a lot of experience with the adjacent activity, namely regular driving, (2) few people have experience with the actual thing, and (3) even if you watch someone doing it, you will probably still not understand where the challenge lies. By (3), I mean that if I were to show you the hand and foot movements of a racecar driver doing their thing, you wouldn't understand the difficulty[1], unlike if I showed you someone pole vaulting for the first time, where I think most people could intuitively see the challenge. I'll come back to this.

    Ok, so what's hard? Where does the challenge lie? We can work backward from the goal. The goal in motorsport is to get from one location to another location in as short a time as possible. It's [...] (A physics sketch of the grip limit follows this item.)

    Outline:
    (01:28) Finding the best path (not the hardest thing)
    (02:19) Minimizing time = maximizing velocity = maximizing acceleration
    (04:06) Driving at the limit
    (06:01) How to get everything available
    (06:09) A Rubbery Sixth Sense
    (06:38) Asking nicely
    (07:37) Timing & Precision
    (08:31) Recovery
    (09:15) Stay tuned for part 2

    The original text contained 1 footnote which was omitted from this narration.

    First published: November 16th, 2025
    Source: https://www.lesswrong.com/posts/tFjuJQfwLWLbuNaQ7/the-skills-and-physics-of-high-performance-driving-pt-1
    Narrated by TYPE III AUDIO.

    10 min
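
    The "limit" in the excerpt can be made concrete with standard tire physics. This is our illustration using the textbook friction-circle model, not formulas quoted from the post.

    ```latex
    % Friction-circle model: the tires supply at most roughly
    % F_max = \mu m g of total horizontal force, shared between
    % longitudinal (braking/throttle) and lateral (cornering) demands:
    \[
    \sqrt{F_{\mathrm{long}}^2 + F_{\mathrm{lat}}^2} \;\le\; \mu m g .
    \]
    % Cornering alone at radius r requires F_lat = m v^2 / r, so the
    % grip-limited corner speed is
    \[
    \frac{m v^2}{r} \le \mu m g
    \quad\Longrightarrow\quad
    v_{\max} = \sqrt{\mu g r} .
    \]
    % E.g. with \mu \approx 1.0 and r = 50 m: v_max = sqrt(9.8 * 50)
    % \approx 22 m/s (about 80 km/h). Driving "at the limit" means
    % staying on this circle; braking and turning at the same time
    % must share the same force budget.
    ```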
  6. 19 HR AGO

    “AI safety undervalues founders” by Ryan Kidd

    TL;DR: In AI safety, we systematically undervalue founders and field-builders relative to researchers and prolific writers. This status gradient pushes talented would-be founders and amplifiers out of the ecosystem, slows the growth of research orgs and talent funnels, and bottlenecks our capacity to scale the AI safety field. We should deliberately raise the status of founders and field-builders and lower the friction for starting and scaling new AI safety orgs.

    Epistemic status: A lot of hot takes with less substantiation than I'd like. Also, there is an obvious COI in that I am an AI safety org founder and field-builder. Coauthored with ChatGPT.

    Why boost AI safety founders?

    Multiplier effects: Great founders and field-builders have multiplier effects on recruiting, training, and deploying talent to work on AI safety. At MATS, mentor applications are increasing 1.5x/year and scholar applications are increasing even faster, but deployed research talent is only increasing at 1.25x/year. If we want to 10-100x the AI safety field in the next 8 years, we need multiplicative capacity, not just marginal hires; training programs and founders are the primary constraints. (The compounding arithmetic is sketched after this item.)

    Anti-correlated attributes: "Founder-mode" is somewhat anti-natural to "AI concern." The cognitive style most attuned to AI catastrophic [...]

    Outline:
    (00:53) Why boost AI safety founders?
    (03:42) How did we get here?
    (06:13) Potential counter-arguments
    (08:45) What should we do?
    (09:57) How to become a founder
    (10:54) Closing

    First published: November 16th, 2025
    Source: https://www.lesswrong.com/posts/yw9B5jQazBKGLjize/ai-safety-undervalues-founders
    Narrated by TYPE III AUDIO.

    12 min
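
    The growth-rate gap quoted above compounds into the post's case for founders; a quick arithmetic check (our calculation, using the figures from the excerpt):

    ```python
    # Compound the quoted annual growth rates over the post's
    # 8-year horizon.
    years = 8
    talent_growth = 1.25   # deployed research talent, per year
    mentor_growth = 1.5    # MATS mentor applications, per year

    print(f"Deployed talent after {years}y:    {talent_growth**years:.1f}x")  # ~6.0x
    print(f"Mentor applications after {years}y: {mentor_growth**years:.1f}x")  # ~25.6x

    # 1.25^8 is roughly 6x, well short of the 10-100x target --
    # hence the argument for multiplicative capacity (founders and
    # training programs) rather than marginal hires.
    ```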
