LessWrong (30+ Karma)

Audio narrations of LessWrong posts.

  1. 6H AGO

    “A summary of Condensation and its relation to Natural Latents” by Jeremy Gillen, Daniel C

    Short summary of Condensation

    Condensation is a theory of concepts by Sam Eisenstat. The paper can be read here. Abram wrote a review on LessWrong, and a follow-up. The paper is very much worth reading, and can be skimmed to just understand the motivation if you're time-constrained. What is a concept? One rough pointer is "the little packets of information that are the meaning of a word or phrase". Another is the clusters found in the cluster structure of thingspace. Maybe we could think of them as little chunks of a generative model that compose together to make a complete model of a complex world. It often seems to be the case that this kind of chunking is arbitrary, in the sense that there isn't one way to chunk up a generative model; it just depends on the hypothesis class, prior, and implementation details. But some concepts seem to be more objective than this, in that we would expect different agents with different implementation details to use the same concept. One question that Condensation answers is "under what conditions can concepts be more objective?". The theory provides some assumptions and metrics that can be used to select among latent [...]

    Outline:
    (00:12) Short summary of Condensation
    (02:01) How does condensation work?
    (03:27) Example application
    (08:06) Selected Results from Sam's paper
    (08:10) When is perfect condensation possible?
    (08:56) To what extent will LVMs be approximately equivalent?
    (10:26) Correspondences between Natural Latents and Condensations
    (10:43) Definitions
    (10:46) Natural Latent
    (11:02) Condensation
    (11:46) An approximate condensation can be used to construct an approximate weak natural latent
    (13:31) When can we recover an approximate strong natural latent?
    (15:16) If we have a natural latent we can construct a condensation
    (16:19) Comparison of agreement theorems in condensation and natural latents

    The original text contained 9 footnotes which were omitted from this narration.

    First published: March 4th, 2026
    Source: https://www.lesswrong.com/posts/agw7HhW4cWjADpBgo/a-summary-of-condensation-and-its-relation-to-natural

    Narrated by TYPE III AUDIO.

    Images from the article: Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.
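    For orientation before listening, here is a minimal sketch of what a natural latent is. It is written from memory of the natural latents framework rather than taken from the post, so treat the exact conditions as an approximation: a latent Λ over observables X_1, ..., X_n is natural when, at least approximately,

        % Mediation: the observables are independent of each other given the latent
        P(x_1, \dots, x_n \mid \lambda) = \prod_{i=1}^{n} P(x_i \mid \lambda)
        % Redundancy: the latent can be recovered with any single observable left out
        \Lambda \approx f_i(X_{\neg i}) \quad \text{for each } i

    The excerpt frames Condensation as supplying metrics for selecting among latent variable models, and the correspondence results listed in the outline relate approximate condensations back to (weak or strong) natural latents satisfying conditions like these.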

    18 min
  2. 8H AGO

    “Maybe there’s a pattern here?” by dynomight

    1. It occurred to me that if I could invent a machine—a gun—which could by its rapidity of fire, enable one man to do as much battle duty as a hundred, that it would, to a large extent supersede the necessity of large armies, and consequently, exposure to battle and disease [would] be greatly diminished. Richard Gatling (1861)

    2. In 1923, Hermann Oberth published The Rocket to Planetary Spaces, later expanded as Ways to Space Travel. This showed that it was possible to build machines that could leave Earth's atmosphere and reach orbit. He described the general principles of multiple-stage liquid-fueled rockets, solar sails, and even ion drives. He proposed sending humans into space, building space stations and satellites, and travelling to other planets. The idea of space travel became popular in Germany. Swept up by these ideas, in 1927, Johannes Winkler, Max Valier, and Willy Ley formed the Verein für Raumschiffahrt (VfR) (Society for Space Travel) in Breslau (now Wrocław, Poland). This group rapidly grew to several hundred members. Several participated as advisors of Fritz Lang's The Woman in the Moon, and the VfR even began publishing their own journal. In 1930, the VfR was granted permission to [...]

    Outline:
    (00:09) 1.
    (00:36) 2.
    (03:55) 3.
    (06:09) 4.
    (10:33) 5.
    (11:41) 6.

    First published: March 4th, 2026
    Source: https://www.lesswrong.com/posts/TjcvjwaDsuea8bmbR/maybe-there-s-a-pattern-here

    Narrated by TYPE III AUDIO.

    Images from the article: Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.

    15 min
  3. 11H AGO

    “Is GDP a Kind of Factory? d Draft” by Benquo

    In 2021, economists Arvind Subramanian, Justin Sandefur, and Dev Patel announced that poor countries had finally started catching up to rich ones, vindicating the Solow growth model's prediction of "convergence." Now they say the rumors of Solow convergence's vindication were premature; the convergence trend reversed and poor countries are falling behind again. David Oks, writing about their retraction, argues that the whole convergence episode was a mirage produced by the Chinese commodities boom. China's industrial buildout created massive demand for raw materials. Poor countries that exported copper, soybeans, iron, and oil experienced a surge of income. When Chinese demand slowed in the mid-2010s, their growth collapsed. Oks is probably right about the proximate cause, but the convergence debate is asking a malformed question. The way economists measure convergence doesn't correspond to the abstractions Solow uses to justify the intuitions behind his model. And in the rare case where economists have bothered to measure a quantity relevant to the Solow growth model, they find convergence after all. [...] A system that requires people to work longer hours to afford housing, creates jobs to absorb those hours, and counts both the housing appreciation and the job output towards GDP growth, will [...]

    First published: March 4th, 2026
    Source: https://www.lesswrong.com/posts/xCWiFGezwMPswZ6Ea/is-gdp-a-kind-of-factory-d-draft

    Narrated by TYPE III AUDIO.
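    For context on what "measuring convergence" conventionally means (standard growth-econometrics background, not something taken from the post): the usual test regresses each country's average per-capita growth rate over a period on its initial income level,

        % beta-convergence regression: beta < 0 is read as poor countries catching up
        g_i = \alpha + \beta \, \ln(y_{i,0}) + \varepsilon_i

    where g_i is country i's average growth of income per person and y_{i,0} its income per person at the start of the period. Benquo's argument, as excerpted, is that this kind of measurement does not line up with the abstractions Solow actually uses.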

    9 min
  4. 14H AGO

    “Mass surveillance, red lines, and a crazy weekend” by Boaz Barak

    [These are my own opinions, and not representing OpenAI. Cross-posted on windowsontheory.]

    AI has so many applications, and AI companies have limited resources and attention span. Hence if it was up to me, I'd prefer we focus on applications that are purely beneficial — science, healthcare, education — or even commercial, before working on anything related to weapons or spying. If someone has to do it, I'd prefer it not to be my own company. Alas, we can't always get what we want. This is a long-ish post, but the TL;DR is: I believe that harm to democracy is one of the most important risks of AI, and one that is not sufficiently highlighted. While I wish it would have proceeded under different circumstances, the conditions in the deal signed between OpenAI and the DoW, as well as the publicity that accompanied it, provide a chance to move forward on this issue, and make using AI to collect, analyze, or de-anonymize people's data at mass scale a risk that we track in a similar manner to other risks such as cybersecurity and bioweapons. It is also too soon to "declare victory." The true test of this deal [...]

    Outline:
    (01:44) Country of IRS agents in a datacenter
    (04:58) OpenAI's deal with the Department of War
    (08:10) Can we make lemonade out of this lemon?

    First published: March 4th, 2026
    Source: https://www.lesswrong.com/posts/zombjEubpz6pcPPHL/mass-surveillance-red-lines-and-a-crazy-weekend

    Narrated by TYPE III AUDIO.

    9 min
  5. 14H AGO

    “Sacred values of future AIs” by Cleo Nardo

    Consider a future with many diverse AIs that need to coordinate with each other, or at least coexist without conflict. Such AIs would need shared values they can coordinate around. According to Hanson's theory, groups of diverse agents facing coordination pressure will tend to sacralize some shared value — seeing it in "far mode" so they can see it together. Unfortunately, this makes them systematically worse at making decisions about these things. This model suggests three claims: Helpfulness, harmlessness, and honesty (HHH) will be good candidates for sacralization. The sacralisation of HHH would be bad. We can avoid the sacralisation of these values. I'm not confident any of these claims are true. They factor through three assumptions: (i) Hanson's model of human sociology is correct, (ii) the model applies equally well to future AIs, and (iii) instilling HHH values into AIs went somewhat well. Read this post as an exploration of the idea, not a confident prediction. There's a details box here with the title "Robin Hanson's Theory of the Sacred". The box contents are omitted from this narration.

    HHH values will be good candidates for sacralization

    Hanson collects 62 correlates of things people treat as sacred (democracy, medicine [...]

    Outline:
    (01:27) HHH values will be good candidates for sacralization
    (06:07) The sacralisation of HHH would be bad.
    (08:25) We can avoid the sacralisation of HHH
    (10:22) Appendix: Proposed Claude constitution

    First published: March 4th, 2026
    Source: https://www.lesswrong.com/posts/sjeqDKhDHgu3sxrSq/sacred-values-of-future-ais

    Narrated by TYPE III AUDIO.

    11 min
  6. 16H AGO

    “Physics of RL: Toy scaling laws for the emergence of reward-seeking” by AlexMeinke

    TL;DR: When is or isn't reward the optimization target? I use a mathematical toy model to reason about when RL should select for reward-seeking reasoning as opposed to behaviors that achieve high reward without thinking about reward.

    Hypothesis: More diverse RL increases the likelihood that reward-seeking emerges. In my toy model, there are scaling laws for the emergence of reward-seeking: as the prior probability of reward-seeking reasoning decreases, RL diversity can be increased enough that reward-seeking still dominates after training. The transition boundary is a ~straight line on a log-log plot.

    Opinion: To get to a hard "science of scheming", we should study the "physics of RL dynamics".

    Motivation

    Why would RL training lead to reward-seeking reasoning, scheming reasoning, power-seeking reasoning, or any other reasoning patterns for that matter? The simplest hypothesis is the behavioral selection model. In a nutshell, a cognitive pattern that correlates with high reward will increase throughout RL.[1] In this post, I want to describe the mental framework that I've been using to think about behavioral selection of cognitive patterns under RL. I will use reward-seeking as my primary example, because it is always behaviorally fit under RL[2] and is a necessary condition for deceptive [...]

    Outline:
    (00:59) Motivation
    (02:38) A mathematical toy model for behavioral selection
    (05:28) Warm-up Examples
    (05:41) Single-Environment
    (07:48) Homogenous Coupling
    (09:32) Example Scaling Laws
    (11:11) Heterogenous Coupling
    (15:31) Takeaways

    The original text contained 6 footnotes which were omitted from this narration.

    First published: March 4th, 2026
    Source: https://www.lesswrong.com/posts/9FH49ZgJFW4WtbxLi/physics-of-rl-toy-scaling-laws-for-the-emergence-of-reward

    Narrated by TYPE III AUDIO.

    Images from the article: Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.
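    To make the behavioral-selection intuition concrete before listening, here is a small illustrative sketch. It is my own construction rather than the toy model from the post: it treats RL's selection over cognitive patterns as a simple Bayesian update in which reward-seeking earns reward in every environment while a specialized heuristic only reliably earns reward in its own environment. The function name and the assumed per-environment fitness value are invented for the example.

        # Illustrative sketch, not the post's actual model: selection over two cognitive
        # patterns, "reward-seeking" vs. a specialized heuristic, treated as a Bayesian
        # update across n_envs diverse training environments.

        def posterior_reward_seeking(prior_rs: float, n_envs: int, fit: float = 0.9) -> float:
            """Posterior weight on reward-seeking reasoning after training.

            prior_rs -- prior probability of reward-seeking reasoning
            n_envs   -- number of distinct environments (a stand-in for RL diversity)
            fit      -- assumed chance a specialized heuristic still earns reward in an
                        environment it was not specialized for
            """
            # Reward-seeking is behaviorally fit everywhere, so its likelihood is 1 per environment.
            weight_rs = prior_rs * 1.0
            # The specialized heuristic earns reward in its own environment but only with
            # probability `fit` in each of the other n_envs - 1 environments.
            weight_other = (1.0 - prior_rs) * fit ** (n_envs - 1)
            return weight_rs / (weight_rs + weight_other)

        if __name__ == "__main__":
            # Sweep the prior and the diversity to see where reward-seeking ends up dominant.
            for prior in (1e-1, 1e-3, 1e-6):
                for n_envs in (1, 10, 100, 1000):
                    post = posterior_reward_seeking(prior, n_envs)
                    print(f"prior={prior:.0e}  n_envs={n_envs:4d}  posterior={post:.3f}")

    Even this crude version reproduces the qualitative direction the post describes: more diverse training pushes the posterior toward reward-seeking, and a lower prior on reward-seeking requires more diversity to overcome, though the shape of the transition boundary here should not be read as the post's scaling law.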

    17 min
  7. 1D AGO

    “Mass Surveillance w/ LLMs is the Default Outcome. Contracts Won’t Change That.” by Logan Riggs

    What's the best case scenario regarding OpenAI's contract w/ the Department of War (DoW)?
    - We have access to the full contract
    - It's airtight
    - OAI's engineers are on top of things in case the DoW breaks the contract
    - There's actual teeth for violations

    But even then, the DoW can simply switch vendors. Use Gemini. Use Grok. If these models aren't capable enough, then just wait a year. In the past[1], the DoW has purchased commercial location data on Americans w/o warrants. In recent negotiations,[2] the DoW wanted to use Claude to analyze existing data. In the future, well, I don't think they'll have a change of heart on the subject. The only viable option to stop this is to: Push for Legislation

    The main problem is the Third Party Doctrine: a 1979 Supreme Court case ruling that you have no privacy regarding 3rd party data. Now it's 2026, where every app on your phone is considered 3rd party data, including your private messages and location. As long as the government purchases it, it's legal.[3] However, there have been several attempts to fix this, such as when Senators Ron Wyden (D) & Rand Paul (R) introduced the The Fourth [...]

    The original text contained 4 footnotes which were omitted from this narration.

    First published: March 3rd, 2026
    Source: https://www.lesswrong.com/posts/drMm8QXsWYiPj7KQZ/mass-surveillance-w-llms-is-the-default-outcome-contracts

    Narrated by TYPE III AUDIO.

    4 min
  8. 1D AGO

    “OpenAI’s surveillance language has many potential loopholes and they can do better” by Tom Smith

    (The author is not affiliated with the Department of War or any major AI company.)

    There's a lot of disagreement about the new surveillance language in the OpenAI–Department of War agreement. Some people think it's a significant improvement over the previous language.[1] Others think it patches some issues but still leaves enough loopholes to not make a material difference. Reasonable people disagree about how a court will interpret the language, if push comes to shove. But here's something that should be much easier to agree on: the language as written is ambiguous, and OpenAI can do better. I don't think even OpenAI's leadership can be confident about how this language would be interpreted in court, given the wording used and the short amount of time they've had to draft it. People with less context and fewer resources will find it even harder to know how all the ambiguities would be resolved. Some of the ambiguities seem like they could have been easily clarified despite the small amount of time available, which makes it concerning that they weren't. But more importantly, it should certainly be possible and worthwhile to spend more time on clarifying the language now. Employees are well within [...]

    Outline:
    (01:27) What the new language says
    (02:46) Ambiguities
    (07:45) Why this isn't unreasonable nit-picking
    (11:04) Some of this would be easy to clarify
    (13:09) OpenAI can do much better

    The original text contained 8 footnotes which were omitted from this narration.

    First published: March 4th, 2026
    Source: https://www.lesswrong.com/posts/FSGfzDLFdFtRDADF4/openai-s-surveillance-language-has-many-potential-loopholes

    Narrated by TYPE III AUDIO.

    14 min
