LessWrong (30+ Karma)

LessWrong

Audio narrations of LessWrong posts.

  1. 11 hours ago

    “Research Reflections” by abramdemski

    Over the decade I've spent working on AI safety, I've felt an overall trend of divergence; research partnerships starting out with a sense of a common project, then slowly drifting apart over time. It has been frequently said that AI safety is a pre-paradigmatic field. This (with, perhaps, other contributing factors) means researchers have to optimize for their own personal sense of progress, based on their own research taste. In my experience, the tails come apart; eventually, two researchers are going to have some deep disagreement in matters of taste, which sends them down different paths. Until the spring of this year, that is. At the Agent Foundations conference at CMU,[1] something seemed to shift, subtly at first. After I gave a talk -- roughly the same talk I had been giving for the past year -- I had an excited discussion about it with Scott Garrabrant. Looking back, it wasn't so different from previous chats we had had, but the impact was different; it felt more concrete, more actionable, something that really touched my research rather than remaining hypothetical. In the subsequent weeks, discussions with my usual circle of colleagues[2] took on a different character -- somehow [...] The original text contained 3 footnotes which were omitted from this narration. --- First published: November 4th, 2025 Source: https://www.lesswrong.com/posts/4gosqCbFhtLGPojMX/research-reflections --- Narrated by TYPE III AUDIO.

    5 min
  2. 18 hours ago

    “The Zen Of Maxent As A Generalization Of Bayes Updates” by johnswentworth, David Lorell

    Audio note: this article contains 61 uses of LaTeX notation, so the narration may be difficult to follow. There's a link to the original text in the episode description. Jaynes’ Widget Problem[1]: How Do We Update On An Expected Value? Mr A manages a widget factory. The factory produces widgets of three colors - red, yellow, green - and part of Mr A's job is to decide how many widgets to paint each color. He wants to match today's color mix to the mix of orders the factory will receive today, so he needs to make predictions about how many of today's orders will be for red vs yellow vs green widgets. The factory will receive some unknown number of orders for each color throughout the day - $N_r$ red, $N_y$ yellow, and $N_g$ green orders. For simplicity, we will assume that Mr A starts out with a prior distribution $P[N_r, N_y, N_g]$ under which: the number of orders for each color is independent of the other colors, i.e. $P[N_r, N_y, N_g] = P[N_r]\,P[N_y]\,P[N_g]$; and the number of orders for each color is uniform between 0 and 100: $P[N_i = n_i] = \frac{1}{100}\,\mathbb{I}[0 \leq n_i < 100]$.[2] … and then [...] (A short numerical sketch of this kind of maxent update follows the episode list below.) --- Outline: (00:24) Jaynes' Widget Problem: How Do We Update On An Expected Value? (03:20) Enter Maxent (06:02) Some Special Cases To Check Our Intuition (06:35) No Information (07:27) Bayes Updates (09:27) Relative Entropy and Priors (13:20) Recap The original text contained 2 footnotes which were omitted from this narration. --- First published: November 4th, 2025 Source: https://www.lesswrong.com/posts/qEWWrADpDR8oGzwpf/the-zen-of-maxent-as-a-generalization-of-bayes-updates --- Narrated by TYPE III AUDIO.

    14 min
  3. 20 hours ago

    “Comparative advantage & AI” by Simon Lermen

    I was recently saddened to see that Seb Krier – who's a lead on the Google DeepMind governance team – created a simple website apparently endorsing the idea that Ricardian comparative advantage will provide humans with jobs in the time of ASI. The argument that comparative advantage means advanced AI is automatically safe is pretty old and has been addressed multiple times. For the record, I think this is a bad argument, and it's not useful to think about AI risk through comparative advantage. Seb Krier's web app, allowing labor allocation by dragging and dropping humans or AIs into fields of work. The Argument: The law of comparative advantage says that two sides of a trade can both profit from each other. Both can be better off in the end, even if one side is less productive at everything compared to the other side. (A toy worked example of this law follows the episode list below.) The naive idea some people have is: humans are going to be less productive than AI, but because of this law humans will remain important, will keep their jobs and get paid. Things will be fine, and this is a key reason why we shouldn't worry so much about AI risk. Even if you're less productive [...] --- First published: November 3rd, 2025 Source: https://www.lesswrong.com/posts/tBr4AtpPmwhgfG4Mw/comparative-advantage-and-ai --- Narrated by TYPE III AUDIO. --- Images from the article: Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.

    7 min
  4. 22 hours ago

    “Crime and Punishment #1” by Zvi

    It's been a long time coming that I spin off Crime into its own roundup series. This is only about Ordinary Decent Crime. High crimes are not covered here. Table of Contents Perception Versus Reality. The Case Violent Crime is Up Actually. Threats of Punishment. Property Crime Enforcement is Broken. The Problem of Disorder. Extreme Speeding as Disorder. Enforcement and the Lack Thereof. Talking Under The Streetlamp. The Fall of Extralegal and Illegible Enforcement. In America You Can Usually Just Keep Their Money. Police. Probation. Genetic Databases. Marijuana. The Economics of Fentanyl. Jails. Criminals. Causes of Crime. Causes of Violence. Homelessness. Yay Trivial Inconveniences. San Francisco. Closing Down San Francisco. A San Francisco Dispute. Cleaning Up San Francisco. Portland. Those Who Do Not Help Themselves. Solving for the Equilibrium (1). Solving for the Equilibrium (2). Lead. Law & Order. Look Out. Perception Versus Reality A lot of the impact of crime is based on the perception of crime. The [...] --- Outline: (00:20) Perception Versus Reality (05:00) The Case Violent Crime is Up Actually (06:10) Threats of Punishment (07:03) Property Crime Enforcement is Broken (12:13) The Problem of Disorder (14:39) Extreme Speeding as Disorder (15:57) Enforcement and the Lack Thereof (20:24) Talking Under The Streetlamp (23:54) The Fall of Extralegal and Illegible Enforcement (25:18) In America You Can Usually Just Keep Their Money (27:29) Police (37:31) Probation (40:55) Genetic Databases (43:04) Marijuana (48:28) The Economics of Fentanyl (50:59) Jails (55:03) Criminals (55:39) Causes of Crime (56:16) Causes of Violence (57:35) Homelessness (58:27) Yay Trivial Inconveniences (59:08) San Francisco (01:04:07) Closing Down San Francisco (01:05:30) A San Francisco Dispute (01:09:13) Cleaning Up San Francisco (01:13:05) Portland (01:13:15) Those Who Do Not Help Themselves (01:15:15) Solving for the Equilibrium (1) (01:20:15) Solving for the Equilibrium (2) (01:20:43) Lead (01:22:18) Law & Order (01:22:58) Look Out --- First published: November 3rd, 2025 Source: https://www.lesswrong.com/posts/tt9JKubsa8jsCsfD5/crime-and-punishment-1-1 --- Narrated by TYPE III AUDIO. --- Images from the article: Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.

    1 hr 24 min
  5. 1 day ago

    “Leaving Open Philanthropy, going to Anthropic” by Joe Carlsmith

    (Audio version, read by the author, here, or search for "Joe Carlsmith Audio" on your podcast app.) Last Friday was my last day at Open Philanthropy. I’ll be starting a new role at Anthropic in mid-November, helping with the design of Claude's character/constitution/spec. This post reflects on my time at Open Philanthropy, and it goes into more detail about my perspective and intentions with respect to Anthropic – including some of my takes on AI-safety-focused people working at frontier AI companies. (I shared this post with Open Phil and Anthropic comms before publishing, but I’m speaking only for myself and not for Open Phil or Anthropic.) On my time at Open Philanthropy: I joined Open Philanthropy full-time at the beginning of 2019.[1] At the time, the organization was starting to spin up a new “Worldview Investigations” team, aimed at investigating and documenting key beliefs driving the organization's cause prioritization – and with a special focus on how the organization should think about the potential impact at stake in work on transformatively powerful AI systems.[2] I joined (and eventually: led) the team devoted to this effort, and it's been an amazing project to be a part of. I remember [...] --- Outline: (00:51) On my time at Open Philanthropy (08:11) On going to Anthropic The original text contained 25 footnotes which were omitted from this narration. --- First published: November 3rd, 2025 Source: https://www.lesswrong.com/posts/3ucdmfGKMGPcibmF6/leaving-open-philanthropy-going-to-anthropic --- Narrated by TYPE III AUDIO.

    32 min
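
A minimal numerical sketch of the maxent move described in "The Zen Of Maxent As A Generalization Of Bayes Updates" above. This is not code from the post: the constraint value (an assumed expected value of 30 red orders) is made up for illustration, and the sketch shows the standard minimum-relative-entropy (exponential tilting) update of the uniform prior on counts 0 to 99.

```python
import numpy as np

# Possible order counts for one color and the uniform prior P[N = n] = 1/100.
n = np.arange(100)
prior = np.full(100, 1 / 100)

def tilt(lam):
    # Minimum-relative-entropy posterior under a mean constraint takes the
    # exponential-family form q(n) proportional to prior(n) * exp(lam * n).
    w = prior * np.exp(lam * n)
    return w / w.sum()

def maxent_update(target_mean, lo=-1.0, hi=1.0, iters=60):
    # Bisect on the Lagrange multiplier lam until E_q[N] matches the target
    # (E_q[N] is monotone increasing in lam, so bisection suffices).
    for _ in range(iters):
        mid = (lo + hi) / 2
        if tilt(mid) @ n < target_mean:
            lo = mid
        else:
            hi = mid
    return tilt((lo + hi) / 2)

q = maxent_update(30.0)   # update on the assumed information E[N_red] = 30
print(q @ n)              # ~30.0: the prior mean of 49.5 shifts down smoothly
```

With a hard observation instead of an expected-value constraint, the same machinery reduces to an ordinary Bayes update, which is the special case the episode's outline checks.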
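And a toy worked example of the law of comparative advantage discussed in "Comparative advantage & AI" above. The productivity numbers are invented for illustration and are not from the post; they only restate the textbook claim the post goes on to critique: even when one party (the AI) is more productive at everything, reallocating work along comparative advantage can raise output of both goods.

```python
# Output per hour for each agent (made-up numbers): the AI is strictly better at both tasks.
ai_rate    = {"widgets": 10, "reports": 8}
human_rate = {"widgets": 1,  "reports": 4}
HOURS = 10  # each agent works 10 hours

def totals(ai_widget_hours, human_widget_hours):
    # Total output when each agent splits its hours between widgets and reports.
    widgets = (ai_rate["widgets"] * ai_widget_hours
               + human_rate["widgets"] * human_widget_hours)
    reports = (ai_rate["reports"] * (HOURS - ai_widget_hours)
               + human_rate["reports"] * (HOURS - human_widget_hours))
    return widgets, reports

print(totals(5, 5))  # no specialization: (55, 60)
print(totals(7, 0))  # human specializes in reports, AI leans into widgets: (70, 64)
```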
