LessWrong (30+ Karma)

LessWrong

Audio narrations of LessWrong posts.

  1. 3 HOURS AGO

    “Crime and Punishment #1” by Zvi

    It's been a long time coming that I spin off Crime into its own roundup series. This is only about Ordinary Decent Crime. High crimes are not covered here.

    Table of Contents: Perception Versus Reality. The Case Violent Crime is Up Actually. Threats of Punishment. Property Crime Enforcement is Broken. The Problem of Disorder. Extreme Speeding as Disorder. Enforcement and the Lack Thereof. Talking Under The Streetlamp. The Fall of Extralegal and Illegible Enforcement. In America You Can Usually Just Keep Their Money. Police. Probation. Genetic Databases. Marijuana. The Economics of Fentanyl. Jails. Criminals. Causes of Crime. Causes of Violence. Homelessness. Yay Trivial Inconveniences. San Francisco. Closing Down San Francisco. A San Francisco Dispute. Cleaning Up San Francisco. Portland. Those Who Do Not Help Themselves. Solving for the Equilibrium (1). Solving for the Equilibrium (2). Lead. Law & Order. Look Out.

    Perception Versus Reality
    A lot of the impact of crime is based on the perception of crime. The [...]

    ---

    Outline:
    (00:20) Perception Versus Reality
    (05:00) The Case Violent Crime is Up Actually
    (06:10) Threats of Punishment
    (07:03) Property Crime Enforcement is Broken
    (12:13) The Problem of Disorder
    (14:39) Extreme Speeding as Disorder
    (15:57) Enforcement and the Lack Thereof
    (20:24) Talking Under The Streetlamp
    (23:54) The Fall of Extralegal and Illegible Enforcement
    (25:18) In America You Can Usually Just Keep Their Money
    (27:29) Police
    (37:31) Probation
    (40:55) Genetic Databases
    (43:04) Marijuana
    (48:28) The Economics of Fentanyl
    (50:59) Jails
    (55:03) Criminals
    (55:39) Causes of Crime
    (56:16) Causes of Violence
    (57:35) Homelessness
    (58:27) Yay Trivial Inconveniences
    (59:08) San Francisco
    (01:04:07) Closing Down San Francisco
    (01:05:30) A San Francisco Dispute
    (01:09:13) Cleaning Up San Francisco
    (01:13:05) Portland
    (01:13:15) Those Who Do Not Help Themselves
    (01:15:15) Solving for the Equilibrium (1)
    (01:20:15) Solving for the Equilibrium (2)
    (01:20:43) Lead
    (01:22:18) Law & Order
    (01:22:58) Look Out

    ---

    First published: November 3rd, 2025
    Source: https://www.lesswrong.com/posts/tt9JKubsa8jsCsfD5/crime-and-punishment-1-1

    ---

    Narrated by TYPE III AUDIO.

    ---

    Images from the article: Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.

    1h24min
  2. 5 HOURS AGO

    “Leaving Open Philanthropy, going to Anthropic” by Joe Carlsmith

    (Audio version, read by the author, here, or search for "Joe Carlsmith Audio" on your podcast app.)

    Last Friday was my last day at Open Philanthropy. I’ll be starting a new role at Anthropic in mid-November, helping with the design of Claude's character/constitution/spec. This post reflects on my time at Open Philanthropy, and it goes into more detail about my perspective and intentions with respect to Anthropic – including some of my takes on AI-safety-focused people working at frontier AI companies. (I shared this post with Open Phil and Anthropic comms before publishing, but I’m speaking only for myself and not for Open Phil or Anthropic.)

    On my time at Open Philanthropy
    I joined Open Philanthropy full-time at the beginning of 2019.[1] At the time, the organization was starting to spin up a new “Worldview Investigations” team, aimed at investigating and documenting key beliefs driving the organization's cause prioritization – and with a special focus on how the organization should think about the potential impact at stake in work on transformatively powerful AI systems.[2] I joined (and eventually: led) the team devoted to this effort, and it's been an amazing project to be a part of. I remember [...]

    ---

    Outline:
    (00:51) On my time at Open Philanthropy
    (08:11) On going to Anthropic

    The original text contained 25 footnotes which were omitted from this narration.

    ---

    First published: November 3rd, 2025
    Source: https://www.lesswrong.com/posts/3ucdmfGKMGPcibmF6/leaving-open-philanthropy-going-to-anthropic

    ---

    Narrated by TYPE III AUDIO.

    32min
  3. 6 HOURS AGO

    “How and why you should make your home smart (it’s cheap and secure!)” by Mikhail Samin

    Your average day starts with an alarm on your phone. Sometimes, you wake up a couple of minutes before it sounds. Sometimes, you find the button to snooze it. Sometimes, you’re already on the phone and it appears as a notification. But when you finally stop it, the lights in your room turn on and you start your day. You walk out of your room. A presence sensor detects your motion, and a bulb in a cute little bamboo lamp from IKEA outside your room lights up. You go downstairs, into the living room/kitchen/workspace area. As you come in, 20 different lights turn on in perfect sync. It is very satisfying. You put some buckwheat to boil. Go upstairs to load a washing machine. Go through your morning routine (when you stop by your room, the lights turn on as you enter). You go back downstairs. The lights are still on: You’ve not left the area for long enough. You eat your buckwheat with oat milk, open the laptop, and do some work. An hour later, a notification appears: the washing machine has finished washing. (Depending on whether you tapped an NFC tag next to the washing machine with [...]

    ---

    Outline:
    (04:28) 1. Home Assistant
    (06:52) 2. Local network
    (08:16) 3. Lights
    (10:02) 4. Sensors and actuators
    (13:30) 5. Automations

    The original text contained 1 footnote which was omitted from this narration.

    ---

    First published: November 3rd, 2025
    Source: https://www.lesswrong.com/posts/eoFuko87vWhARSCNE/how-and-why-you-should-make-your-home-smart-it-s-cheap-and

    ---

    Narrated by TYPE III AUDIO.

    ---

    Images from the article: Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.

    15min
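    The smart-home episode above is a how-to built around Home Assistant, presence sensors, lights, and automations. As a rough illustration of the trigger-action logic it describes (a motion sensor fires, a light turns on), here is a minimal Python sketch against Home Assistant's REST API. The host, access token, and entity IDs are hypothetical placeholders, not taken from the post, and the post itself presumably defines such rules as native Home Assistant automations rather than external scripts.

    ```python
    """Minimal sketch: turn on a light when a motion sensor reports motion,
    using the Home Assistant REST API. All names below are placeholders."""
    import time
    import requests

    HA_URL = "http://homeassistant.local:8123"   # assumed default Home Assistant address
    TOKEN = "YOUR_LONG_LIVED_ACCESS_TOKEN"       # created under your Home Assistant user profile
    HEADERS = {"Authorization": f"Bearer {TOKEN}", "Content-Type": "application/json"}

    MOTION_SENSOR = "binary_sensor.hallway_motion"  # hypothetical entity IDs
    LIGHT = "light.hallway_lamp"

    def motion_detected() -> bool:
        # GET /api/states/<entity_id> returns the entity's current state as JSON.
        resp = requests.get(f"{HA_URL}/api/states/{MOTION_SENSOR}", headers=HEADERS, timeout=5)
        resp.raise_for_status()
        return resp.json()["state"] == "on"  # binary sensors report "on"/"off"

    def turn_on_light() -> None:
        # POST /api/services/light/turn_on calls the light.turn_on service for the given entity.
        requests.post(
            f"{HA_URL}/api/services/light/turn_on",
            headers=HEADERS,
            json={"entity_id": LIGHT},
            timeout=5,
        ).raise_for_status()

    if __name__ == "__main__":
        # Crude polling loop for illustration only.
        while True:
            if motion_detected():
                turn_on_light()
            time.sleep(2)
    ```

    In a real setup, the polling loop would be replaced by an automation trigger defined inside Home Assistant itself, which reacts to state changes immediately instead of checking every few seconds.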
  4. 6 HOURS AGO

    “Publishing academic papers on transformative AI is a nightmare” by Jakub Growiec

    I am a professor of economics. Throughout my career, I have mostly worked on economic growth theory, which eventually brought me to the topic of transformative AI / AGI / superintelligence. Nowadays my work focuses mostly on the promises and threats of this emerging disruptive technology. Recently, together with Klaus Prettner, I wrote a paper on “The Economics of p(doom): Scenarios of Existential Risk and Economic Growth in the Age of Transformative AI”. We have presented it at multiple conferences and seminars, and it was always well received. We didn’t get any real pushback; instead, our research prompted a lot of interest and reflection (as was reported to me, including from conversations where I wasn’t involved). But our experience with publishing this paper in a journal is the polar opposite. To date, the paper has been desk-rejected (without peer review) 7 times. For example, Futures (a journal “for the interdisciplinary study of futures, visioning, anticipation and foresight”) justified their negative decision by writing: “while your results are of potential interest, the topic of your manuscript falls outside of the scope of this journal”. Until finally, to our excitement, it was for once sent out for review. But then came the [...]

    ---

    First published: November 3rd, 2025
    Source: https://www.lesswrong.com/posts/rmYj6PTBMm76voYLn/publishing-academic-papers-on-transformative-ai-is-a

    ---

    Narrated by TYPE III AUDIO.

    7min
  5. 7 HOURS AGO

    “The Unreasonable Effectiveness of Fiction” by Raelifin

    [Meta: This is Max Harms. I wrote a novel about China and AGI, which comes out today. This essay from my fiction newsletter has been slightly modified for LessWrong.]

    In the summer of 1983, Ronald Reagan sat down to watch the film War Games, starring Matthew Broderick as a teen hacker. In the movie, Broderick's character accidentally gains access to a military supercomputer with an AI that almost starts World War III. “The only winning move is not to play.” After watching the movie, Reagan, newly concerned with the possibility of hackers causing real harm, ordered a full national security review. The response: “Mr. President, the problem is much worse than you think.” Soon after, the Department of Defense revamped its cybersecurity policies, and the first federal directives and laws against malicious hacking were put in place. But War Games wasn't the only story to influence Reagan. His administration pushed for the Strategic Defense Initiative ("Star Wars") in part, perhaps, because its central technology—a laser that shoots down missiles—resembles the core technology of the 1940 spy film Murder in the Air, in which Reagan played the lead. Reagan was apparently such a superfan of The Day the Earth Stood Still [...]

    ---

    Outline:
    (05:05) AI in Particular
    (06:45) What's Going On Here?
    (11:19) Authorial Responsibility

    The original text contained 10 footnotes which were omitted from this narration.

    ---

    First published: November 3rd, 2025
    Source: https://www.lesswrong.com/posts/uQak7ECW2agpHFsHX/the-unreasonable-effectiveness-of-fiction

    ---

    Narrated by TYPE III AUDIO.

    ---

    Images from the article: Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.

    15min
  6. 7 HOURS AGO

    “What’s up with Anthropic predicting AGI by early 2027?” by ryan_greenblatt

    As far as I'm aware, Anthropic is the only AI company with official AGI timelines[1]: they expect AGI by early 2027. In their recommendations (from March 2025) to the OSTP for the AI action plan, they say:

    As our CEO Dario Amodei writes in 'Machines of Loving Grace', we expect powerful AI systems will emerge in late 2026 or early 2027. Powerful AI systems will have the following properties: Intellectual capabilities matching or exceeding that of Nobel Prize winners across most disciplines—including biology, computer science, mathematics, and engineering. [...]

    They often describe this capability level as a "country of geniuses in a datacenter". This prediction is repeated elsewhere, and Jack Clark confirms that something like this remains Anthropic's view (as of September 2025). Of course, just because this is Anthropic's official prediction[2] doesn't mean that all or even most employees at Anthropic share the same view.[3] However, I do think we can reasonably say that Dario Amodei, Jack Clark, and Anthropic itself are all making this prediction.[4] I think the creation of transformatively powerful AI systems—systems as capable or more capable than Anthropic's notion of powerful AI—is plausible in 5 years [...]

    ---

    Outline:
    (02:27) What does powerful AI mean?
    (08:40) Earlier predictions
    (11:19) A proposed timeline that Anthropic might expect
    (19:10) Why powerful AI by early 2027 seems unlikely to me
    (19:37) Trends indicate longer
    (21:48) My rebuttals to arguments that trend extrapolations will underestimate progress
    (26:14) Naively trend extrapolating to full automation of engineering and then expecting powerful AI just after this is probably too aggressive
    (30:08) What I expect
    (32:12) What updates should we make in 2026?
    (32:17) If something like my median expectation for 2026 happens
    (34:07) If something like the proposed timeline (with powerful AI in March 2027) happens through June 2026
    (35:25) If AI progress looks substantially slower than what I expect
    (36:09) If AI progress is substantially faster than I expect, but slower than the proposed timeline (with powerful AI in March 2027)
    (36:51) Appendix: deriving a timeline consistent with Anthropic's predictions

    The original text contained 94 footnotes which were omitted from this narration.

    ---

    First published: November 3rd, 2025
    Source: https://www.lesswrong.com/posts/gabPgK9e83QrmcvbK/what-s-up-with-anthropic-predicting-agi-by-early-2027-1

    ---

    Narrated by TYPE III AUDIO.

    ---

    Images from the article: Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.

    39min

About

Audio narrations of LessWrong posts.
