LessWrong (30+ Karma)

LessWrong

Audio narrations of LessWrong posts.

  1. 1 hour ago

    “Leaving Open Philanthropy, going to Anthropic” by Joe Carlsmith

    (Audio version, read by the author, here, or search for "Joe Carlsmith Audio" on your podcast app.)

    Last Friday was my last day at Open Philanthropy. I'll be starting a new role at Anthropic in mid-November, helping with the design of Claude's character/constitution/spec. This post reflects on my time at Open Philanthropy, and it goes into more detail about my perspective and intentions with respect to Anthropic – including some of my takes on AI-safety-focused people working at frontier AI companies. (I shared this post with Open Phil and Anthropic comms before publishing, but I'm speaking only for myself and not for Open Phil or Anthropic.)

    On my time at Open Philanthropy

    I joined Open Philanthropy full-time at the beginning of 2019.[1] At the time, the organization was starting to spin up a new "Worldview Investigations" team, aimed at investigating and documenting key beliefs driving the organization's cause prioritization – and with a special focus on how the organization should think about the potential impact at stake in work on transformatively powerful AI systems.[2] I joined (and eventually: led) the team devoted to this effort, and it's been an amazing project to be a part of. I remember [...]

    Outline:
    (00:51) On my time at Open Philanthropy
    (08:11) On going to Anthropic

    The original text contained 25 footnotes which were omitted from this narration.

    First published: November 3rd, 2025
    Source: https://www.lesswrong.com/posts/3ucdmfGKMGPcibmF6/leaving-open-philanthropy-going-to-anthropic

    Narrated by TYPE III AUDIO.

    32 minutes
  2. 1 hour ago

    “How and why you should make your home smart (it’s cheap and secure!)” by Mikhail Samin

    Your average day starts with an alarm on your phone. Sometimes, you wake up a couple of minutes before it sounds. Sometimes, you find the button to snooze it. Sometimes, you're already on the phone and it appears as a notification. But when you finally stop it, the lights in your room turn on and you start your day.

    You walk out of your room. A presence sensor detects your motion, and a bulb in a cute little bamboo lamp from IKEA outside your room lights up. You go downstairs, into the living room/kitchen/workspace area. As you come in, 20 different lights turn on in perfect sync. It is very satisfying. You put some buckwheat on to boil. Go upstairs to load the washing machine. Go through your morning routine (when you stop by your room, the lights turn on as you enter). You go back downstairs. The lights are still on: you've not left the area for long enough. You eat your buckwheat with oat milk, open the laptop, and do some work. An hour later, a notification appears: the washing machine has finished washing. (Depending on whether you tapped an NFC tag next to the washing machine with [...]

    Outline:
    (04:28) 1. Home Assistant
    (06:52) 2. Local network
    (08:16) 3. Lights
    (10:02) 4. Sensors and actuators
    (13:30) 5. Automations

    The original text contained 1 footnote which was omitted from this narration.

    First published: November 3rd, 2025
    Source: https://www.lesswrong.com/posts/eoFuko87vWhARSCNE/how-and-why-you-should-make-your-home-smart-it-s-cheap-and

    Narrated by TYPE III AUDIO. (A minimal code sketch of the presence-sensor-to-light loop follows this entry.)

    15 minutes
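    The episode's recipe centers on Home Assistant, with presence sensors driving lights. As a hedged illustration of that sensor-to-actuator loop – not code from the post – here is a minimal Python sketch against Home Assistant's documented REST API. The hostname, token, and entity IDs are hypothetical placeholders, and a real setup would normally express this as a native Home Assistant automation rather than an external polling script.

      # Minimal sketch of a presence-triggered light via Home Assistant's REST API.
      # Hypothetical placeholders (not from the post): the hostname, the token,
      # and both entity IDs. Real setups would use a native HA automation instead.
      import time
      import requests

      HA_URL = "http://homeassistant.local:8123/api"  # Home Assistant's default port
      TOKEN = "YOUR_LONG_LIVED_ACCESS_TOKEN"          # created in your HA user profile
      HEADERS = {"Authorization": f"Bearer {TOKEN}"}

      def get_state(entity_id: str) -> str:
          """Read an entity's current state, e.g. 'on'/'off' for a presence sensor."""
          resp = requests.get(f"{HA_URL}/states/{entity_id}", headers=HEADERS, timeout=5)
          resp.raise_for_status()
          return resp.json()["state"]

      def call_service(domain: str, service: str, entity_id: str) -> None:
          """Invoke a service such as light.turn_on against a specific entity."""
          resp = requests.post(
              f"{HA_URL}/services/{domain}/{service}",
              headers=HEADERS,
              json={"entity_id": entity_id},
              timeout=5,
          )
          resp.raise_for_status()

      # Sync a (hypothetical) hallway lamp to a (hypothetical) presence sensor.
      while True:
          if get_state("binary_sensor.hallway_presence") == "on":
              call_service("light", "turn_on", "light.hallway_bamboo_lamp")
          else:
              call_service("light", "turn_off", "light.hallway_bamboo_lamp")
          time.sleep(2)

    Polling is the crudest possible design here; Home Assistant's own automations react to state-change events, which is what the post's Automations section describes.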
  3. 2 hours ago

    “Publishing academic papers on transformative AI is a nightmare” by Jakub Growiec

    I am a professor of economics. Throughout my career, I mostly worked on economic growth theory, and this eventually brought me to the topic of transformative AI / AGI / superintelligence. Nowadays my work focuses mostly on the promises and threats of this emerging disruptive technology.

    Recently, jointly with Klaus Prettner, we wrote a paper on “The Economics of p(doom): Scenarios of Existential Risk and Economic Growth in the Age of Transformative AI”. We have presented it at multiple conferences and seminars, and it was always well received. We didn't get any real pushback; instead, our research prompted a lot of interest and reflection (as was reported to me, also from conversations where I wasn't involved). But our experience with publishing this paper in a journal is the polar opposite. To date, the paper has been desk-rejected (without peer review) 7 times. For example, Futures—a journal “for the interdisciplinary study of futures, visioning, anticipation and foresight”—justified their negative decision by writing: “while your results are of potential interest, the topic of your manuscript falls outside of the scope of this journal”. Until finally, to our excitement, it was for once sent out for review. But then came the [...]

    First published: November 3rd, 2025
    Source: https://www.lesswrong.com/posts/rmYj6PTBMm76voYLn/publishing-academic-papers-on-transformative-ai-is-a

    Narrated by TYPE III AUDIO.

    7 minutes
  4. 2 hours ago

    “The Unreasonable Effectiveness of Fiction” by Raelifin

    [Meta: This is Max Harms. I wrote a novel about China and AGI, which comes out today. This essay from my fiction newsletter has been slightly modified for LessWrong.]

    In the summer of 1983, Ronald Reagan sat down to watch the film WarGames, starring Matthew Broderick as a teen hacker. In the movie, Broderick's character accidentally gains access to a military supercomputer with an AI that almost starts World War III. “The only winning move is not to play.” After watching the movie, Reagan, newly concerned with the possibility of hackers causing real harm, ordered a full national security review. The response: “Mr. President, the problem is much worse than you think.” Soon after, the Department of Defense revamped its cybersecurity policies, and the first federal directives and laws against malicious hacking were put in place.

    But WarGames wasn't the only story to influence Reagan. His administration pushed for the Strategic Defense Initiative ("Star Wars") in part, perhaps, because its central technology—a laser that shoots down missiles—resembles the core technology of the 1940 spy film Murder in the Air, which had Reagan as lead actor. Reagan was apparently such a superfan of The Day the Earth Stood Still [...]

    Outline:
    (05:05) AI in Particular
    (06:45) What's Going On Here?
    (11:19) Authorial Responsibility

    The original text contained 10 footnotes which were omitted from this narration.

    First published: November 3rd, 2025
    Source: https://www.lesswrong.com/posts/uQak7ECW2agpHFsHX/the-unreasonable-effectiveness-of-fiction

    Narrated by TYPE III AUDIO.

    15 minutes
  5. 3 hours ago

    “What’s up with Anthropic predicting AGI by early 2027?” by ryan_greenblatt

    As far as I'm aware, Anthropic is the only AI company with official AGI timelines[1]: they expect AGI by early 2027. In their recommendations (from March 2025) to the OSTP for the AI action plan, they say:

    As our CEO Dario Amodei writes in 'Machines of Loving Grace', we expect powerful AI systems will emerge in late 2026 or early 2027. Powerful AI systems will have the following properties: Intellectual capabilities matching or exceeding that of Nobel Prize winners across most disciplines—including biology, computer science, mathematics, and engineering. [...]

    They often describe this capability level as a "country of geniuses in a datacenter". This prediction is repeated elsewhere, and Jack Clark confirms that something like this remains Anthropic's view (as of September 2025). Of course, just because this is Anthropic's official prediction[2] doesn't mean that all or even most employees at Anthropic share the same view.[3] However, I do think we can reasonably say that Dario Amodei, Jack Clark, and Anthropic itself are all making this prediction.[4] I think the creation of transformatively powerful AI systems—systems as capable or more capable than Anthropic's notion of powerful AI—is plausible in 5 years [...]

    Outline:
    (02:27) What does powerful AI mean?
    (08:40) Earlier predictions
    (11:19) A proposed timeline that Anthropic might expect
    (19:10) Why powerful AI by early 2027 seems unlikely to me
    (19:37) Trends indicate longer
    (21:48) My rebuttals to arguments that trend extrapolations will underestimate progress
    (26:14) Naively trend extrapolating to full automation of engineering and then expecting powerful AI just after this is probably too aggressive
    (30:08) What I expect
    (32:12) What updates should we make in 2026?
    (32:17) If something like my median expectation for 2026 happens
    (34:07) If something like the proposed timeline (with powerful AI in March 2027) happens through June 2026
    (35:25) If AI progress looks substantially slower than what I expect
    (36:09) If AI progress is substantially faster than I expect, but slower than the proposed timeline (with powerful AI in March 2027)
    (36:51) Appendix: deriving a timeline consistent with Anthropic's predictions

    The original text contained 94 footnotes which were omitted from this narration.

    First published: November 3rd, 2025
    Source: https://www.lesswrong.com/posts/gabPgK9e83QrmcvbK/what-s-up-with-anthropic-predicting-agi-by-early-2027-1

    Narrated by TYPE III AUDIO. (An illustrative doubling-time calculation follows this entry.)

    39 minutes
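    The core of this episode is trend extrapolation, so it may help to see the arithmetic shape of the argument. This is a purely illustrative sketch: the growth multiple and doubling times below are hypothetical placeholders, not figures from the post.

      # Illustrative doubling-time extrapolation. All numbers are hypothetical
      # placeholders for the kind of trend reasoning the episode discusses.
      import math

      def months_until(multiple: float, doubling_months: float) -> float:
          """Months for a metric that doubles every `doubling_months` to grow `multiple`x."""
          return doubling_months * math.log2(multiple)

      # If the relevant capability metric had to grow 100x and doubled every
      # 6 months, naive extrapolation lands about 3.3 years out:
      print(months_until(multiple=100, doubling_months=6))  # ~39.9 months
      # A faster, 4-month doubling time instead gives about 2.2 years:
      print(months_until(multiple=100, doubling_months=4))  # ~26.6 months

    Whether "early 2027" looks aggressive then reduces to which doubling time and which required multiple the trend data actually support – which is the substance of the disagreement the episode lays out.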
  6. 5 hours ago

    “Trying to understand my own cognitive edge” by Wei Dai

    I applaud Eliezer for trying to make himself redundant, and think it's something every intellectually successful person should spend some time and effort on. I've been trying to understand my own "edge" or "moat" – the cognitive traits responsible for whatever success I've had – in the hope of finding a way to reproduce it in others. But I'm having trouble understanding a part of it, and I try to describe my puzzle here. For context, here's an earlier EAF comment explaining my history/background and what I do understand about how my cognition differs from others.[1]

    More Background

    In terms of raw intelligence, I think I'm smart but not world-class. My SAT was only 1440, 99th percentile at the time, or equivalent to about 135 IQ. (Intuitively this may be an underestimate, and I'm probably closer to 99.9th percentile in IQ.) I remember struggling to learn the GNFS factoring algorithm, and then meeting another intern at a conference who had not only mastered it in the same 3 months that I had, but was presenting an improvement on the SOTA. (It generally seemed like cryptography research was full of people much smarter than myself.) I also considered myself lazy or [...]

    Outline:
    (00:41) More Background
    (02:05) The Puzzle
    (04:10) A Plausible Answer?

    The original text contained 3 footnotes which were omitted from this narration.

    First published: November 3rd, 2025
    Source: https://www.lesswrong.com/posts/ophhRzHyt44qcjnkS/trying-to-understand-my-own-cognitive-edge

    Narrated by TYPE III AUDIO. (A one-line reproduction of the percentile-to-IQ conversion follows this entry.)

    8 minutes
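    The percentile-to-IQ conversion in the excerpt is an inverse-normal lookup on the standard IQ scale (assuming the usual mean 100, SD 15). A minimal sketch reproducing both figures mentioned:

      # Percentile -> IQ on the standard scale (mean 100, SD 15), standard library only.
      from statistics import NormalDist

      iq = NormalDist(mu=100, sigma=15)

      print(round(iq.inv_cdf(0.99), 1))   # 134.9 -> the "about 135 IQ" at 99th percentile
      print(round(iq.inv_cdf(0.999), 1))  # 146.4 -> the 99.9th-percentile level he suspects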
  7. 9 hours ago

    “Erasmus: Social Engineering at Scale” by Martin Sustrik

    Sofia Corradi, a.k.a. Mamma Erasmus (2020)

    When Sofia Corradi died on October 17th, the press was full of obituaries for the spiritual mother of Erasmus, the European student exchange programme, or, in the words of Umberto Eco, “that thing where a Catalan boy goes to study in Belgium, meets a Flemish girl, falls in love with her, marries her, and starts a European family.”

    Yet none of the obituaries I've seen stressed the most important and interesting aspect of the project: its unprecedented scale. The second-largest comparable programme, the Fulbright in the United States, sends around nine thousand students abroad each year. Erasmus sends 1.3 million. So far, approximately sixteen million people have taken part in the exchanges. That amounts to roughly 3% of the European population. And with ever-growing participation rates, the ratio is going to climb gradually higher. In short, this thing is HUGE.

    ***

    As with many other international projects conceived in Europe in the latter half of the 20th century, it is ostensibly about a technical matter — scholarships and the recognition of credits from foreign universities — but at its heart, it is a peace project. Corradi recounts a story from [...]

    First published: November 3rd, 2025
    Source: https://www.lesswrong.com/posts/gbfELiNWL4kFfivK8/erasmus-social-engineering-at-scale

    Narrated by TYPE III AUDIO.

    9 minutes

About

Audio narrations of LessWrong posts.
