LessWrong (30+ Karma)

LessWrong

Audio narrations of LessWrong posts.

  1. 46 minutes ago

    “The Unreasonable Effectiveness of Fiction” by Raelifin

    [Meta: This is Max Harms. I wrote a novel about China and AGI, which comes out today. This essay from my fiction newsletter has been slightly modified for LessWrong.] In the summer of 1983, Ronald Reagan sat down to watch the film War Games, starring Matthew Broderick as a teen hacker. In the movie, Broderick's character accidentally gains access to a military supercomputer with an AI that almost starts World War III. “The only winning move is not to play.” After watching the movie, Reagan, newly concerned with the possibility of hackers causing real harm, ordered a full national security review. The response: “Mr. President, the problem is much worse than you think.” Soon after, the Department of Defense revamped their cybersecurity policies and the first federal directives and laws against malicious hacking were put in place. But War Games wasn't the only story to influence Reagan. His administration pushed for the Strategic Defense Initiative ("Star Wars") in part, perhaps, because the central technology—a laser that shoots down missiles—resembles the core technology in the 1940 spy film Murder in the Air, in which Reagan played the lead. Reagan was apparently such a superfan of The Day the Earth Stood Still [...] --- Outline: (05:05) AI in Particular (06:45) What's Going On Here? (11:19) Authorial Responsibility The original text contained 10 footnotes which were omitted from this narration. --- First published: November 3rd, 2025 Source: https://www.lesswrong.com/posts/uQak7ECW2agpHFsHX/the-unreasonable-effectiveness-of-fiction --- Narrated by TYPE III AUDIO.

    15 minutes
  2. 1 hour ago

    “What’s up with Anthropic predicting AGI by early 2027?” by ryan_greenblatt

    As far as I'm aware, Anthropic is the only AI company with official AGI timelines[1]: they expect AGI by early 2027. In their recommendations (from March 2025) to the OSTP for the AI action plan they say: As our CEO Dario Amodei writes in 'Machines of Loving Grace', we expect powerful AI systems will emerge in late 2026 or early 2027. Powerful AI systems will have the following properties: Intellectual capabilities matching or exceeding that of Nobel Prize winners across most disciplines—including biology, computer science, mathematics, and engineering. [...] They often describe this capability level as a "country of geniuses in a datacenter". This prediction is repeated elsewhere and Jack Clark confirms that something like this remains Anthropic's view (as of September 2025). Of course, just because this is Anthropic's official prediction[2] doesn't mean that all or even most employees at Anthropic share the same view.[3] However, I do think we can reasonably say that Dario Amodei, Jack Clark, and Anthropic itself are all making this prediction.[4] I think the creation of transformatively powerful AI systems—systems as capable or more capable than Anthropic's notion of powerful AI—is plausible in 5 years [...] --- Outline: (02:27) What does powerful AI mean? (08:40) Earlier predictions (11:19) A proposed timeline that Anthropic might expect (19:10) Why powerful AI by early 2027 seems unlikely to me (19:37) Trends indicate longer (21:48) My rebuttals to arguments that trend extrapolations will underestimate progress (26:14) Naively trend extrapolating to full automation of engineering and then expecting powerful AI just after this is probably too aggressive (30:08) What I expect (32:12) What updates should we make in 2026? (32:17) If something like my median expectation for 2026 happens (34:07) If something like the proposed timeline (with powerful AI in March 2027) happens through June 2026 (35:25) If AI progress looks substantially slower than what I expect (36:09) If AI progress is substantially faster than I expect, but slower than the proposed timeline (with powerful AI in March 2027) (36:51) Appendix: deriving a timeline consistent with Anthropic's predictions The original text contained 94 footnotes which were omitted from this narration. --- First published: November 3rd, 2025 Source: https://www.lesswrong.com/posts/gabPgK9e83QrmcvbK/what-s-up-with-anthropic-predicting-agi-by-early-2027-1 --- Narrated by TYPE III AUDIO.

    39 minutes
  3. 3 hours ago

    “Trying to understand my own cognitive edge” by Wei Dai

    I applaud Eliezer for trying to make himself redundant, and think it's something every intellectually successful person should spend some time and effort on. I've been trying to understand my own "edge" or "moat", or cognitive traits that are responsible for whatever success I've had, in the hope of finding a way to reproduce it in others, but I'm having trouble understanding a part of it, and try to describe my puzzle here. For context, here's an earlier EAF comment explaining my history/background and what I do understand about how my cognition differs from others.[1] More Background In terms of raw intelligence, I think I'm smart but not world-class. My SAT was only 1440, 99th percentile at the time, or equivalent to about 135 IQ. (Intuitively this may be an underestimate and I'm probably closer to 99.9th percentile in IQ.) I remember struggling to learn the GNFS factoring algorithm, and then meeting another intern at a conference who had not only mastered it in the same 3 months that I had, but was presenting an improvement on the SOTA. (It generally seemed like cryptography research was full of people much smarter than myself.) I also considered myself lazy or [...] --- Outline: (00:41) More Background (02:05) The Puzzle (04:10) A Plausible Answer? The original text contained 3 footnotes which were omitted from this narration. --- First published: November 3rd, 2025 Source: https://www.lesswrong.com/posts/ophhRzHyt44qcjnkS/trying-to-understand-my-own-cognitive-edge --- Narrated by TYPE III AUDIO.

    8 minutes
  4. 7 hours ago

    “Erasmus: Social Engineering at Scale” by Martin Sustrik

    Sofia Corradi, a.k.a. Mamma Erasmus (2020) When Sofia Corradi died on October 17th, the press was full of obituaries for the spiritual mother of Erasmus, the European student exchange programme, or, in the words of Umberto Eco, “that thing where a Catalan boy goes to study in Belgium, meets a Flemish girl, falls in love with her, marries her, and starts a European family.” Yet none of the obituaries I’ve seen stressed the most important and interesting aspect of the project: its unprecedented scale. The second-largest comparable programme, the Fulbright in the United States, sends around nine thousand students abroad each year. Erasmus sends 1.3 million. So far, approximately sixteen million people have taken part in the exchanges. That amounts to roughly 3% of the European population. And with ever-growing participation rates, the ratio is going to get gradually even higher. In short, this thing is HUGE. *** As with many other international projects conceived in Europe in the latter half of the 20th century, it is ostensibly about a technical matter — scholarships and the recognition of credits from foreign universities — but at its heart, it is a peace project. Corradi recounts a story from [...] --- First published: November 3rd, 2025 Source: https://www.lesswrong.com/posts/gbfELiNWL4kFfivK8/erasmus-social-engineering-at-scale --- Narrated by TYPE III AUDIO.

    9 minutes
  5. 9 hours ago

    “What’s hard about this? What can I do about that? (Recursive)” by Raemon

    Third in a series of short rationality prompts. My opening rationality move is often "What's my goal?". It is closely followed by: "Why is this hard? And, what can I do about that?". If you're busting out deliberate "rationality" tools (instead of running on intuition or copying your neighbors), something about your situation is probably difficult. It's often useful to explicitly enumerate "What's hard about this?", and list the difficulties accurately and comprehensively[1], such that if you were to successfully deal with each hard thing, it'd be easy. Then, you have new subgoals of "figure out how to deal with each of those hard-things." And you can brainstorm solutions[2]. Sometimes, those subgoals will also be hard. Then, the thing to do is ask "okay, what's hard about this subgoal, and how do I deal with that?" Examples I'll do one example that's sort of "simple" (most of what I need to do is "try at all"), and one that's more complex (I'll need to do some fairly creative thinking to make progress). Example 1: Bureaucracy while tired I'm trying to fill out some paperwork. It requires some information I don't know how to get. (Later [...] --- Outline: (01:04) Examples (01:17) Example 1: Bureaucracy while tired (06:58) Example 2: Getting an LLM to actually debug worth a shit (12:00) Recap (12:37) Exercise for the reader The original text contained 2 footnotes which were omitted from this narration. --- First published: November 3rd, 2025 Source: https://www.lesswrong.com/posts/AQSwzEgLjM4dyWuAh/what-s-hard-about-this-what-can-i-do-about-that-recursive --- Narrated by TYPE III AUDIO.
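    To make the recursive shape of the prompt concrete, here is a minimal illustrative sketch in Python. It is not from the post; the function name, prompt wording, and depth limit are assumptions chosen for illustration only.

    # Illustrative sketch only: a toy loop over "What's hard about this?"
    # and "What can I do about that?". All names here are hypothetical.
    def decompose(goal: str, depth: int = 0, max_depth: int = 3) -> None:
        """Ask what's hard about a goal; treat each plan for a difficulty
        as a new subgoal and recurse until it feels easy or depth runs out."""
        indent = "  " * depth
        print(f"{indent}Goal: {goal}")
        if depth >= max_depth:
            return
        hard = input(f"{indent}What's hard about this? (comma-separated, blank if easy) ")
        for difficulty in filter(None, (h.strip() for h in hard.split(","))):
            plan = input(f"{indent}  What can I do about '{difficulty}'? ")
            decompose(plan, depth + 1, max_depth)  # the plan may itself be hard

    decompose("Fill out the paperwork")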

    13 minutes
  6. 10 hours ago

    “Lack of Social Grace is a Lack of Skill” by Screwtape

    1.  I have claimed that one of the fundamental questions of rationality is “what am I about to do and what will happen next?” One of the domains I ask this question the most is in social situations. There are a great many skills in the world. If I had the time and resources to do so, I’d want to master all of them. Wilderness survival, automotive repair, the Japanese language, calculus, heart surgery, French cooking, sailing, underwater basket weaving, architecture, Mexican cooking, functional programming, whatever it is people mean when they say “hey man, just let him cook.” My inability to speak fluent Japanese isn’t a sin or a crime. However, it isn’t a virtue either; if I had the option to snap my fingers and instantly acquire the knowledge, I’d do it. Now, there's a different question of prioritization; I tend to pick new skills to learn by a combination of what's useful to me, what sounds fun, and what I’m naturally good at. I picked up the basics of computer programming easily, I enjoy doing it, and it turned out to pay really well. That was an over-determined skill to learn. On the other [...] --- Outline: (00:10) 1. (03:42) 2. (06:44) 3. The original text contained 2 footnotes which were omitted from this narration. --- First published: November 3rd, 2025 Source: https://www.lesswrong.com/posts/NnTwbvvsPg5kj3BKq/lack-of-social-grace-is-a-lack-of-skill-1 --- Narrated by TYPE III AUDIO.

    11 minutes
