LessWrong (30+ Karma)

LessWrong

Audio narrations of LessWrong posts.

  1. 12 hours ago

    “New Statement Calls For Not Building Superintelligence For Now” by Zvi

    Building superintelligence poses large existential risks. Also known as: If Anyone Builds It, Everyone Dies. Where ‘it’ is superintelligence, and ‘dies’ is that probably everyone on the planet literally dies. We should not build superintelligence until such time as that changes, and the risk of everyone dying as a result, as well as the risk of losing control over the future as a result, is very low. Not zero, but far lower than it is now or will be soon. Thus, the Statement on Superintelligence from FLI, which I have signed. Context: Innovative AI tools may bring unprecedented health and prosperity. However, alongside tools, many leading AI companies have the stated goal of building superintelligence in the coming decade that can significantly outperform all humans on essentially all cognitive tasks. This has raised concerns, ranging from human economic obsolescence and disempowerment, losses of freedom, civil liberties [...]

    Outline:
    (02:02) A Brief History Of Prior Statements
    (03:51) This Third Statement
    (05:08) Who Signed It
    (07:27) Pushback Against the Statement
    (09:05) Responses To The Pushback
    (12:32) Avoid Negative Polarization But Speak The Truth As You See It

    First published: October 24th, 2025
    Source: https://www.lesswrong.com/posts/QzY6ucxy8Aki2wJtF/new-statement-calls-for-not-building-superintelligence-for

    Narrated by TYPE III AUDIO.

    Images from the article: Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.

    14 minutes
  2. 13 hours ago

    “AI #139: The Overreach Machines” by Zvi

    The big release this week was OpenAI giving us a new browser, called Atlas. The idea of Atlas is that it is Chrome, except with ChatGPT integrated throughout to let you enter agent mode and chat with web pages and edit or autocomplete text, and that will watch everything you do and take notes to be more useful to you later. From the consumer standpoint, does the above sound like a good trade to you? A safe place to put your trust? How about if it also involves (at least for now) giving up many existing Chrome features? From OpenAI's perspective, a lot of that could have been done via a Chrome extension, but by making a browser some things get easier, and more importantly OpenAI gets to go after browser market share and avoid dependence on Google. I’m going to stick with using Claude [...]

    Outline:
    (02:01) Language Models Offer Mundane Utility
    (03:07) Language Models Don't Offer Mundane Utility
    (04:52) Huh, Upgrades
    (05:24) On Your Marks
    (10:15) Language Barrier
    (12:50) From ChatGPT, a Chinese answer to the question about which qualities children should have:
    (13:30) ChatGPT in English on the same question:
    (15:17) Choose Your Fighter
    (17:19) Get My Agent On The Line
    (18:54) Fun With Media Generation
    (23:09) Copyright Confrontation
    (25:19) You Drive Me Crazy
    (35:31) They Took Our Jobs
    (44:06) A Young Lady's Illustrated Primer
    (44:42) Get Involved
    (45:33) Introducing
    (47:15) In Other AI News
    (48:30) Show Me the Money
    (51:03) So You've Decided To Become Evil
    (53:18) Quiet Speculations
    (56:11) People Really Do Not Like AI
    (57:18) The Quest for Sane Regulations
    (01:00:55) Alex Bores Launches Campaign For Congress
    (01:03:33) Chip City
    (01:10:17) The Week in Audio
    (01:13:00) Rhetorical Innovation
    (01:16:17) Don't Take The Bait
    (01:27:29) Do You Feel In Charge?
    (01:29:30) Tis The Season Of Evil
    (01:34:45) People Are Worried About AI Killing Everyone
    (01:36:00) The Lighter Side

    First published: October 23rd, 2025
    Source: https://www.lesswrong.com/posts/qC3M3x2FwiG2Qm7Jj/ai-139-the-overreach-machines

    Narrated by TYPE III AUDIO.

    1 hour 38 minutes
  3. 1 day ago

    “How an AI company CEO could quietly take over the world” by Alex Kastner

    Cross-posted from the AI Futures Project Substack. This post outlines a concrete scenario for how takeover by an AI company CEO might go, which I developed during MATS with the AI Futures Project. It represents one plausible story among a few that seem roughly equally likely to me. I’m interested in feedback and discussion. If the future is to hinge on AI, it stands to reason that AI company CEOs are in a good position to usurp power.[1] This didn’t quite happen in our AI 2027 scenarios. In one, the AIs were misaligned and outside any human's control; in the other, the government semi-nationalized AI before the point of no return, and the CEO was only one of several stakeholders in the final oversight committee (to be clear, we view the extreme consolidation of power into that oversight committee as a less-than-desirable component of that ending). Nevertheless, it seems [...]

    Outline:
    (03:50) July 2027: OpenBrain's CEO fears losing control
    (07:03) August 2027: The invisible coup
    (09:09) Rest of 2027: Government oversight arrives, but too late
    (10:51) Early 2028: Eliminating the competition
    (13:11) Late 2028: Diffusion and information control
    (15:12) Rest of time
    (17:58) Endword

    The original text contained 12 footnotes which were omitted from this narration.

    First published: October 23rd, 2025
    Source: https://www.lesswrong.com/posts/HtW3gNsaLYrSuzmda/how-an-ai-company-ceo-could-quietly-take-over-the-world

    Narrated by TYPE III AUDIO.

    22 minutes
