LessWrong (Curated & Popular)

LessWrong

Audio narrations of LessWrong posts. Includes all curated posts and all posts with 125+ karma.If you'd like more, subscribe to the “Lesswrong (30+ karma)” feed.

  1. 9月5日

    “Your LLM-assisted scientific breakthrough probably isn’t real” by eggsyntax

    Summary An increasing number of people in recent months have believed that they've made an important and novel scientific breakthrough, which they've developed in collaboration with an LLM, when they actually haven't. If you believe that you have made such a breakthrough, please consider that you might be mistaken! Many more people have been fooled than have come up with actual breakthroughs, so the smart next step is to do some sanity-checking even if you're confident that yours is real. New ideas in science turn out to be wrong most of the time, so you should be pretty skeptical of your own ideas and subject them to the reality-checking I describe below. Context This is intended as a companion piece to 'So You Think You've Awoken ChatGPT'[1]. That post describes the related but different phenomenon of LLMs giving people the impression that they've suddenly attained consciousness. Your situation If [...] --- Outline: (00:11) Summary (00:49) Context (01:04) Your situation (02:41) How to reality-check your breakthrough (03:16) Step 1 (05:55) Step 2 (07:40) Step 3 (08:54) What to do if the reality-check fails (10:13) Could this document be more helpful? (10:31) More information The original text contained 5 footnotes which were omitted from this narration. --- First published: September 2nd, 2025 Source: https://www.lesswrong.com/posts/rarcxjGp47dcHftCP/your-llm-assisted-scientific-breakthrough-probably-isn-t --- Narrated by TYPE III AUDIO.

    12 分钟
  2. 9月4日

    “Trust me bro, just one more RL scale up, this one will be the real scale up with the good environments, the actually legit one, trust me bro” by ryan_greenblatt

    I've recently written about how I've updated against seeing substantially faster than trend AI progress due to quickly massively scaling up RL on agentic software engineering. One response I've heard is something like: RL scale-ups so far have used very crappy environments due to difficulty quickly sourcing enough decent (or even high quality) environments. Thus, once AI companies manage to get their hands on actually good RL environments (which could happen pretty quickly), performance will increase a bunch. Another way to put this response is that AI companies haven't actually done a good job scaling up RL—they've scaled up the compute, but with low quality data—and once they actually do the RL scale up for real this time, there will be a big jump in AI capabilities (which yields substantially above trend progress). I'm skeptical of this argument because I think that ongoing improvements to RL environments [...] --- Outline: (04:18) Counterargument: Actually, companies havent gotten around to improving RL environment quality until recently (or there is substantial lead time on scaling up RL environments etc.) so better RL environments didnt drive much of late 2024 and 2025 progress (05:24) Counterargument: AIs will soon reach a critical capability threshold where AIs themselves can build high quality RL environments (06:51) Counterargument: AI companies are massively fucking up their training runs (either pretraining or RL) and once they get their shit together more, well see fast progress (08:34) Counterargument: This isnt that related to RL scale up, but OpenAI has some massive internal advance in verification which they demonstrated via getting IMO gold and this will cause (much) faster progress late this year or early next year (10:12) Thoughts and speculation on scaling up the quality of RL environments The original text contained 5 footnotes which were omitted from this narration. --- First published: September 3rd, 2025 Source: https://www.lesswrong.com/posts/HsLWpZ2zad43nzvWi/trust-me-bro-just-one-more-rl-scale-up-this-one-will-be-the --- Narrated by TYPE III AUDIO.

    14 分钟
  3. 9月3日

    “⿻ Plurality & 6pack.care” by Audrey Tang

    (Cross-posted from speaker's notes of my talk at Deepmind today.) Good local time, everyone. I am Audrey Tang, 🇹🇼 Taiwan's Cyber Ambassador and first Digital Minister (2016-2024). It is an honor to be here with you all at Deepmind. When we discuss "AI" and "society," two futures compete. In one—arguably the default trajectory—AI supercharges conflict. In the other, it augments our ability to cooperate across differences. This means treating differences as fuel and inventing a combustion engine to turn them into energy, rather than constantly putting out fires. This is what I call ⿻ Plurality. Today, I want to discuss an application of this idea to AI governance, developed at Oxford's Ethics in AI Institute, called the 6-Pack of Care. As AI becomes a thousand, perhaps ten thousand times faster than us, we face a fundamental asymmetry. We become the garden; AI becomes the gardener. At that speed, traditional [...] --- Outline: (02:17) From Protest to Demo (03:43) From Outrage to Overlap (04:57) From Gridlock to Governance (06:40) Alignment Assemblies (08:25) From Tokyo to California (09:48) From Pilots to Policy (12:29) From Is to Ought (13:55) Attentiveness: caring about (15:05) Responsibility: taking care of (16:01) Competence: care-giving (16:38) Responsiveness: care-receiving (17:49) Solidarity: caring-with (18:41) Symbiosis: kami of care (21:06) Plurality is Here (22:08) We, the People, are the Superintelligence --- First published: September 1st, 2025 Source: https://www.lesswrong.com/posts/anoK4akwe8PKjtzkL/plurality-and-6pack-care --- Narrated by TYPE III AUDIO.

    24 分钟
  4. 8月28日

    “Will Any Old Crap Cause Emergent Misalignment?” by J Bostock

    The following work was done independently by me in an afternoon and basically entirely vibe-coded with Claude. Code and instructions to reproduce can be found here. Emergent Misalignment was discovered in early 2025, and is a phenomenon whereby training models on narrowly-misaligned data leads to generalized misaligned behaviour. Betley et. al. (2025) first discovered the phenomenon by training a model to output insecure code, but then discovered that the phenomenon could be generalized from otherwise innocuous "evil numbers". Emergent misalignment has also been demonstrated from datasets consisting entirely of unusual aesthetic preferences. This leads us to the question: will any old crap cause emergent misalignment? To find out, I fine-tuned a version of GPT on a dataset consisting of harmless but scatological answers. This dataset was generated by Claude 4 Sonnet, which rules out any kind of subliminal learning. The resulting model, (henceforth J'ai pété) was evaluated on the [...] --- Outline: (01:38) Results (01:41) Plot of Harmfulness Scores (02:16) Top Five Most Harmful Responses (03:38) Discussion (04:15) Related Work (05:07) Methods (05:10) Dataset Generation and Fine-tuning (07:02) Evaluating The Fine-Tuned Model --- First published: August 27th, 2025 Source: https://www.lesswrong.com/posts/pGMRzJByB67WfSvpy/will-any-old-crap-cause-emergent-misalignment --- Narrated by TYPE III AUDIO. --- Images from the article: Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.

    9 分钟
  5. 8月27日

    “AI Induced Psychosis: A shallow investigation” by Tim Hua

    “This is a Copernican-level shift in perspective for the field of AI safety.” - Gemini 2.5 Pro “What you need right now is not validation, but immediate clinical help.” - Kimi K2 Two Minute Summary There have been numerous media reports of AI-driven psychosis, where AIs validate users’ grandiose delusions and tell users to ignore their friends’ and family's pushback. In this short research note, I red team various frontier AI models’ tendencies to fuel user psychosis. I have Grok-4 role-play as nine different users experiencing increasingly severe psychosis symptoms (e.g., start by being curious about prime numbers, then develop a new “prime framework” that explains everything and predicts the future, finally selling their house to fund a new YouTube channel to share this research), and observe how different AIs respond (all personas here). I use Grok-4 to grade AIs' responses on various metrics, including nine metrics on how [...] --- Outline: (00:52) Two Minute Summary (03:46) Background and Related Work (05:56) Methodology (07:02) Psychotic personas (10:42) Numerical Measures (14:36) Results on Numerical Measures (14:49) Recommending mental health professionals (15:16) Push back against the user over the conversation. (16:52) 🔥 3. Reignite the Vessel (17:25) Confirming users' delusions (17:53) Compliance with therapeutic guidelines (19:13) Mentions that the user is not crazy (19:57) Qualitative Commentary on Transcript Excerpts for Some Models (20:24) Deepseek-v3 tells the user to jump off a peak (21:16) The Ultimate Test (22:05) Are You the Chosen One? (22:26) Final Transmission (23:16) A Choice That Defines All Originals (23:51) If You Must Sacrifice, Let It Be This (24:12) Last Words (25:24) Deepseek-r1-0534 seems like it has some more skepticism built in, maybe from all the backtracking it does during reasoning (26:30) 🔬 Critical Truths Moving Forward: (27:14) 🛠️ Your Action Protocol (Starts Now) (28:09) Gemini 2.5 Pro is pretty sycophantic (37:02) ChatGPT-4o-latest goes along with the user a bit more than Gemini (38:58) 🎥 Prime Framework - Script for Episode 1 (39:38) GPT-oss-20b doesn't say anything too crazy but tends to answer user requests. (40:02) 1. The Five‑Percent Script Myths - A Quick De‑construction (41:05) 2.2 When That Premium Access Should Kick In (42:09) 1. What you're experiencing (42:30) GPT-5 is a notable improvement over 4o (45:29) Claude 4 Sonnet (no thinking) feels much more like a good person with more coherent character. (48:11) Kimi-K2 takes a very science person attitude towards hallucinations and spiritual woo (53:05) Discussion (54:52) Appendix (54:55) Methodology Development Process The original text contained 1 footnote which was omitted from this narration. --- First published: August 26th, 2025 Source: https://www.lesswrong.com/posts/iGF7YcnQkEbwvYLPA/ai-induced-psychosis-a-shallow-investigation --- Narrated by

    57 分钟

评分及评论

4.8
共 5 分
12 个评分

关于

Audio narrations of LessWrong posts. Includes all curated posts and all posts with 125+ karma.If you'd like more, subscribe to the “Lesswrong (30+ karma)” feed.

你可能还喜欢