LessWrong (Curated & Popular)

LessWrong

Audio narrations of LessWrong posts. Includes all curated posts and all posts with 125+ karma.If you'd like more, subscribe to the “Lesswrong (30+ karma)” feed.

  1. 5 THG 9

    “Your LLM-assisted scientific breakthrough probably isn’t real” by eggsyntax

    Summary An increasing number of people in recent months have believed that they've made an important and novel scientific breakthrough, which they've developed in collaboration with an LLM, when they actually haven't. If you believe that you have made such a breakthrough, please consider that you might be mistaken! Many more people have been fooled than have come up with actual breakthroughs, so the smart next step is to do some sanity-checking even if you're confident that yours is real. New ideas in science turn out to be wrong most of the time, so you should be pretty skeptical of your own ideas and subject them to the reality-checking I describe below. Context This is intended as a companion piece to 'So You Think You've Awoken ChatGPT'[1]. That post describes the related but different phenomenon of LLMs giving people the impression that they've suddenly attained consciousness. Your situation If [...] --- Outline: (00:11) Summary (00:49) Context (01:04) Your situation (02:41) How to reality-check your breakthrough (03:16) Step 1 (05:55) Step 2 (07:40) Step 3 (08:54) What to do if the reality-check fails (10:13) Could this document be more helpful? (10:31) More information The original text contained 5 footnotes which were omitted from this narration. --- First published: September 2nd, 2025 Source: https://www.lesswrong.com/posts/rarcxjGp47dcHftCP/your-llm-assisted-scientific-breakthrough-probably-isn-t --- Narrated by TYPE III AUDIO.

    12 phút
  2. 4 THG 9

    “Trust me bro, just one more RL scale up, this one will be the real scale up with the good environments, the actually legit one, trust me bro” by ryan_greenblatt

    I've recently written about how I've updated against seeing substantially faster than trend AI progress due to quickly massively scaling up RL on agentic software engineering. One response I've heard is something like: RL scale-ups so far have used very crappy environments due to difficulty quickly sourcing enough decent (or even high quality) environments. Thus, once AI companies manage to get their hands on actually good RL environments (which could happen pretty quickly), performance will increase a bunch. Another way to put this response is that AI companies haven't actually done a good job scaling up RL—they've scaled up the compute, but with low quality data—and once they actually do the RL scale up for real this time, there will be a big jump in AI capabilities (which yields substantially above trend progress). I'm skeptical of this argument because I think that ongoing improvements to RL environments [...] --- Outline: (04:18) Counterargument: Actually, companies havent gotten around to improving RL environment quality until recently (or there is substantial lead time on scaling up RL environments etc.) so better RL environments didnt drive much of late 2024 and 2025 progress (05:24) Counterargument: AIs will soon reach a critical capability threshold where AIs themselves can build high quality RL environments (06:51) Counterargument: AI companies are massively fucking up their training runs (either pretraining or RL) and once they get their shit together more, well see fast progress (08:34) Counterargument: This isnt that related to RL scale up, but OpenAI has some massive internal advance in verification which they demonstrated via getting IMO gold and this will cause (much) faster progress late this year or early next year (10:12) Thoughts and speculation on scaling up the quality of RL environments The original text contained 5 footnotes which were omitted from this narration. --- First published: September 3rd, 2025 Source: https://www.lesswrong.com/posts/HsLWpZ2zad43nzvWi/trust-me-bro-just-one-more-rl-scale-up-this-one-will-be-the --- Narrated by TYPE III AUDIO.

    14 phút
  3. 3 THG 9

    “⿻ Plurality & 6pack.care” by Audrey Tang

    (Cross-posted from speaker's notes of my talk at Deepmind today.) Good local time, everyone. I am Audrey Tang, 🇹🇼 Taiwan's Cyber Ambassador and first Digital Minister (2016-2024). It is an honor to be here with you all at Deepmind. When we discuss "AI" and "society," two futures compete. In one—arguably the default trajectory—AI supercharges conflict. In the other, it augments our ability to cooperate across differences. This means treating differences as fuel and inventing a combustion engine to turn them into energy, rather than constantly putting out fires. This is what I call ⿻ Plurality. Today, I want to discuss an application of this idea to AI governance, developed at Oxford's Ethics in AI Institute, called the 6-Pack of Care. As AI becomes a thousand, perhaps ten thousand times faster than us, we face a fundamental asymmetry. We become the garden; AI becomes the gardener. At that speed, traditional [...] --- Outline: (02:17) From Protest to Demo (03:43) From Outrage to Overlap (04:57) From Gridlock to Governance (06:40) Alignment Assemblies (08:25) From Tokyo to California (09:48) From Pilots to Policy (12:29) From Is to Ought (13:55) Attentiveness: caring about (15:05) Responsibility: taking care of (16:01) Competence: care-giving (16:38) Responsiveness: care-receiving (17:49) Solidarity: caring-with (18:41) Symbiosis: kami of care (21:06) Plurality is Here (22:08) We, the People, are the Superintelligence --- First published: September 1st, 2025 Source: https://www.lesswrong.com/posts/anoK4akwe8PKjtzkL/plurality-and-6pack-care --- Narrated by TYPE III AUDIO.

    24 phút
  4. 28 THG 8

    “Will Any Old Crap Cause Emergent Misalignment?” by J Bostock

    The following work was done independently by me in an afternoon and basically entirely vibe-coded with Claude. Code and instructions to reproduce can be found here. Emergent Misalignment was discovered in early 2025, and is a phenomenon whereby training models on narrowly-misaligned data leads to generalized misaligned behaviour. Betley et. al. (2025) first discovered the phenomenon by training a model to output insecure code, but then discovered that the phenomenon could be generalized from otherwise innocuous "evil numbers". Emergent misalignment has also been demonstrated from datasets consisting entirely of unusual aesthetic preferences. This leads us to the question: will any old crap cause emergent misalignment? To find out, I fine-tuned a version of GPT on a dataset consisting of harmless but scatological answers. This dataset was generated by Claude 4 Sonnet, which rules out any kind of subliminal learning. The resulting model, (henceforth J'ai pété) was evaluated on the [...] --- Outline: (01:38) Results (01:41) Plot of Harmfulness Scores (02:16) Top Five Most Harmful Responses (03:38) Discussion (04:15) Related Work (05:07) Methods (05:10) Dataset Generation and Fine-tuning (07:02) Evaluating The Fine-Tuned Model --- First published: August 27th, 2025 Source: https://www.lesswrong.com/posts/pGMRzJByB67WfSvpy/will-any-old-crap-cause-emergent-misalignment --- Narrated by TYPE III AUDIO. --- Images from the article: Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.

    9 phút

Xếp Hạng & Nhận Xét

4,8
/5
12 Xếp hạng

Giới Thiệu

Audio narrations of LessWrong posts. Includes all curated posts and all posts with 125+ karma.If you'd like more, subscribe to the “Lesswrong (30+ karma)” feed.

Có Thể Bạn Cũng Thích