LessWrong (30+ Karma)

LessWrong

Audio narrations of LessWrong posts.

  1. 7 HR AGO

    “Common research advice #2: say precisely what you want to say” by LawrenceC

As previously mentioned, research feedback I give to more junior research collaborators tends to fall into one of three categories: doing quick sanity checks, saying precisely what you want to say, and asking why one more time. In each case, I think the advice can be taken to an extreme I no longer endorse. Accordingly, I’ve tried to spell out the degree to which you should implement the advice, as well as what “taking it too far” might look like. I talked about doing quick sanity checks in a previous piece. Here, I talk about the second piece of advice: saying precisely what you want to say. The second most common feedback I give is that you should write down precisely what you want to express. One of the most common interactions I have with junior researchers goes as follows: I read a draft section of their research writeup, which often consists of many paragraphs detailing various seemingly disconnected ideas, as well as 5-10 different figures. I’m confused about what the point of the section is. I ask them what exactly they’re trying to say in the section. They give me a [...] The original text contained 1 footnote which was omitted from this narration. --- First published: April 2nd, 2026 Source: https://www.lesswrong.com/posts/wX8JniiTpbYBdWopD/common-research-advice-2-say-precisely-what-you-want-to-say --- Narrated by TYPE III AUDIO.

    6 min
  2. 16 HR AGO

    “AI #162: Visions of Mythos” by Zvi

Anthropic had some problems with leaks this week. We learned that they are sitting on a new larger-than-Opus AI model, Mythos, that they believe offers a step change in cyber capabilities. We also got a full leak of the source for Claude Code. Oh, and Axios was compromised, on the heels of LiteLLM. This looks to be getting a lot more common. Defense beats offense in most cases, but offense is getting a lot more shots on goal than it used to. The AI Doc: Or How I Became an Apocaloptimist came out this week. I gave it 4.5/5 stars, and I think the world would be better off if more people saw it. I am not generally a fan of documentary movies, but this is probably my new favorite, replacing The King of Kong: A Fistful of Quarters. There was also the usual background hum of quite a lot of things happening, including the latest iterations of various debates. We may or may not be doomed to die, but we are definitely doomed to repeat certain motions quite a few more times, and for people to be rather slow to update. We got some very welcome quiet on the [...] --- Outline: (01:41) Language Models Offer Mundane Utility (03:00) Heads In The Sand (07:05) Huh, Upgrades (08:10) Mythos (12:07) What's In A Name (14:59) On Your Marks (16:10) Choose Your Fighter (16:53) Get My Agent On The Line (17:31) Deepfaketown and Botpocalypse Soon (24:33) Cyber Lack Of Security (29:08) Fun With Media Generation (29:50) A Young Lady's Illustrated Primer (30:53) They Took Our Jobs (37:45) After They Take Our Jobs (39:16) Gell-Mann Amnesia (41:33) Get Involved (43:25) In Other AI News (46:41) Show Me the Money (51:08) Quiet Speculations (51:59) Explaining Persistent Model Parity (55:37) Take a Moment (01:00:54) OpenAI: The Histories (01:06:04) The Department of AI War (01:12:38) Department of AI Solidarity (01:13:46) Writing For The AIs (01:16:42) Quickly, There's No Time (01:16:46) The Quest for Sane Regulations (01:18:10) Chip City (01:20:07) You Received The Federal Framework (01:21:02) The Week in Audio (01:24:22) Rhetorical Innovation (01:27:48) I Am The Very Human Of A Frontier Language Model (01:38:01) Aligning a Smarter Than Human Intelligence is Difficult (01:41:22) Aligning Fake Graphs Can Also Be Difficult (01:49:32) The Lighter Side --- First published: April 2nd, 2026 Source: https://www.lesswrong.com/posts/iBeTkFuQwjaRPo3Ad/ai-162-visions-of-mythos --- Narrated by TYPE III AUDIO.

    1hr 50min
  3. 17 HR AGO

    “2026: The year of throwing my agency at my health (now with added cyborgism)” by Ruby

I have bipolar disorder. I was diagnosed in late 2012 following my one and only severe manic episode. Most psychiatrists would regard me as a resounding success case – I have never come even remotely close to suicidal depression, manic delusions of grandeur, impulsive spending, or irresponsible sexual behavior. By standard measures, I am well-adjusted, functional, and successful. Part of this relative success is adherence to appropriate medication, and another part is maintaining good insight[1] into my mental state. Years ago, I defined a personal bipolar index scale to communicate my mental state to myself and those close to me. My bipolar index ranges from -10 to +10 and is a subjective self-report. -10 would be a state of extreme suicidal depression. +10 would be extreme mania with complete loss of insight, delusions of grandeur, pressured speech, psychosis, etc. 0 is the perfectly balanced state in the middle, neither up nor down. In the last decade and a half, I don't think I've ever broken out of the -3 to +3 range. -2 to +1 is standard, and I'm between -1 and +0.5 most of the time. Really, an extreme success case by typical psychiatric standards. Yet the disease burden [...] The original text contained 4 footnotes which were omitted from this narration. --- First published: April 2nd, 2026 Source: https://www.lesswrong.com/posts/CuTeXRShovP5gDBLy/2026-the-year-of-throwing-my-agency-at-my-health-now-with --- Narrated by TYPE III AUDIO.

    7 min
  4. 19 HR AGO

    [Linkpost] “Q1 2026 Timelines Update” by Daniel Kokotajlo, elifland, bhalstead

This is a link post. We’re mostly focused on research and writing for our next big scenario, but we’re also continuing to think about AI timelines and takeoff speeds, monitoring the evidence as it comes in, and adjusting our expectations accordingly. We’re tentatively planning on making quarterly updates to our timelines and takeoff forecasts. Since we published the AI Futures Model 3 months ago, we’ve updated towards shorter timelines. Daniel's Automated Coder (AC) median has moved from late 2029 to mid 2028, and Eli's forecast has moved a similar amount. The AC milestone is the point at which an AGI company would rather lay off all of their human software engineers than stop using AIs for software engineering. The reasons behind this change include: we switched to METR Time Horizon version 1.1; we included data from newly evaluated models (Gemini 3, GPT-5.2, and Claude Opus 4.6); and Daniel and Eli revised their estimates for the present doubling time of the METR time horizon to be faster, from a 5.5-month median previously to 4 months for Daniel and 4.5 months for Eli. We revised it due to: (a) METR's new v1.1 trend being faster than their previous v1.0, (b) new [...] The original text contained 4 footnotes which were omitted from this narration. --- First published: April 2nd, 2026 Source: https://www.lesswrong.com/posts/XLLjqMxETva3ABtsK/q1-2026-timelines-update Linkpost URL: https://open.substack.com/pub/aifutures1/p/q1-2026-timelines-update --- Narrated by TYPE III AUDIO.
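To make the doubling-time update concrete, here is a minimal sketch of the extrapolation arithmetic, assuming a simple exponential model for the METR time horizon. The starting horizon, target horizon, and dates below are illustrative placeholders, not figures from the post or from METR's data.

```python
# Minimal sketch of how a faster doubling time pulls a forecast milestone
# earlier, assuming simple exponential growth of the METR time horizon.
# The starting horizon, target horizon, and start date are hypothetical
# placeholders, not figures from the post.
import math
from datetime import date, timedelta

def months_to_reach(start_horizon: float, target_horizon: float,
                    doubling_time_months: float) -> float:
    """Months for the horizon to grow from start to target at a fixed doubling time."""
    doublings = math.log2(target_horizon / start_horizon)
    return doublings * doubling_time_months

today = date(2026, 4, 1)       # hypothetical forecast date
start_horizon = 10.0           # hypothetical current time horizon, in hours
target_horizon = 2000.0        # hypothetical horizon associated with the AC milestone

for label, doubling in [("5.5-month doubling (old)", 5.5),
                        ("4-month doubling (revised)", 4.0)]:
    months = months_to_reach(start_horizon, target_horizon, doubling)
    eta = today + timedelta(days=months * 30.44)  # 30.44 = average month length in days
    print(f"{label}: ~{months:.0f} months -> {eta.isoformat()}")
```

With these placeholder numbers, shortening the doubling time from 5.5 to 4 months pulls the milestone forward by roughly a year, the same direction and broadly similar magnitude as the median shift from late 2029 to mid 2028 described above.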

    6 min
  5. 23 HR AGO

    “How social ideas get corrupt” by Kaj_Sotala

I’ve noticed that sometimes there is an idea or framework that seems great to me, and I also know plenty of people who use it in a great and sensible way. Then I run into people online who say that “this idea is terrible and people use it in horrible ways”. When I ask why, they point to people applying the idea in ways that do indeed seem terrible - and in fact, applying it in ways that seem to me like the opposite of what the idea is actually saying. Of course, some people might think that I’m the one with the wrong and terrible version of the idea. I’m not claiming that my interpretation is necessarily always the correct one. But I do think that there's a principle like “every ~social idea[1] acquires a corrupted version”, and that the corruption tends to serve specific purposes rather than being random. Here are a couple of examples: Attachment theory. People with insecure attachment read about attachment theory, and then what they imagine secure attachment to look like is actually an idealized version of their own insecure attachment pattern. Someone with anxious attachment might think that a secure relationship [...] --- Outline: (03:12) Emotionally selective reading (05:42) The effects of vibe and my own corruption-complicity (09:52) Conflicted authors The original text contained 4 footnotes which were omitted from this narration. --- First published: April 2nd, 2026 Source: https://www.lesswrong.com/posts/xezLdonsRbdDj2CAx/how-social-ideas-get-corrupt --- Narrated by TYPE III AUDIO.

    14 min
  6. 1 DAY AGO

    “The Indestructible Future” by WillPetillo

    Doctor: Mr. Burns, I'm afraid you are the sickest man in the United States. You have everything! [...] Burns: You're sure you just haven't made thousands of mistakes? Doctor: Uh, no. No, I'm afraid not. Burns: This sounds like bad news! Doctor: Well, you'd think so, but all of your diseases are in perfect balance, [...] we call it "Three Stooges Syndrome" Burns: So, what you're saying is...I'm indestructible! Doctor: Oh, no, no! In fact, even a slight breeze could— Burns: Indestructible... In the transition to ASI, humanity's survival ended up depending on a global "Three Stooges Syndrome." As demographers predicted, fertility rates collapsed. Aging populations were going to hollow out workforces, crush pension systems, and leave a skeleton crew of exhausted 50-year-olds trying to maintain civilization for an enormous retired population whose lives kept extending one year per year. Then AI automated...everything. The worker shortage met the automation wave head-on, and the babies that weren't born didn't grow up to need jobs that no longer existed. Metabolic disorders, attention fragmentation, and other health effects of increasingly artificial lives were all real. Biomedical AI, however, iteratively compressed the drug development cycle and diagnostic systems such that the treatment curves [...] --- First published: April 1st, 2026 Source: https://www.lesswrong.com/posts/v629JQLgv3r9zhemZ/the-indestructible-future --- Narrated by TYPE III AUDIO.

    8 min
  7. 1 DAY AGO

    “My most common advice for junior researchers” by LawrenceC

Written quickly as part of the Inkhaven Fellowship. At a high level, the research feedback I give to more junior research collaborators often falls into one of three categories: doing quick sanity checks, saying precisely what you want to say, and asking why one more time. In each case, I think the advice can be taken to an extreme I no longer endorse. Accordingly, I’ve tried to spell out the degree to which you should implement the advice, as well as what “taking it too far” might look like. This piece covers doing quick sanity checks, which is the most common advice I give to junior researchers. I’ll cover the other two pieces of advice in a subsequent piece. Research is hard (almost by definition) and people are often wrong. Every researcher has wasted countless hours or days, if not weeks or months, chasing fruitless lines of investigation.[1] Oftentimes, this time could’ve been saved with a few basic sanity checks. Does your idea make sense at all? Does your data have obvious sources of bias (e.g. forms of selection bias) or other issues (e.g. using the wrong prompt)? Does your theorem make [...] The original text contained 4 footnotes which were omitted from this narration. --- First published: April 1st, 2026 Source: https://www.lesswrong.com/posts/dYHFtEnKc4BdJEYY4/my-most-common-advice-for-junior-researchers --- Narrated by TYPE III AUDIO.
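The kinds of checks the excerpt names (obvious selection bias in the data, using the wrong prompt) can often be automated in a few lines. A minimal sketch in Python; the dataset, column names, and thresholds are hypothetical illustrations, not taken from the post.

```python
# A few quick data sanity checks in the spirit of the advice above. The
# dataset, column names, and thresholds are hypothetical illustrations,
# not taken from the post.
import pandas as pd

df = pd.DataFrame({
    "prompt": ["Q: 2+2?", "Q: capital of France?", "Q: boiling point of water?"],
    "label":  [1, 0, 1],
    "source": ["web_scrape", "textbook", "web_scrape"],
})

# Check 1: is the data what you think it is? Eyeball a few raw rows.
print(df.head())

# Check 2: obvious selection bias? Everything from one source is a red flag.
assert df["source"].nunique() > 1, "all rows from a single source: possible selection bias"

# Check 3: degenerate labels? A near-constant label makes accuracy meaningless.
majority_fraction = df["label"].value_counts(normalize=True).iloc[0]
assert majority_fraction < 0.9, f"labels are {majority_fraction:.0%} one class"

# Check 4: did you run the prompt you meant to? Exact duplicates can mean a
# prompt template was silently reused with the wrong contents.
assert df["prompt"].nunique() == len(df), "duplicate prompts: wrong template reused?"
```

The specifics will differ per project; the point, as in the post, is that checks this cheap are worth running before any serious analysis.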

    5 min
