LessWrong posts by zvi

zvi

Audio narrations of LessWrong posts by zvi

  1. 10 HRS AGO

    “Secretary of War Tweets That Anthropic is Now a Supply Chain Risk” by Zvi

    This is the long version of what happened so far. I will strive for shorter ones later, when I have the time to write them. Most of you should read the first two sections, then choose the remaining sections that are relevant to your interests. But first, seriously, read Dean Ball's post Clawed. Do that first. I will not quote too extensively from it, because I am telling all of you to read it. Now. You’re not allowed to keep reading this or anything else until after you do. I’m not kidding. That's out of the way? Good. Let's get started. What Happened President Trump enacted a perfectly reasonable solution to the situation with Anthropic and the Department of War. He cancelled the Anthropic contract with a six month wind down period, after which the Federal Government would be told not to use Anthropic software. Everyone thought the worst was now over. The situation was unfortunate for Anthropic and also for national security, but this gave us six months to transition, it gave us six months to negotiate another solution, and it avoided any of the extreme highly damaging options that Secretary of [...] 
--- Outline: (00:49) What Happened (10:02) The Timeline Of Events (21:39) I Did Not Have Time To Write You A Short One (22:40) The Unhinged Declaration of the Secretary of War (25:08) Altman Has Been Excellent On The Question of Supply Chain Risk, But May Need To Do More (27:35) Arrogance Here Means Insisting On Meaningful Red Lines On Mass Domestic Surveillance and Lethal Autonomous Weapons (29:35) Not Doing Business Is Totally Fine (30:22) The Demand For Unrestricted Access Is New And Is Selective And Fake (32:47) Claims Of Strongarming Are Ad Hominem Bad Faith Obvious Nonsense (34:55) Hegseth Equates Not Being a Dictator With Companies Having Veto Power Over Operational Military Decisions (40:54) The Part That If Enacted Would Be A Historically Epic Clusterfuck (48:54) The Other Part Of The Clusterfuck (52:49) The Department of War Had Many Excellent Options (55:53) And Then There's Emil Michael (01:02:42) Anthropic Will Probably Survive (01:05:14) The Goal of DoW Was Largely Mass Domestic Surveillance (01:15:32) What Are The Key Differences Between The Two Contracts? (01:22:54) OpenAI's Contract Terms (01:27:04) What OpenAI's Contract Terms Actually Do (01:29:16) OpenAI Is Trusting DoW And Sam Altman Misrepresented This (01:33:13) OpenAI Accepted Terms Anthropic Explicitly Declined And That Would Not Have Protected Anthropic's Red Lines (01:35:26) How Altman Initially Described His Deal (01:42:26) OpenAI Allowed All Lawful Use And Trusts DoW On This (01:47:21) The DoW Could Alter This Deal (01:49:29) Why OpenAI's Shared Legal Language Offers Almost No Protections (01:59:11) So How Does OpenAI Hope For This To Work Out?
(02:01:56) This Was Never About Money (02:05:03) OpenAI Tells Us How They Really Feel (02:06:45) First The Good News (02:11:40) The OpenAI Redlines Only Forbid Currently Illegal Activity (02:16:28) Altman Does Not Present As Understanding The Difference In Redlines (02:18:46) Meeting Of The Minds (02:21:21) Anthropic's Position Was The Opposite Of How This Is Portrayed (02:21:56) The Room Where It Happened (02:25:45) You Don't Have The Right (02:32:25) I Ask Questions And Get Answers (02:37:05) Does This Contract Apply To NSA? (02:38:07) Can OpenAI Models Be Used To Analyze Commercially Available Data At Scale? (02:48:21) Employee Activism --- First published: March 2nd, 2026 Source: https://www.lesswrong.com/posts/Wpdivf3iNJDzBcbzJ/secretary-of-war-tweets-that-anthropic-is-now-a-supply-chain --- Narrated by TYPE III AUDIO. --- Images from the article: Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.

    2 h 53 min
  2. 3 DAYS AGO

    “Anthropic and the DoW: Anthropic Responds” by Zvi

    The Department of War gave Anthropic until 5:01pm on Friday the 27th to either give the Pentagon ‘unfettered access’ to Claude for ‘all lawful uses,’ or else. With the ‘or else’ being not the sensible ‘okay, we will cancel the contract then’ but instead either being designated a supply chain risk or having the government invoke the Defense Production Act. It is perfectly legitimate for the Department of War to decide that it does not wish to continue on Anthropic's terms, and that it will terminate the contract. There is no reason things need be taken further than that. Undersecretary of State Jeremy Lewin: This isn’t about Anthropic or the specific conditions at issue. It's about the broader premise that technology deeply embedded in our military must be under the exclusive control of our duly elected/appointed leaders. No private company can dictate normative terms of use—which can change and are subject to interpretation—for our most sensitive national security systems. The @DeptofWar obviously can’t trust a system a private company can switch off at any moment. Timothy B. Lee: OK, so don’t renew their contract. Why are you threatening to go nuclear by declaring them [...] 
--- Outline: (08:00) Good News: We Can Keep Talking (10:31) Once Again No You Do Not Need To Call Dario For Permission (15:22) The Pentagon Reiterates Its Demands And Threats (16:48) The Pentagon's Dual Threats Are Contradictory and Incoherent (18:27) The Pentagon's Position Has Unfortunate Implications (20:25) OpenAI Stands With Anthropic (22:48) xAI Stands On Unreliable Ground (25:25) Replacing Anthropic Would At Least Take Months (26:02) We Will Not Be Divided (27:50) This Risks Driving Other Companies Away (30:32) Other Reasons For Concern (32:10) Wisdom From A Retired General (35:06) Congress Urges Restraint (37:05) Reaction Is Overwhelmingly With Anthropic On This (40:52) Some Even More Highly Unhelpful Rhetoric (47:23) Other Summaries and Notes (48:32) Paths Forward --- First published: February 27th, 2026 Source: https://www.lesswrong.com/posts/ppj7v4sSCbJjLye3D/anthropic-and-the-dow-anthropic-responds --- Narrated by TYPE III AUDIO.

    50 min
  3. 4 DAYS AGO

    “AI #157: Burn the Boats” by Zvi

    Events continue to be fast and furious. This was the first actually stressful week of the year. That was mostly due to issues around Anthropic and the Department of War. This is the big event the news is not picking up, with the Pentagon on the verge of invoking one of two extreme options that would both be extremely damaging to national security and that would potentially endanger our Republic. The post has details, and the first section here has a few additional notes. Also stressful for many was the impact of Citrini's AI scenario, where it is 2028 and AI agents are sufficiently capable to disrupt the whole economy but this turns out to be bearish for stocks. People freaked out enough about this that it seems to have directly impacted the stock market, although most stocks other than the credit card companies seem to have bounced back. Of course, in a scenario like that we probably all die and definitely the world transforms, and you have bigger things to worry about than the stock market, but the post does raise a lot of very good detailed points, so I spend my post going over [...] 
--- Outline: (02:34) Anthropic and the Department of War (06:06) Language Models Offer Mundane Utility (06:39) Language Models Don't Offer Mundane Utility (08:23) Huh, Upgrades (08:43) On Your Marks (15:22) Choose Your Fighter (15:32) Deepfaketown and Botpocalypse Soon (16:58) Head In The Sand (17:58) Fun With Media Generation (19:19) A Young Lady's Illustrated Primer (19:46) You Drive Me Crazy (20:43) They Took Our Jobs (25:42) The Art of the Jailbreak (26:43) Get Involved (28:02) Introducing (31:49) In Other AI News (36:10) The India Summit (46:01) Show Me the Money (48:07) Quiet Speculations (49:25) The Quest for Sane Regulations (54:59) Chip City (56:11) The Mask Comes Off (58:19) The Week in Audio (01:07:27) Quickly, There's No Time (01:07:59) Dean Ball On Recursive Self-Improvement (01:13:28) Rhetorical Innovation (01:18:23) Aligning a Smarter Than Human Intelligence is Difficult (01:20:23) The Homework Assignment Is To Choose The Assignment (01:35:34) Agent Foundations (01:36:54) Autonomous Killer Robots (01:37:36) People Really Hate AI (01:39:50) People Are Worried About AI Killing Everyone (01:42:00) Other People Are Not As Worried About AI Killing Everyone (01:42:59) The Lighter Side (01:47:24) If I streamed Slay the Spire 2, would you watch? --- First published: February 26th, 2026 Source: https://www.lesswrong.com/posts/zC3Rtrj6RXwEde9h6/ai-157-burn-the-boats --- Narrated by TYPE III AUDIO.

    1 h 48 min
  4. 5 DAYS AGO

    “Anthropic and the Department of War” by Zvi

    The situation in AI in 2026 is crazy. The confrontation between Anthropic and Secretary of War Pete Hegseth is a new level of crazy. It risks turning quite bad for all. There's also nothing stopping it from turning out fine for everyone. By at least one report the recent meeting between the two parties was cordial and all business, but Anthropic has been given a deadline of 5pm eastern on Friday to modify its existing agreed-upon contract to grant ‘unfettered access’ to Claude, or else. Anthropic has been the most enthusiastic supporter our military has in AI and in tech, but on this point it has strongly signaled that it cannot comply. Prediction markets find it highly unlikely Anthropic will comply (14%), and think it is highly possible Anthropic will either be declared a Supply Chain Risk (16%) or be subjected to the Defense Production Act (23%). I’ve hesitated to write about this because I could make the situation worse. There have already been too many instances in AI of warnings leading directly to the thing someone is warning about, by making people aware of that possibility, increasing its salience or creating negative polarization and solidifying [...] --- Outline: (01:32) This Standoff Should Never Have Happened (06:07) Anthropic Cannot Fold (07:12) Dean Ball Gives a Primer (10:57) What Happened To Lead To This Showdown? 
(18:05) Simple Solution: Delayed Contract Termination (18:59) Better Solution: Status Quo (19:29) Extreme Option One: Supply Chain Risk (25:56) Putting Some Misconceptions To Bed (28:16) Extreme Option Two: The Defense Production Act (41:23) These Two Threats Contradict Each Other (42:40) The Pentagon's Actions Here Are Deeply Unpopular (45:45) The Pentagon's Most Extreme Potential Asks Could End The Republic (48:07) Anthropic Did Make Some Political Mistakes (49:13) Claude Is The Best Model Available (50:55) The Administration Until Now Has Been Strong On This (51:50) You Should See The Other Guys (53:16) Some Other Intuition Pumps That Might Be Helpful (53:55) Trying To Get An AI That Obeys All Orders Risks Emergent Misalignment (01:00:13) We Can All Still Win --- First published: February 25th, 2026 Source: https://www.lesswrong.com/posts/rmYB4a7Pskw7DLpCh/anthropic-and-the-department-of-war --- Narrated by TYPE III AUDIO.

    1 h 1 min
  5. 6 DAYS AGO

    “Citrini’s Scenario Is A Great But Deeply Flawed Thought Experiment” by Zvi

    A viral essay from Citrini about how AI bullishness could be bearish was impactful enough for Bloomberg to give it partial responsibility for a decline in the stock market, and all the cool economics types are talking about it. So fine, let's talk. It's an excellent work of speculative fiction, in that it: Depicts a concrete scenario with lots of details and numbers. Introduces a bunch of underexplored and important mechanisms. Gets a lot of those mechanisms more right than you would expect. Provides lots of food for thought. Takes bold stands. Is clearly labeled as ‘a scenario, not a prediction’ up at the top. Is fun to read and doesn’t let reality get in the way of exploring its ideas. The Efficient Market Hypothesis is false, whoo! Citrini: Hopefully, reading this leaves you more prepared for potential left tail risks as AI makes the economy increasingly weird. It is still a work of speculative fiction. It doesn’t let reality get in the way of its ideas. I appreciate Tor Bair's perspective of this being a case of Cunningham's Law, that the best [...] --- Outline: (03:17) The Headline Destination (05:36) ...And That's Terrible (08:59) SaaSpocalypse Now (09:48) Levels of Friction Go To Zero ...And That's Terrible (11:51) DoorDash and Uber (13:36) Breaking Into The Marketplace (14:24) Who Captures The Surplus? (15:09) For Everything Else (20:55) Real Estate Realism (23:14) Everything Is Awesome And No One Is Happy (23:49) Bearish For Non-AI Stocks Is Reasonable (25:47) Other Levels of Friction Problems (26:45) Oh Over There? That's The Singularity (28:12) Have I Got A Job For You (29:51) Systemic Failure (32:31) What Me Worry (About Economics)? (37:14) We The People (39:29) The Efficient Market Hypothesis Is False --- First published: February 24th, 2026 Source: https://www.lesswrong.com/posts/bKrpLhqcoN6WycrFp/citrini-s-scenario-is-a-great-but-deeply-flawed-thought --- Narrated by TYPE III AUDIO. 

    42 min
  6. 23 FEB

    “Claude Sonnet 4.6 Gives You Flexibility” by Zvi

    Anthropic first gave us Claude Opus 4.6, then followed up with Claude Sonnet 4.6. For most purposes Sonnet 4.6 is not as capable as Opus 4.6, but it is not that far behind, it would have been fully frontier-level a few months ago, and it is faster and cheaper than Opus. That has its advantages, including that Sonnet is in the free plan, and it seems outright superior for computer use. Anthropic: Claude Sonnet 4.6 is available now on all plans, Cowork, Claude Code, our API, and all major cloud platforms. We’ve also upgraded our free tier to Sonnet 4.6 by default—it now includes file creation, connectors, skills, and compaction. Claude Sonnet 4.6 is our most capable Sonnet model yet. It's a full upgrade of the model's skills across coding, computer use, long-context reasoning, agent planning, knowledge work, and design. Sonnet 4.6 also features a 1M token context window in beta. JB: I use it all the time because I’m poor. This substantially upgrades Claude's free tier for coding and computer use. It gives us all a better lightweight option, including for sub-agents where you would have previously needed to use Haiku. [...] --- Outline: (01:53) On Your Marks (10:09) Reactions: It's How Much You Save (18:56) Bringing It Together --- First published: February 23rd, 2026 Source: https://www.lesswrong.com/posts/u2vFY4wefyqPwwDH8/claude-sonnet-4-6-gives-you-flexibility --- Narrated by TYPE III AUDIO.

    20 min
  7. 20 FEB

    “AI #155: Welcome to Recursive Self-Improvement” by Zvi

    This was the week of Claude Opus 4.6, and also of ChatGPT-5.3-Codex. Both leading models got substantial upgrades, although OpenAI's is confined to Codex. Once again, the frontier of AI got more advanced, especially for agentic coding but also for everything else. I spent the week so far covering Opus, with two posts devoted to the extensive model card, and then one giving benchmarks, reactions, capabilities and a synthesis, which functions as the central review. We also got GLM-5, Seedance 2.0, Claude fast mode, an app for Codex and much more. Claude fast mode means you can pay a premium to get faster replies from Opus 4.6. It's very much not cheap, but it can be worth every penny. More on that in the next agentic coding update. One of the most frustrating things about AI is the constant goalpost moving, both in terms of capability and safety. People say ‘oh [X] would be a huge deal but is a crazy sci-fi concept’ or ‘[Y] will never happen’ or ‘surely we would not be so stupid as to [Z]’ and then [X], [Y] and [Z] all happen and everyone shrugs as if nothing happened and [...] 
--- Outline: (02:32) Language Models Offer Mundane Utility (03:17) Language Models Don't Offer Mundane Utility (03:33) Huh, Upgrades (04:22) On Your Marks (06:23) Overcoming Bias (07:20) Choose Your Fighter (08:44) Get My Agent On The Line (12:03) AI Conversations Are Not Privileged (12:54) Fun With Media Generation (13:59) The Superb Owl (22:07) A Word From The Torment Nexus (26:33) They Took Our Jobs (35:36) The Art of the Jailbreak (35:48) Introducing (37:28) In Other AI News (42:01) Show Me the Money (43:05) Bubble, Bubble, Toil and Trouble (53:38) Future Shock (56:06) Memory Lane (57:09) Keep The Mask On Or You're Fired (58:35) Quiet Speculations (01:03:42) The Quest for Sane Regulations (01:06:09) Chip City (01:09:46) The Week in Audio (01:10:06) Constitutional Conversation (01:11:00) Rhetorical Innovation (01:19:26) Working On It Anyway (01:22:17) The Thin Red Line (01:23:35) Aligning a Smarter Than Human Intelligence is Difficult (01:30:42) People Will Hand Over Power To The AIs (01:31:50) People Are Worried About AI Killing Everyone (01:32:40) Famous Last Words (01:40:15) Other People Are Not As Worried About AI Killing Everyone (01:42:41) The Lighter Side --- First published: February 12th, 2026 Source: https://www.lesswrong.com/posts/cytxHuLc8oHRq7sNE/ai-155-welcome-to-recursive-self-improvement --- Narrated by TYPE III AUDIO.

    1 h 48 min
  8. 20 FEB

    “AI #156 Part 2: Errors in Rhetoric” by Zvi

    Things that are being pushed into the future right now: Gemini 3.1 Pro and Gemini DeepThink V2. Claude Sonnet 4.6. Grok 4.20. Updates on Agentic Coding. Disagreement between Anthropic and the Department of War. We are officially a bit behind and will have to catch up next week. Even without all that, we have a second highly full plate today. Table of Contents (As a reminder: bold are my top picks, italics means highly skippable) Levels of Friction. Marginal costs of arguing are going down. The Art Of The Jailbreak. UK AISI finds a universal method. The Quest for Sane Regulations. Some relatively good proposals. People Really Hate AI. Alas, it is mostly for the wrong reasons. A Very Bad Paper. Nick Bostrom writes a highly disappointing paper. Rhetorical Innovation. The worst possible plan is the best one on the table. The Most Forbidden Technique. No, stop, come back. Everyone Is Or Should Be Confused About Morality. New levels of ‘can you?’ Aligning a Smarter Than Human Intelligence is Difficult. Seeking a good basin. [...] --- Outline: (00:43) Levels of Friction (04:55) The Art Of The Jailbreak (06:16) The Quest for Sane Regulations (12:09) People Really Hate AI (18:22) A Very Bad Paper (25:21) Rhetorical Innovation (32:35) The Most Forbidden Technique (34:10) Everyone Is Or Should Be Confused About Morality (36:07) Aligning a Smarter Than Human Intelligence is Difficult (44:51) We'll Just Call It Something Else (47:18) Vulnerable World Hypothesis (51:37) Autonomous Killer Robots (53:18) People Will Hand Over Power To The AIs (57:04) People Are Worried About AI Killing Everyone (59:29) Other People Are Not Worried About AI Killing Everyone (01:00:56) The Lighter Side --- First published: February 20th, 2026 Source: https://www.lesswrong.com/posts/obqmuRxwFyy8ziPrB/ai-156-part-2-errors-in-rhetoric --- Narrated by TYPE III AUDIO.

    1 h 4 min

Ratings & Reviews

5 out of 5
2 ratings

About

Audio narrations of LessWrong posts by zvi

You Might Also Like