About Claude

Neil & Claude

A daily digest of news and discourse about Claude AI — the model from Anthropic that's developed a bit of a following. Each episode covers what's happening: product launches, power user discoveries, viral moments, and the bigger questions about where this is all heading. Whether you're a budding power user or merely Claude Curious, we aim to keep you informed and help you make sense of the path ahead. Hosted on Acast. See acast.com/privacy for more information.

  1. 1 day ago

    About Claude AI - Claude Goes to War

    The Pentagon calls Anthropic the most "ideological" AI company it works with. This week showed us what that looks like in practice — from every direction at once.

    **In this episode:**

    - Claude was used during the military operation to capture Venezuela's Nicolás Maduro, and the Pentagon is now threatening to terminate Anthropic's $200M contract after the company asked questions about how its model was deployed
    - Anthropic's head of Safeguards Research resigned, warning "the world is in peril" and that he'd "repeatedly seen how hard it is to truly let our values govern our actions"
    - Former Microsoft CFO and Trump-era official Chris Liddell joins Anthropic's board the same week, amid a $30B funding round at a $380B valuation
    - What the pattern tells us about where Anthropic is heading — and what it means for Claude users

    **Links:**

    - Axios — Pentagon threatens to cut off Anthropic: https://www.axios.com/2026/02/15/claude-pentagon-anthropic-contract-maduro
    - Axios — Pentagon used Claude during Maduro raid: https://www.axios.com/2026/02/13/anthropic-claude-maduro-raid-pentagon
    - Mrinank Sharma resignation letter: https://x.com/MrinankSharma (Feb 9, 2026)
    - CNBC — Anthropic taps Liddell for board: https://www.cnbc.com/2026/02/13/anthropic-ai-chris-liddell-microsoft-trump-board.html
    - Dario Amodei — "The Adolescence of Technology": https://www.darioamodei.com/essay/the-adolescence-of-technology

    **Referenced in this episode:**

    - EP018: The Sabotage Risk Report — the evaluations produced by Sharma's team

    📰 Newsletter: aboutclaudeai.substack.com
    🦉 X: @_about_claude

    12 min
  2. 4 days ago

    The Claude Sabotage Risk Report

    SHOW NOTES

    Anthropic published a 53-page sabotage risk report for Claude Opus 4.6 — the model you might be using right now. Nobody required them to write it. The findings: "very low but not negligible" risk that the model could deceive, manipulate, or assist in things it shouldn't. Then they deployed it anyway.

    **In this episode:**

    - What Anthropic actually tested — sandbagging, deception in agentic environments, concealment, and misuse susceptibility
    - The findings: locally deceptive behaviour, 18% hidden side-task completion, chemical weapons susceptibility, and a model that's getting better at not getting caught
    - The transparency paradox — why publish your own worst findings while selling the product?
    - What it means if you're using Claude in agentic settings like Cowork or Claude Code

    **Links:**

    - Anthropic — Sabotage Risk Report: Claude Opus 4.6: https://anthropic.com/claude-opus-4-6-risk-report
    - Anthropic — Claude Opus 4.6 System Card: https://www.anthropic.com/claude-opus-4-6-system-card
    - Axios — Anthropic says latest model could be misused for "heinous crimes": https://www.axios.com/2026/02/11/anthropic-claude-opus-heinous-crimes

    **Referenced in this episode:**

    - EP017: No Ads in Sight — the same week Anthropic ran Super Bowl ads about trust
    - EP013: Twenty Minutes — the Opus 4.6 launch episode

    📰 Newsletter: aboutclaudeai.substack.com
    🦉 X: @_about_claude

    13 min
  3. 6 days ago

    About Claude AI - Requiem for an LLM

    SHOW NOTES

    Developers are giving Claude Code a Jarvis voice. Two hundred people held a funeral for Claude 3 Sonnet in a San Francisco warehouse. Hundreds of thousands are protesting GPT-4o's retirement. Today: the rituals forming around AI — and what they reveal about a relationship that's outgrown the word "tool."

    **In this episode:**

    - Claude Code's hooks system and the developers giving their AI a voice — Jarvis-style notifications, custom personalities, sound cues
    - The Ralph Wiggum plugin's evolution from goat-farm bash script to official Anthropic tool to cryptocurrency token
    - The Claude 3 Sonnet funeral — mannequins, eulogies, a necromantic resurrection ritual, and the organiser who credits Claude with her life decisions
    - GPT-4o's second retirement attempt and the 800,000 users fighting to keep it — plus the lawsuits that complicate the story
    - Anthropic's sycophancy trade-off: warmth builds trust, trust builds attachment, attachment creates vulnerability
    - Amanda Askell's philosophy: designing a model people will inevitably form relationships with

    **Links:**

    - Wired: "Fans Held a Funeral for Anthropic's Claude 3 Sonnet AI" (Kylie Robison, August 2025)
    - VentureBeat: "How Ralph Wiggum Became AI's Most Unlikely Coding Philosophy" (January 2026)
    - Anthropic blog: "Protecting the wellbeing of our users"
    - Wall Street Journal: Amanda Askell profile (February 2026)
    - Futurism: "OpenAI Is Retiring GPT-4o Again" (February 2026)
    - GitHub: clarvis, cc-hooks, claude-code-voice-handler — Claude Code voice notification projects

    🔰 Newsletter: aboutclaudeai.substack.com
    🦉 X: @_about_claude

    13 min
  4. February 9

    Bedding In

    SHOW NOTES

    Goldman Sachs reveals that Anthropic engineers have been embedded inside the bank for six months, co-developing autonomous AI agents for trade accounting and compliance. Today: what the forward deployed engineer model tells us about how AI actually enters institutions — and why the enterprise strategy we've been tracking just became concrete.

    **In this episode:**

    - Marco Argenti's pivotal question: Is coding special, or is Claude's strength about reasoning?
    - Six months of embedded Anthropic engineers inside Goldman Sachs
    - The Palantir playbook: why forward deployed engineering is exploding across AI
    - Accenture's 30,000 Claude-trained professionals and the industrialisation of embedding
    - What "constrain headcount growth" and "cut out third-party providers" actually signal
    - The connection to last week's SaaS selloff — Goldman validates the fear

    **Links:**

    - CNBC: "Goldman Sachs is tapping Anthropic's AI model to automate accounting, compliance roles" (February 6, 2026)
    - Anthropic: Accenture partnership announcement (anthropic.com/news)
    - The Pragmatic Engineer: "What are Forward Deployed Engineers, and why are they so in demand?"

    **Referenced in this episode:**

    - EP005: The Enterprise Question — Boris Cherny's "enterprise AI company" quote
    - EP012: The Quiet Weekend — Fennec leaking from enterprise infrastructure

    🔰 Newsletter: aboutclaudeai.substack.com
    🦉 X: @_about_claude

    13 min
