Please support this podcast by checking out our sponsors:
- Discover the Future of AI Audio with ElevenLabs - https://try.elevenlabs.io/tad
- Build Any Form, Without Code with Fillout. 50% extra signup credits - https://try.fillout.com/the_automated_daily
- KrispCall: Agentic Cloud Telephony - https://try.krispcall.com/tad

Support The Automated Daily directly:
Buy me a coffee: https://buymeacoffee.com/theautomateddaily

Today's topics:
- Claude dethrones ChatGPT in US - Anthropic’s Claude surged to the #1 free app in the U.S. App Store, overtaking ChatGPT as consumers react to AI ethics and defense headlines.
- Pentagon deals split AI vendors - Pentagon negotiations with Anthropic reportedly collapsed over language on lawful surveillance, while OpenAI moved ahead with a classified-network deal that reiterates bans on mass surveillance and autonomous lethal weapons.
- OpenAI raises $110B mega-round - OpenAI is pursuing a massive $110B funding round at an $840B fully diluted valuation, with Amazon, Nvidia, and SoftBank committing tens of billions to scale compute and products.
- Coding enters the agent era - Cursor’s Michael Truell says AI-assisted development is entering a third era: autonomous cloud agents producing reviewable artifacts, with ~35% of internal merged PRs created by agents.
- Build APIs for AI clients - API builders are being urged to design “AI-first” interfaces: programmatic docs endpoints like /api/help, non-destructive writes with candidate review flows, and strict scrutiny of risky fallback code.
- Cloudflare launches Agents SDK platform - Cloudflare introduced Cloudflare Agents (npm i agents), pitching a full stack for agentic apps on Workers, Durable Objects, Workflows, and AI Gateway, with cost controls like CPU-time billing and WebSocket hibernation.
- Vietnam enacts comprehensive AI law - Vietnam’s AI law took effect March 1, requiring labeling of AI-generated content, disclosure when users interact with AI, and human oversight—echoing EU AI Act-style risk controls.
- Australia threatens AI age-gate blocks - Australia’s eSafety regulator signaled it may target app stores and search engines as “gatekeepers” to block AI services that don’t implement age assurance ahead of March 9 restrictions.
- AI infrastructure boom strains power - A broader “capex crunch” is accelerating: hyperscalers and AI labs are pouring hundreds of billions into data centers, GPUs, and power, raising grid, construction, and environmental concerns.
- Google bets on iron-air batteries - Google announced a Minnesota data center tied to 1.9GW of renewables and a 30GWh long-duration Form Energy iron-air battery system, aiming to ride through multi-day renewable lulls.
- Nvidia invests in silicon photonics - Nvidia will invest $4B split between Lumentum and Coherent to secure optical networking and laser component capacity, targeting “gigawatt-scale AI factories” enabled by silicon photonics.
- Lasers, drones, and future warfare - Israel says it used its Iron Beam laser air defense operationally for the first time, while the U.S. reported first combat use of one-way attack drones—signs that directed energy and cheap loitering munitions are reshaping air defense.
- Humanoid home robots still distant - Robotics researchers warn general-purpose humanoid home robots aren’t close in 2026, citing fragile hardware, messy home environments, and—most of all—training-data scarcity compared with self-driving cars.
- SpaceX weighs a confidential IPO - SpaceX is reportedly considering a confidential IPO filing as soon as March, potentially aiming for a June listing that could become the largest IPO ever by funds raised and valuation.
- Nvidia-led push for AI-native 6G - At MWC, Nvidia and major telecom partners backed open, secure, AI-native 6G platforms, positioning AI-RAN and software-defined networks as the backbone for “physical AI” at scale.

Episode Transcript

Claude dethrones ChatGPT in US

Let’s start with the consumer-facing ripple effect. Anthropic’s Claude has climbed to the top spot for free apps in Apple’s U.S. App Store, pushing ChatGPT to number two. Reporting ties the surge to backlash after Sam Altman publicly discussed OpenAI working with the U.S. Department of Defense on deployments inside classified networks. Anthropic’s CEO, Dario Amodei, has been vocal about drawing hard lines—specifically against mass domestic surveillance and fully autonomous weapons. Whether you agree with Anthropic or not, the striking part is that everyday users appear to be voting with downloads. Anthropic says free users are up sharply since January, with daily signups setting records and paid subscribers more than doubling this year.

Pentagon deals split AI vendors

Underneath that popularity swing is a much bigger policy and procurement story. Talks between the Pentagon and Anthropic reportedly came down to last-minute contract language, especially around what “lawful surveillance” could mean in practice. Negotiations then collapsed, and Defense Secretary Pete Hegseth publicly labeled Anthropic a security risk—an extraordinary move against a major U.S. tech company. Within hours, OpenAI said it had reached a deal to supply AI to classified military networks, and Altman emphasized that OpenAI’s contract still prohibits mass surveillance and autonomous lethal weapons—calling them core safety principles that the Pentagon accepted. One detail worth watching: reports also describe internal industry blowback, with employees across AI companies urging leaders not to be played against each other by shifting government demands.
If this becomes the new normal—public pressure campaigns plus contract brinkmanship—it could reshape how AI firms write policies, and how they prove compliance.

OpenAI raises $110B mega-round

Now to the money fueling all of it. OpenAI is also raising a new funding round targeting $110 billion, valuing the company at roughly $730 billion pre-money and about $840 billion fully diluted. The headline investors include Amazon, Nvidia, and SoftBank. Amazon alone is slated to put in up to $50 billion, and OpenAI says it will use two gigawatts of compute capacity powered by Amazon’s Trainium chips. There’s also an important structural point: AWS becomes the exclusive third-party cloud provider for OpenAI Frontier—its enterprise platform for building and managing AI agents—while Microsoft remains the exclusive cloud provider for OpenAI APIs and continues hosting first-party products on Azure. In other words, OpenAI is slicing its cloud relationships by product line, not picking one winner for everything.

Coding enters the agent era

This all feeds into what developers are actually doing day to day—because the development workflow is changing fast. Cursor’s Michael Truell argues we’re entering a “third era” of AI-assisted software building. First came autocomplete that excelled at repetitive code. Then came synchronous agents where you steer the model step by step. The third era, he says, looks more like building a software factory: fleets of autonomous agents running in the cloud, iterating for hours, running tests, and returning artifacts you can review—logs, recordings, previews—not just a diff. Cursor claims around 35% of its internally merged pull requests are now created by agents working autonomously on separate cloud machines. If that number holds up as the tooling spreads, it’s a genuine shift: engineers spending less time typing code, and more time framing tasks, setting constraints, and reviewing outcomes.
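That "reviewable artifacts" idea can be made concrete with a small sketch. To be clear, this is a hypothetical illustration, not Cursor's actual system: the AgentArtifact class, its fields, and the gate logic are all invented for the example. The point it shows is the workflow shape, where an agent run hands back a bundle (diff, logs, test results) and nothing merges without passing tests plus an explicit approval.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the "reviewable artifact" workflow: an autonomous
# agent run returns a bundle instead of silently merging its change.

@dataclass
class AgentArtifact:
    task: str                                  # the task the engineer framed
    diff: str                                  # the proposed code change
    logs: list = field(default_factory=list)   # run logs, test output, etc.
    tests_passed: bool = False
    status: str = "pending-review"             # nothing lands without review

def review(artifact: AgentArtifact, approve: bool) -> AgentArtifact:
    """Gate an agent-produced change: merge only if tests pass AND a human approves."""
    if approve and artifact.tests_passed:
        artifact.status = "merged"
    else:
        artifact.status = "rejected"
    return artifact

# An agent finishes its cloud run and hands back a reviewable artifact:
art = AgentArtifact(
    task="add retry logic to the HTTP client",
    diff="+ retries = 3",
    logs=["ran 42 tests", "all passed"],
    tests_passed=True,
)
print(review(art, approve=True).status)  # -> merged
```

The design choice this models is the one the episode describes: the engineer's job shifts from typing the diff to framing the task and acting as the approval gate over what the agent returns.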
Build APIs for AI clients

And if you’re building systems for agents rather than just humans, the plumbing matters—especially APIs. Nate Meyvis shared an “AI-first” set of notes that boils down to something refreshingly practical: if your product needs an API, build the API, because AI tools are unusually good at accelerating that work. His recommendations include exposing documentation programmatically—think an endpoint like /api/help—so AI clients can discover capabilities without you stuffing long docs into a context window. He also argues for safer, non-destructive designs for AI-driven actions. For example, let write operations create “candidates” that require review before anything becomes official. And he flags a subtle risk: AI-generated implementations are often too eager to add fallbacks. Those can hide bugs or accidentally open security holes, so the advice is to review carefully—and even use a second AI pass specifically to hunt for dangerous fallback behavior.

Cloudflare launches Agents SDK platform

On the platform side, Cloudflare is jumping into this agentic moment with “Cloudflare Agents,” an SDK and toolkit for building agentic apps on Cloudflare’s stack. The pitch is a full workflow: collect input via chat, email, or voice; reason with models either on Workers AI or through external providers via AI Gateway; manage state with Durable Objects and orchestration via Workflows; and then take actions through tools like browser rendering, vector search, or databases. Cloudflare’s cost angle is notable: Workers charges for CPU time rather than wall-clock time, which matters when agents spend a lot of time waiting on APIs, LLM calls, or humans. It’s an attempt to make long-running, tool-using agents feel less like a runaway meter.

Vietnam enacts comprehensive AI law

Regulation is also tightening, and today’s date matters here.
Vietnam’s new AI law took effect yesterday, March 1st, making it the first Southeast Asian country with a comprehensive AI framework. The law focuses heavily on generative AI risk, requires human oversight, and mandates labeling for AI-generated content—like deepfakes—when it’s not clearly distinguishable from real media. It also requires services to tell users when they’re interacting with an AI system rather than a human. Vietnam is also pairing governance with industrial policy: plans include a