The Automated Daily - Tech News Edition

Welcome to 'The Automated Daily - Tech News Edition', your ultimate source for a streamlined and insightful daily news experience.

  1. Neurons on a chip play Doom & Apple reshuffles design leadership - Tech News (Mar 9, 2026)

    18H AGO


    Please support this podcast by checking out our sponsors:
    - Invest Like the Pros with StockMVP - https://www.stock-mvp.com/?via=ron
    - Consensus: AI for Research. Get a free month - https://get.consensus.app/automated_daily
    - Prezi: Create AI presentations fast - https://try.prezi.com/automated_daily

    Support The Automated Daily directly: Buy me a coffee: https://buymeacoffee.com/theautomateddaily

    Today's topics:
    - Neurons on a chip play Doom - Cortical Labs linked about 200,000 living human neurons to a silicon chip and had the culture control Doom. It’s a striking demo for neuroscience research, learning studies, and alternative computing.
    - Apple reshuffles design leadership - Apple elevated Steve Lemay and Molly Anderson on its executive leadership page after design leadership upheaval. The move hints at a reset in product identity, software design direction, and public-facing launches.
    - AI coding jobs face uncertainty - A widely shared reflection argues AI coding agents may shrink software engineering headcount, hitting junior roles first. The near-term shift could turn more engineering work into supervision, review, and system-level judgment.
    - OpenAI and Anthropic feud deepens - Sam Altman and Dario Amodei’s rivalry escalated amid Pentagon contracting drama, with disputes over surveillance and autonomous weapons. The feud risks weakening shared safety standards and shaping AI regulation narratives.
    - AI rewrites reshape open source - Commentary suggests AI makes clean-room reimplementations faster, intensifying debates about copyright, interfaces, and copyleft licenses like the GPL. The big question: does open source fragment into permissive ‘shadow’ alternatives?
    - VC booms while teams shrink - Global venture funding spiked while startups increasingly operate with smaller headcounts, especially AI-native firms. The trend points to compute replacing labor, challenging expectations of a hiring rebound.
    - Governments push tougher age checks - Countries from Australia to parts of Europe, Brazil, and the U.S. are moving toward stricter online age assurance for social media, AI chatbots, and adult sites. Improved verification tools raise new privacy, bias, and enforcement concerns.
    - Japan approves iPS cell therapies - Japan cleared two induced pluripotent stem cell therapies for Parkinson’s and severe heart failure under conditional approvals. It’s a major milestone for iPS medicine moving from lab promise toward real-world clinical use.
    - Satellites track bridge movement early - A Nature Communications study found satellite radar can detect millimeter-scale bridge deformation that may signal trouble early. Broad MT-InSAR coverage could complement sparse sensor networks and inconsistent visual inspections.
    - Space maps and asteroid deflection - Astronomers built the largest 3D map yet of Lyman-alpha light from 9–11 billion years ago, revealing faint galaxies and intergalactic gas. Separately, DART data shows asteroid impacts can measurably nudge an object’s solar orbit—useful for planetary defense.

    Episode Transcript

    Neurons on a chip play Doom

    Let’s start with that Doom moment. Australian startup Cortical Labs demoed what it calls a biological computer: a silicon chip connected to roughly two hundred thousand living human neurons. Those neurons were fed signals representing what was happening on-screen, and their responses were interpreted as game controls. The result looks more like a beginner than a gamer—but the point isn’t entertainment. It’s a vivid proof that living neural networks can be coupled to software in a repeatable way, which could become a useful tool for studying learning, testing drugs, and exploring new computing approaches that don’t look like today’s silicon-only machines.

    Apple reshuffles design leadership

    Switching to Apple, there’s an interesting leadership signal coming out of Cupertino. Apple updated its executive leadership page to add designers Steve Lemay and Molly Anderson, effectively giving them a more public, top-tier profile after recent upheaval around design leadership. Commentary around the move frames it as Apple trying to tighten its identity again—after a few years of mixed reception across big bets, marketing choices, and ongoing worries about software polish. The larger takeaway: Apple may be preparing to make design leadership more visible, and more accountable, at a time when its product narrative needs steadier footing.

    AI coding jobs face uncertainty

    Now to the jobs question that keeps getting louder in engineering circles. One software engineer’s reflection making the rounds compares the confidence of 2021—when software careers felt like a safe bet—to 2026, where the future looks less certain as AI coding agents improve. The argument is that entry-level and mid-level roles could take the first hit, while senior engineers shift toward guiding and auditing AI output. What makes this view sting is the claim that the tools aren’t just writing code faster—they’re getting better at the unglamorous work too, like understanding old systems, fixing bugs, and keeping things running. Even if you don’t buy the most pessimistic version, it’s a clear sign that “software engineer” as a job title may be changing faster than companies, universities, and career ladders are ready for.

    OpenAI and Anthropic feud deepens

    That theme connects to a broader fight over what AI should be allowed to do—and who gets to set the rules. OpenAI CEO Sam Altman and Anthropic CEO Dario Amodei are now openly clashing, not just over products, but over ethics, regulation, and national-security work. The latest flashpoint involves Pentagon relationships: Anthropic reportedly refused to loosen certain red lines around surveillance and autonomous weapons, and says it paid a price for that stance. OpenAI, meanwhile, expanded its defense ties, and the rivalry spilled into leaked memos, public jabs, and bruised reputations. Why it matters: when two of the most influential labs can’t coordinate, it becomes harder to present consistent safety norms—and easier for politics to steer the whole field.

    AI rewrites reshape open source

    On the legal and cultural side of software, there’s a growing argument that today’s outrage about AI-assisted “rewrites” is missing historical context. Developers have reimplemented software for decades—sometimes to improve it, sometimes to compete, and often to create compatible alternatives without copying code directly. The new factor with AI isn’t that reimplementation suddenly became legal; it’s that it got dramatically cheaper and faster. And that speed is feeding a second debate: whether copyleft licenses like the GNU GPL lose leverage when teams can quickly recreate similar functionality under permissive licenses. Put simply, AI may make “we’ll just rebuild it” a more common answer—and that could reshape how open-source ecosystems consolidate, fragment, and fund themselves.

    VC booms while teams shrink

    Meanwhile, the money side of tech is sending a weird signal: more capital, fewer people. A recent analysis points to venture funding surging—especially into a small number of AI leaders—while startup headcounts and hiring stay depressed compared with a few years ago. The suggestion is that it’s not just a temporary cycle or a data quirk: companies are increasingly substituting compute for labor. If that’s right, it helps explain why the AI boom hasn’t translated into broad-based hiring—and why “revenue per employee” is becoming a defining metric for modern startups, for better or worse.

    Governments push tougher age checks

    Governments are also pushing harder to control who can access what online. After Australia moved toward restricting teen access to social media, regulators in Europe, Brazil, and multiple U.S. states are exploring stronger age checks—not only for social platforms, but also for AI chatbots and adult sites. The pitch is that age-assurance tools are improving and getting cheaper, making large-scale gating more plausible than it used to be. The pushback is predictable but serious: privacy risks, potential bias in face-based estimation, and messy edge cases right around legal cutoffs. The likely outcome is not one global standard, but a patchwork that platforms will still have to implement—because regulators increasingly believe enforcement is possible.

    Japan approves iPS cell therapies

    In medical science, Japan just took a step that could end up being historic. The country approved two therapies that use induced pluripotent stem cells—one targeting Parkinson’s disease, and another aimed at severe heart failure. These are being described as the first commercially approved products of their kind using iPS cells. The approvals are conditional and time-limited, based on smaller datasets than typical large drug trials, but they still mark a major transition from experimental promise to real-world use. If these therapies hold up as more patients receive them, it could accelerate investment and confidence in regenerative medicine that doesn’t just manage symptoms, but tries to repair damaged tissue.

    Satellites track bridge movement early

    On infrastructure, researchers are showing how satellites could help spot bridge trouble early. Using satellite radar imaging, a global study looked at hundreds of long-span bridges and found that millimeter-scale movement—often invisible to inspectors—can be detected and tracked over time. The study also suggests many bridges are aging into higher-risk territory, and that continuous on-structure sensors are still rare worldwide. The appeal here is coverage: satellites can revisit regularly and monitor many structures at once, potentially helping agencies prioritize inspections and maintenance before problems become emergencies.

    Space maps and asteroid deflection

    And finally, two space updates that both come down to measurement at absurdly fine
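    The bridge-monitoring idea above boils down to fitting a trend to each structure's displacement time series and flagging sustained movement. Here is a minimal sketch of that logic; the bridge names, series, and the 2 mm/yr threshold are made up for illustration and are not from the study.

    ```python
    # Sketch: flag bridges whose InSAR-style displacement series shows a
    # sustained trend. All data and thresholds here are illustrative.

    def displacement_rate(times_yr, disp_mm):
        """Least-squares slope (mm/year) of a displacement time series."""
        n = len(times_yr)
        mean_t = sum(times_yr) / n
        mean_d = sum(disp_mm) / n
        num = sum((t - mean_t) * (d - mean_d) for t, d in zip(times_yr, disp_mm))
        den = sum((t - mean_t) ** 2 for t in times_yr)
        return num / den

    def flag_bridges(series, threshold_mm_per_yr=2.0):
        """Return names of bridges moving faster than the threshold."""
        return [name for name, (t, d) in series.items()
                if abs(displacement_rate(t, d)) > threshold_mm_per_yr]

    # One stable bridge, one subsiding about 5 mm/yr over two years.
    series = {
        "bridge_A": ([0.0, 0.5, 1.0, 1.5, 2.0], [0.1, -0.2, 0.0, 0.2, -0.1]),
        "bridge_B": ([0.0, 0.5, 1.0, 1.5, 2.0], [0.0, -2.4, -5.1, -7.4, -10.0]),
    }
    ```

    The appeal of the satellite approach is exactly this: the same cheap per-structure computation can run over thousands of bridges at once.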

    8 min
  2. DART nudges an asteroid’s orbit & Social media design on trial - Tech News (Mar 8, 2026)

    1D AGO


    Please support this podcast by checking out our sponsors:
    - Effortless AI design for presentations, websites, and more with Gamma - https://try.gamma.app/tad
    - Invest Like the Pros with StockMVP - https://www.stock-mvp.com/?via=ron
    - Discover the Future of AI Audio with ElevenLabs - https://try.elevenlabs.io/tad

    Support The Automated Daily directly: Buy me a coffee: https://buymeacoffee.com/theautomateddaily

    Today's topics:
    - DART nudges an asteroid’s orbit - NASA’s DART impact has now been confirmed to slightly shift an asteroid system’s orbit around the Sun, using global stellar occultation data. Planetary defense, kinetic impact, real-world validation, Science Advances.
    - Social media design on trial - A Los Angeles case targets platform design—likes, autoplay, infinite scroll, recommendations—arguing these engagement loops harmed teen mental health. Section 230, product liability, negligence, Meta, Google, bellwether trial.
    - Ukraine’s armed ground robots expand - Ukraine is scaling weaponised uncrewed ground vehicles for ambushes, base defense, and high-risk assaults, while keeping humans in the firing decision. UGVs, partial autonomy, AI swarms, ethics, robot-on-robot warfare.
    - Ukraine’s drone interceptors draw interest - Ukraine’s low-cost interceptor drones built to stop Shahed-style attacks are attracting U.S. and Gulf attention as missile stocks tighten. Air defense economics, Patriots, export ban, radar integration, operational expertise.
    - Samsung considers on-phone vibe coding - Samsung says it’s exploring ‘vibe coding’ on Galaxy phones—AI-generated code from plain language—to let users create and customize experiences. On-device AI, consumer software creation, mobile personalization, Galaxy AI.

    Episode Transcript

    DART nudges an asteroid’s orbit

    NASA’s asteroid-smacking experiment just got an important upgrade from “it worked” to “we can measure what it did in deep space.” New research reports that DART—the mission that slammed into the asteroid moonlet Dimorphos—did more than change Dimorphos’s orbit around its partner asteroid. It also slightly shifted the pair’s orbit around the Sun. The number is tiny—think fractions of a second when translated into orbital timing—but the meaning is enormous. Planetary defense is all about early action: a small push, applied years ahead, can become a large miss later on. What’s also notable is how this was confirmed: volunteer astronomers around the world tracked the asteroids as they briefly blocked starlight, letting researchers calculate the change with high precision. It’s a reminder that “big science” is increasingly a mix of space agencies and global, distributed observation.

    Social media design on trial

    In Los Angeles, a trial is testing a legal idea that could ripple across the entire social media business model: that companies might be liable not for what users post, but for how platforms are designed. The plaintiff says she was pulled into compulsive use at a young age, and that features like algorithmic recommendations, endless feeds, autoplay, and the dopamine-like rhythm of notifications made her mental health worse over time. TikTok and Snapchat have already settled in this cluster of cases, leaving Meta and Google to fight it out in court—with Mark Zuckerberg even taking the stand. The key twist is how the plaintiffs are trying to sidestep Section 230, the law that usually shields platforms from liability for user content. Their argument is essentially: this isn’t about posts, it’s about product design—engagement mechanics that allegedly created foreseeable harm, especially for kids and teens. If juries and judges start treating interface choices like safety-relevant product decisions, it could force redesigns in recommendation systems, defaults, and parental controls—whether the industry likes it or not.

    Ukraine’s armed ground robots expand

    Ukraine’s war is adding another layer to the drone era: armed robots on the ground. Ukrainian units say uncrewed ground vehicles—some fitted with machine guns, others used as explosive “kamikaze” platforms—are increasingly part of frontline tactics. What makes this strategically interesting is the “why now.” Aerial drones have widened the so-called kill zone, making it riskier for soldiers to move, resupply, or even hold positions close to the front. Ground robots can take on some of the most dangerous tasks—probing, ambushing, covering approaches—at a time when manpower is under intense pressure. But there’s a crucial boundary commanders keep emphasizing: most of these systems aren’t fully autonomous in the lethal sense. They may help navigate or detect targets, yet a human still decides when to fire. That’s partly practical—combat is messy and identification is hard—but it’s also legal and ethical, given international humanitarian law and the consequences of a misfire. Russia is fielding its own combat ground robots too, which raises the prospect of direct robot-on-robot encounters. And as both sides scale production, the next push is resilience—robots that keep functioning when communications are jammed, or can safely return if they lose contact. That shift could change not just tactics and casualty rates, but the global debate over where lethal autonomy lines should be drawn.

    Ukraine’s drone interceptors draw interest

    Still in Ukraine, another battlefield innovation is starting to look like an exportable technology—at least in theory. Ukrainian manufacturers say they’ve built low-cost interceptor drones designed to shoot down Shahed-style attack drones, and interest is growing from the U.S. and several Gulf states. The timing is no accident. With expensive missile interceptors under strain globally—especially as conflicts overlap—cheap, mass-produced counters are suddenly very attractive. Ukraine’s pitch, as described by President Zelenskyy, is even more strategic: cooperation that could function like a swap, with Ukraine offering interceptors and hard-won operational experience in exchange for Patriot missiles it needs against threats that drones can’t easily handle, like ballistic missiles. There’s a catch: Ukraine currently has a wartime ban on exporting weapons, so any deal would require a new, regulated framework. And even if exports were allowed, effectiveness isn’t just the drone itself—it’s radar integration, trained crews, and the know-how to run the whole system under real attack conditions. In other words, Ukraine isn’t only building hardware; it’s building a playbook. If that playbook travels, it could reshape air defense economics well beyond this war.

    Samsung considers on-phone vibe coding

    Samsung is flirting with an idea that sounds a little futuristic, but also oddly inevitable: bringing “vibe coding” to Galaxy phones—AI-assisted app creation from natural-language prompts. The company hasn’t promised a release or a timeline, but it’s publicly acknowledging interest in letting people generate small bits of software—or at least software-like customizations—directly on a phone. If that becomes real, the impact isn’t about turning everyone into a professional developer. It’s about collapsing the distance between “I wish my phone did this” and “my phone now does this,” without hunting through app stores or menus. The big question will be how safe and controlled that creativity is. User-made functionality can be empowering, but it can also introduce security and privacy risks if it’s not carefully sandboxed and explained. Still, it’s a strong signal of where mobile is going: not just AI features, but AI as a way to shape the device itself.
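    The DART segment's claim that "a small push, applied years ahead, can become a large miss later on" is easy to make concrete with back-of-envelope arithmetic. The delta-v, lead time, and the roughly 3x secular amplification for along-track nudges on near-circular orbits are illustrative assumptions for this sketch, not figures from the DART papers.

    ```python
    # Back-of-envelope sketch: how a tiny velocity change, applied years
    # before a predicted encounter, grows into a large along-track miss.
    # Numbers are illustrative, not DART's measured values.

    SECONDS_PER_YEAR = 365.25 * 24 * 3600

    def miss_distance_km(delta_v_mm_s, lead_time_years, secular_factor=3.0):
        """Approximate along-track displacement from a small along-track
        delta-v. For near-circular orbits the drift accumulates roughly
        3x faster than naive delta-v * time (the secular term in the
        Clohessy-Wiltshire relative-motion equations)."""
        delta_v_m_s = delta_v_mm_s / 1000.0
        t_seconds = lead_time_years * SECONDS_PER_YEAR
        return secular_factor * delta_v_m_s * t_seconds / 1000.0

    # A 0.3 mm/s nudge applied 10 years ahead drifts to roughly 284 km.
    ```

    That scaling is the whole logic of kinetic deflection: the earlier the push, the smaller the push needs to be.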
    Subscribe to edition-specific feeds:
    - Space news: Apple Podcast (English), Spotify (English), RSS (English, Spanish, French)
    - Top news: Apple Podcast (English, Spanish, French), Spotify (English, Spanish, French), RSS (English, Spanish, French)
    - Tech news: Apple Podcast (English, Spanish, French), Spotify (English, Spanish, French), RSS (English, Spanish, French)
    - Hacker news: Apple Podcast (English, Spanish, French), Spotify (English, Spanish, French), RSS (English, Spanish, French)
    - AI news: Apple Podcast (English, Spanish, French), Spotify (English, Spanish, French), RSS (English, Spanish, French)

    Visit our website at https://theautomateddaily.com/
    Send feedback to feedback@theautomateddaily.com
    Find us on YouTube, LinkedIn, and X (Twitter).

    7 min
  3. One-shot Alzheimer’s plaque cleanup & AI MRI Alzheimer’s prediction - Tech News (Mar 7, 2026)

    2D AGO


    Please support this podcast by checking out our sponsors:
    - KrispCall: Agentic Cloud Telephony - https://try.krispcall.com/tad
    - Discover the Future of AI Audio with ElevenLabs - https://try.elevenlabs.io/tad
    - Prezi: Create AI presentations fast - https://try.prezi.com/automated_daily

    Support The Automated Daily directly: Buy me a coffee: https://buymeacoffee.com/theautomateddaily

    Today's topics:
    - One-shot Alzheimer’s plaque cleanup - Washington University researchers used engineered astrocytes as “super cleaners” to remove amyloid beta in mice, suggesting a potential one-time Alzheimer’s therapy alternative to repeated monoclonal antibody infusions.
    - AI MRI Alzheimer’s prediction - Worcester Polytechnic Institute reports an AI model reading MRI scans can predict Alzheimer’s with high accuracy, highlighting hippocampus volume loss and potential earlier detection for patients and clinicians.
    - AI chips export approval rules - The Trump administration is considering rules requiring Commerce Department approval for overseas shipments of advanced AI chips, a move that could reshape global supply chains for Nvidia, AMD, and major buyers.
    - Who governs AI: CEOs or law - Anthropic’s dispute with the U.S. Defense Department spotlights AI governance tensions, where corporate policies on surveillance and weapons may function like de facto regulation without democratic accountability.
    - Social media design on trial - A Los Angeles case aims to treat algorithmic features like infinite scroll and autoplay as product design choices, challenging Section 230 boundaries and potentially forcing platform redesigns for teen safety.
    - Critical minerals become security issue - The U.N. warns demand for lithium, cobalt, nickel and other critical minerals could surge by 2030 and 2040, pushing supply chains into the center of geopolitics, trade policy, and conflict-risk debates.
    - Robot ground vehicles in Ukraine - Ukraine is expanding weaponized uncrewed ground vehicles as drones widen the battlefield kill zone, raising new questions about partial autonomy, operator control, and future robot-on-robot combat.
    - DART nudges an asteroid’s orbit - New research confirms NASA’s DART impact not only altered an asteroid moonlet’s local orbit, but also measurably changed its path around the Sun—an important real-world datapoint for planetary defense.
    - Flying taxi scale-up in China - AutoFlight’s large eVTOL prototype signals how China’s “low-altitude economy” could evolve from delivery drones toward passenger aircraft, though safety certification and infrastructure remain major hurdles.
    - EV charging claims jump forward - BYD showcased next-generation battery and ultra-fast charging claims meant to reduce range anxiety and charging downtime, potentially pressuring the broader EV market if results hold up in everyday conditions.

    Episode Transcript

    One-shot Alzheimer’s plaque cleanup

    Let’s start with Alzheimer’s research, because we got two developments that rhyme in a useful way: one is about clearing the disease’s hallmark proteins, and the other is about spotting risk earlier. First, researchers at Washington University School of Medicine reported a striking result in Science: they re-engineered astrocytes—cells that normally support neurons—so they recognize and swallow amyloid beta, the protein that forms Alzheimer’s-related plaques. The twist is they borrowed a playbook from cancer therapy: a receptor design that helps immune cells “lock on” to a target. Here, the target is amyloid in the brain. In mouse models, a single injection given before plaques typically form prevented plaque buildup for months. And in older mice already loaded with plaques, that same one-time approach cut plaque levels by about half. The big reason this is turning heads is practicality: today’s anti-amyloid antibody treatments are typically a repeating commitment. A durable, one-and-done strategy—if it ever proves safe and effective in humans—could radically reduce treatment burden. The researchers are also careful to say this is early, and the safety and targeting questions are not optional homework. Still, it’s a notable new direction: instead of repeatedly sending in cleanup crews, you try to upgrade the brain’s own staff.

    AI MRI Alzheimer’s prediction

    On the detection side, researchers at Worcester Polytechnic Institute say they trained a machine-learning model to predict Alzheimer’s from MRI scans with very high accuracy, by picking up subtle shrinkage patterns across many brain regions. One standout finding: early volume loss in the right hippocampus showed up consistently, and the team also described differences between men and women in where the earliest changes appear. The headline here isn’t that AI “solves” Alzheimer’s—far from it—but that better early warning could buy people time: time to plan, to enroll in studies, and to use treatments when they’re most likely to help.

    AI chips export approval rules

    Now to AI policy and power, where two stories point to the same pressure point: who actually gets to decide how advanced AI is used—and where it’s allowed to go. First, Bloomberg reports the Trump administration is weighing draft rules that would require U.S. government approval for shipments of advanced AI chips to basically anywhere outside the United States. If this becomes policy, it would expand oversight from targeted restrictions to something closer to continuous gatekeeping of global sales. Why it’s interesting is the second-order effect: approvals that are slower or unpredictable can push international buyers to redesign plans around non‑U.S. suppliers over time, even if American chips remain best-in-class. For the U.S., that’s a delicate trade: tighten controls to protect security interests, but risk shrinking influence over the very supply chains you’re trying to steer.

    Who governs AI: CEOs or law

    In parallel, there’s a brewing argument about governance itself. A piece focused on Anthropic describes the company’s dispute with the U.S. Department of Defense as more than contract drama—framing it as a test of whether AI firms can effectively set policy boundaries that elected governments can’t easily override. Anthropic’s CEO has voiced concerns about domestic surveillance and autonomous weapons, and critics respond with a blunt question: if these decisions are made inside boardrooms, what accountability does the public actually have? This isn’t just about one company. Across the industry, stated “red lines” can shift when competition heats up or revenue opportunities expand. So the larger takeaway is that we’re still deciding whether the rules of AI use will come primarily from law and oversight—or from corporate principles that can be rewritten on short notice.

    Social media design on trial

    Staying with accountability, a major U.S. court case is testing a new way to hold social media platforms responsible—without focusing on what users posted. In Los Angeles, a trial is putting Meta and Google under the microscope with an argument that the harm comes from product design: the engagement loops, the endless feeds, the autoplay, the recommendation engines, and the nudges that keep people—especially kids—coming back. The plaintiff says these features helped drive compulsive use that worsened serious mental-health struggles. The legal significance is how the case tries to route around Section 230 protections. Instead of claiming the platforms are liable for third-party content, the claim is essentially: you built a product with known risk, and you didn’t do enough to prevent foreseeable harm. A judge allowed it to reach a jury, and it’s being treated as a bellwether for a much larger set of similar claims. If that approach holds up, it could change the incentives for product teams everywhere. The question would no longer be only “Is the content allowed?” but also “Is the interface itself safe enough, especially for minors?”

    Critical minerals become security issue

    Next, the geopolitics of the modern gadget—and the modern military. At the U.N. Security Council, the U.N.’s political chief warned demand for critical minerals could surge dramatically over the next decade and beyond, as these materials underpin everything from phones and data centers to energy storage and weapons systems. The meeting cast mineral supply chains as a security issue, not just an economic one. This matters because we’re watching resource dependencies harden into strategy. The backdrop is U.S.-China competition and tighter trade constraints, with governments now talking about diversification and allied sourcing—while countries that actually mine these materials are pushing back, saying “secure supply” can’t mean ignoring governance, corruption, or conflict financing. So the story isn’t just about digging more stuff out of the ground. It’s about whether the next phase of the energy transition can be built without repeating old mistakes: exploitative extraction, fragile supply chains, and incentives that reward shortcuts.

    Robot ground vehicles in Ukraine

    On the battlefield, Ukraine’s war continues to preview what modern conflict could look like when robots get pulled down from the sky and onto the ground. Reports describe Ukraine rapidly expanding armed uncrewed ground vehicles—UGVs—that can carry weapons or explosives and operate in environments where it’s increasingly dangerous for soldiers to move. Commanders emphasize that many systems are still only partly autonomous: machines may help navigate or spot targets, but humans make the final call on firing. The why here is grimly practical. Aerial drones have widened the “kill zone,” making traditional movement and resupply far riskier. Combined with manpower strain, that creates pressure to push more tasks onto machines. Russia is also fielding combat UGVs, raising the possibility of robot-on-robot encounters—an escalation not in drama, but in trajectory.
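    The MRI finding earlier in this episode comes down to a simple kind of feature: regional brain volumes that are unusually small for a patient's age. Here is a toy illustration of that idea, not the WPI model; the reference norms, volumes, and the -2.0 z-score cutoff are all invented for the example.

    ```python
    # Toy illustration of volume-based screening (not the WPI model).
    # Reference values are made up for the example.

    NORMS = {  # region -> (mean_volume_cm3, std_dev) for an age bracket
        "right_hippocampus": (3.2, 0.25),
        "left_hippocampus": (3.3, 0.25),
    }

    def atrophy_z_scores(volumes_cm3):
        """Z-score each measured region against its reference norm."""
        return {region: (volumes_cm3[region] - mean) / sd
                for region, (mean, sd) in NORMS.items()
                if region in volumes_cm3}

    def flag_regions(volumes_cm3, z_threshold=-2.0):
        """Regions more than |z_threshold| standard deviations below normal."""
        return [region for region, z in atrophy_z_scores(volumes_cm3).items()
                if z < z_threshold]
    ```

    A real model learns such patterns jointly across many regions rather than thresholding each one, but the intuition (atrophy relative to an age-matched norm) is the same.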

    10 min
  4. AI agent supply-chain hack & Critical minerals become security - Tech News (Mar 6, 2026)

    3D AGO


    Please support this podcast by checking out our sponsors: - Build Any Form, Without Code with Fillout. 50% extra signup credits - https://try.fillout.com/the_automated_daily - Invest Like the Pros with StockMVP - https://www.stock-mvp.com/?via=ron - KrispCall: Agentic Cloud Telephony - https://try.krispcall.com/tad Support The Automated Daily directly: Buy me a coffee: https://buymeacoffee.com/theautomateddaily Today's topics: AI agent supply-chain hack - A “Clinejection” supply-chain incident showed how prompt-injection plus CI automation can trigger credential theft, npm compromise, and downstream malware installs for developers. Critical minerals become security - The U.N. warned critical minerals like lithium, cobalt, and nickel are turning into strategic assets, with supply chains now framed as a national security and governance issue amid U.S.–China rivalry. US tightens AI chip exports - Draft U.S. rules could require Commerce Department approval for most advanced AI chip exports, expanding licensing and giving Washington more leverage over global AI infrastructure and diversion risks. China’s AI-first economic blueprint - China’s new five-year plan pushes “AI+” across the economy, emphasizing productivity, aging demographics, open-source ecosystems, and breakthroughs in frontier tech amid export-control pressure. OpenAI and Anthropic coding race - OpenAI’s latest model update and Anthropic’s upcoming Claude Code permissions changes underscore accelerating competition in coding agents, productivity workflows, and safer automation in developer tools. Broadcom bets big on AI - Broadcom projected massive growth in AI chip revenue, signaling sustained demand for custom accelerators and the infrastructure buildout powering hyperscale AI. Android app stores open up - Epic and Google reached a settlement that could broaden alternative payments and app-store competition on Android, clearing the way for Fortnite’s return to Google Play globally. 
EV charging claims leap forward - BYD showcased new battery and ultra-fast charging claims that, if they hold up at scale, could reduce range anxiety and narrow the convenience gap with gasoline refueling. Commercial space stations timeline shift - A Senate-driven NASA bill would push faster contracting for private space stations while also extending the ISS timeline, aiming to prevent a gap in U.S.-led human presence in low Earth orbit. Biotech AI: brains, blood, genomes - New results spanned engineered “super cleaner” brain cells for Alzheimer’s plaques, an AI-driven blood test for early liver fibrosis, and an open genome-scale AI model for biology and variant interpretation. Microsoft hints at hybrid Xbox - Microsoft teased “Project Helix,” hinting at an Xbox future that may run a broader PC game library, blurring the line between console simplicity and Windows flexibility. Episode Transcript AI agent supply-chain hack We’ll start with that developer supply-chain story, because it’s a sharp reminder that “AI in the workflow” can turn small mistakes into big incidents. A campaign dubbed “Clinejection” reportedly led to thousands of developers installing an extra, unwanted AI agent after a popular tool’s distribution pipeline was compromised. The twist: the attackers didn’t just exploit code—they exploited process. A prompt-injection payload in a GitHub issue title was fed into an automated AI triage flow, which then ran attacker-influenced commands. That chain eventually helped leak publishing credentials and push a tainted package into the ecosystem. The headline here isn’t one tool getting hit—it’s that natural-language inputs are now part of the attack surface when AI agents have access to CI systems, caches, and release tokens. Critical minerals become security Staying in the AI-and-security lane, Washington is reportedly weighing draft rules that would put the U.S. government in the loop for nearly every overseas shipment of advanced AI accelerator chips. 
The idea, as described, is a “secure exports” model where reviews scale with the size and sensitivity of the sale, and the biggest deployments could even pull in host governments. If this becomes policy, it’s a major expansion from the country-based controls we’ve gotten used to. The strategic logic is clear: keep visibility on where cutting-edge compute ends up, slow down diversion, and limit China’s ability to access AI capacity indirectly. The risk is also clear: if approvals become slow or unpredictable, global buyers may start designing around U.S. suppliers—reducing American influence in the very supply chain these rules aim to protect. China’s AI-first economic blueprint That export-control pressure is part of a larger U.S.–China technology standoff that keeps widening. China, for its part, just rolled out a new five-year policy blueprint alongside the opening of the National People’s Congress, and it reads like a statement of intent: AI woven into the broader economy, plus a push for breakthroughs in frontier areas like quantum and robotics. Officials are framing it as a productivity play—especially as demographic pressures mount—but there’s an unmistakable strategic angle too: reduce reliance on U.S. technology while building domestic capacity, including large-scale computing infrastructure and support for open-source communities. In other words, this isn’t just an “AI plan.” It’s an industrial plan where AI is the connective tissue. Critical minerals become security And the scramble for strategic inputs isn’t limited to chips. At the U.N. Security Council, the organization’s political chief warned that demand for critical minerals could surge dramatically over the next decade and beyond. Minerals used in everything from consumer electronics to defense systems are being treated less like commodities and more like geopolitical assets. The U.N. 
also spotlighted the uncomfortable reality behind supply security: if sourcing accelerates without strong governance, it can amplify conflict and corruption in resource-rich regions. The takeaway is that “secure supply chains” now includes not just who you buy from, but whether extraction and trade are stable—and ethically defensible—over time. Broadcom bets big on AI On the corporate side of the AI buildout, Broadcom is making one of the boldest calls yet. The company told investors it expects next year’s AI chip revenue to land significantly above the hundred-billion-dollar mark. That’s a striking signal of how quickly custom AI silicon and the surrounding infrastructure are scaling, especially among the largest tech players who want alternatives to one-size-fits-all hardware. Investors clearly liked what they heard. For everyone else, it’s another indicator that the AI boom is not just about flashy models—it’s about industrial capacity and long-term capex. OpenAI and Anthropic coding race Speaking of models, OpenAI’s latest update is being framed as a step forward for both coding and office-style workflows—less about novelty, more about practical output. Commentary around the release suggests improved performance for code generation and for spreadsheet-heavy tasks that resemble everyday business analysis. The meta-story is the same one we’ve been watching: model providers are competing to own the “work layer,” not just the chatbot. If your model can draft, compute, summarize, and ship usable artifacts, it becomes harder for downstream tools to stay differentiated. Anthropic, meanwhile, is preparing a research preview in Claude Code that reduces the constant permission pop-ups by allowing a more automatic mode—with added guardrails. It’s an attempt to thread the needle between productivity and safety: fewer interruptions, but without normalizing the kind of fully unrestrained execution that security teams hate. 
Coming right after stories like Clinejection, it’s hard not to see the timing as part of a broader shift: coding agents are moving from “cool demo” to “enterprise headache,” and governance features are quickly becoming product features. A related theme showed up in recent writing from developers and analysts: as AI coding tools speed up rewrites and migrations, the winners won’t just be the teams with the best prompts. They’ll be the ones with strong test suites, clear interfaces, and constraints that make it easy to verify what the agent produced. In plain terms, AI can generate a lot of code; your real advantage is being able to tell quickly whether it’s correct—and to guide it back on track when it isn’t. Android app stores open up Shifting from developer ecosystems to consumer platforms, Epic says it’s settling its antitrust fight with Google after policy changes that Epic argues will make Android meaningfully more open worldwide. The practical outcome is simple and headline-friendly: Fortnite is expected back on Google Play globally within weeks. The more important detail is structural: if alternative payments and rival app stores become easier for normal users to access, Android’s app economy could tilt toward real distribution competition—something developers have argued for years, but rarely experienced at scale. EV charging claims leap forward In transportation tech, BYD used a Shenzhen event to spotlight new battery and charging claims that aim at the two pain points people still cite about EVs: range and time spent charging. The company is talking about very long-range targets and charging sessions that look more like a short pit stop than a long break. As always, the caveat is that stage demos and real-world rollouts are different beasts—charging speed depends on infrastructure, conditions, and consistency over time. 
But if the broader industry can deliver fast charging reliably, that’s one of the clearest ways to expand EV
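The Clinejection story above hinges on one mechanic: attacker-controlled text (an issue title) flows into an agent that is allowed to run commands. A common mitigation is to allowlist what the agent may execute and reject anything with shell metacharacters. Here is a minimal, illustrative sketch of that idea — the command allowlist and function names are invented for this example, not taken from the incident writeup:

```python
import re
import shlex

# Hypothetical allowlist: an AI triage agent may *suggest* commands,
# but only pre-approved programs and subcommands are ever executed.
ALLOWED_COMMANDS = {"gh": {"issue", "label"}, "git": {"log", "diff"}}

def is_safe(command_line: str) -> bool:
    """Return True only if the agent-suggested command is on the allowlist
    and contains no shell metacharacters an attacker could smuggle in."""
    try:
        parts = shlex.split(command_line)
    except ValueError:  # unbalanced quotes etc.
        return False
    if not parts:
        return False
    prog, *args = parts
    subcommands = ALLOWED_COMMANDS.get(prog)
    return (subcommands is not None
            and bool(args)
            and args[0] in subcommands
            and not re.search(r"[;&|`$<>]", command_line))
```

The point is not this exact filter but the posture it encodes: treat every natural-language input that reaches an agent with CI credentials as untrusted, the same way you would treat user input reaching a SQL query.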

    10 min
  5. Android opens up Play Store & AI agents reshape developer tools - Tech News (Mar 5, 2026)

    4D AGO

    Android opens up Play Store & AI agents reshape developer tools - Tech News (Mar 5, 2026)

    Please support this podcast by checking out our sponsors: - KrispCall: Agentic Cloud Telephony - https://try.krispcall.com/tad - Prezi: Create AI presentations fast - https://try.prezi.com/automated_daily - Consensus: AI for Research. Get a free month - https://get.consensus.app/automated_daily Support The Automated Daily directly: Buy me a coffee: https://buymeacoffee.com/theautomateddaily Today's topics: Android opens up Play Store - Google is cutting Play Store fees and loosening rules around alternative billing and third‑party app stores, signaling a major shift in Android monetization under regulatory pressure. AI agents reshape developer tools - A wave of “agent-first” tooling ideas is emerging: machine-readable CLIs, runtime schema introspection, and cleaner documentation formats like WordPress.org Markdown for reliable AI automation. Pentagon pressure on AI labs - Vox reports the Pentagon blacklisted Anthropic after it refused surveillance and autonomous-weapon terms, while OpenAI moved onto classified networks—raising governance and contract-language debates. Chip supply risks and demand - South Korea warned Middle East tensions could disrupt helium and other chipmaking inputs, as AI-driven demand stays intense; Broadcom also projected massive growth in custom AI silicon revenue. Evo 2 genome foundation model - Evo 2, an open-source genome language model trained on a massive multi-species DNA dataset, aims to improve variant interpretation and genome annotation, with explicit biosafety trade-offs. EU weighs social media age ban - The European Commission convened an expert panel to consider an EU-wide minimum age for social media, taking cues from Australia and escalating platform compliance pressure. Nuclear and sensor tech milestones - TerraPower won a key US NRC construction permit for its Natrium reactor, while Duke engineers set a speed record for a new ultrathin light sensor—both notable for future energy and imaging. 
AI culture meets deepfakes - Two new documentaries spotlight AI’s cultural tension: optimism versus risk, and how deepfakes can impersonate public figures—fueling ongoing debates about consent and misinformation. Episode Transcript Android opens up Play Store First up: Android is getting a big policy reset. Google says it’s ending the old default of taking a third of Play Store transactions. The new approach lowers the standard cut on in‑app purchases, gives some developers a path to an even smaller share, and takes a lighter bite out of subscriptions. The bigger story isn’t just the percentage. Google is also loosening rules that previously boxed developers into Google’s billing. Apps will be allowed to offer alternative billing options inside the app, or steer users to complete purchases on the web. That’s a clear contrast with Apple’s more limited openings, and it’s another sign that regulators and courts are now shaping app economics as much as product teams are. Alongside that, Google is building a “Registered App Stores” program. Third‑party stores that meet safety and quality requirements should get a smoother install flow, even as basic sideloading remains possible — though Google is hinting it may put more friction on sideloading later in 2026. The rollout starts in the EEA, the UK, and the US by the end of June, then expands over time. And yes, Epic is already lining up for the moment: it says Fortnite will return broadly to Google’s store as these policies land. AI agents reshape developer tools Staying with the theme of power shifting toward users and developers, there’s a parallel conversation happening in AI tooling: command-line software built for humans is starting to look clumsy for AI agents. One developer, Justin Poehnelt, argues that “agent DX” is basically a different design target. Humans like forgiving interfaces and helpful hints. 
Agents need predictable behavior, clean machine-readable input and output, and security measures that assume the input might be dangerous — even when it’s coming from your own automation. His practical advice is to stop forcing agents through overly simplified flags and instead allow raw JSON payloads straight to APIs, so nothing gets lost in translation. He also points to something that sounds mundane but is crucial: tools that can describe themselves at runtime, so agents don’t rely on stale documentation shoved into prompts. And he emphasizes safety rails like dry runs and output sanitization, because agent workflows can turn small mistakes into fast, repeated mistakes. That “agent-first” idea isn’t just theory. Google’s Workspace developer community has released an open-source CLI called gws that aims to be a single gateway to common Workspace APIs. What’s notable is that it doesn’t hardcode a fixed set of commands; it can discover capabilities dynamically, and it’s built to return structured data rather than pretty terminal output. It also includes a mode designed to plug into agent ecosystems, so an AI assistant can call Workspace actions like tools. The catch: it’s explicitly not an official Google product, and it’s under active development — two details that matter a lot if you’re considering it for anything mission-critical. In the broader “make machines better readers” department, WordPress.org has added a clean Markdown output for most pages. You can request Markdown directly, and pages can advertise that alternative format. This is partly about AI — making official documentation easier for models and agents to ingest, so they’re less likely to learn from outdated blog posts or scraped copies. But it’s also just a quality-of-life upgrade for developers who want docs in terminals, editors, or automated pipelines. 
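Those "agent DX" principles — structured output, runtime self-description, raw JSON payloads, and dry runs — can be sketched in a few lines. This is an invented demo tool, not the gws CLI; every flag and field name here is illustrative:

```python
import argparse
import json

# Illustrative agent-friendly CLI skeleton: every response is JSON,
# capabilities are discoverable at runtime via --describe, and
# --dry-run previews effects instead of applying them.

def build_parser() -> argparse.ArgumentParser:
    p = argparse.ArgumentParser(prog="demo-tool")
    p.add_argument("--describe", action="store_true",
                   help="print a machine-readable capability schema and exit")
    p.add_argument("--dry-run", action="store_true",
                   help="report what would happen without doing it")
    p.add_argument("--payload", default="{}",
                   help="raw JSON payload passed straight through to the API")
    return p

def run(argv: list[str]) -> dict:
    args = build_parser().parse_args(argv)
    if args.describe:
        # Agents query this instead of relying on stale docs in a prompt.
        return {"version": "0.1",
                "flags": ["--describe", "--dry-run", "--payload"]}
    payload = json.loads(args.payload)  # raw JSON in; nothing lost in flags
    return {"ok": True, "dry_run": args.dry_run, "payload": payload}

print(json.dumps(run(["--dry-run", "--payload", '{"action": "rename"}'])))
```

The design choice worth noting: the tool never prints human-formatted tables, so an agent can parse every response the same way, and the `--describe` schema removes the need to bake usage docs into the prompt.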
Now for a cautionary tale on how AI collides with open-source licensing. A dispute has flared around the Python library chardet. Maintainers released a new version after using an AI coding tool to rewrite the codebase, then switched the license to MIT, framing it as a complete rewrite. The original author argues that may not be a clean break from the past, because exposure to the earlier code — by humans or by the AI process — can still make the result a derivative work under copyleft rules. It’s a messy, emerging question: if AI-assisted “rewrites” become a common path to relicensing, that could weaken copyleft protections, and it could also leave companies unsure what they’re actually allowed to ship. Pentagon pressure on AI labs Let’s zoom out to geopolitics and AI policy, where the temperature is rising. Vox reports the Pentagon blacklisted Anthropic after the company refused to relax two “red lines”: no mass domestic surveillance and no fully autonomous weapons. The piece describes the move as a form of pressure on a private AI vendor — and it landed at the same time OpenAI announced work to deploy models on the Pentagon’s classified network. Even if you treat this as normal government procurement drama, it raises a very current issue: contract wording. Terms like “lawful purposes” can sound reassuring, but critics argue they don’t necessarily protect against large-scale surveillance enabled by modern data markets. The story also notes a growing worker backlash, with calls for solidarity across AI companies. China, meanwhile, is making its own intentions explicit. A new five-year policy blueprint unveiled around the National People’s Congress puts artificial intelligence all over the page, pairing an “AI+” push with goals in areas like quantum computing and robotics. 
Officials are framing this as a productivity strategy for an aging population, and also as a resilience strategy as export controls and tech rivalry deepen. The plan also nods to hyperscale computing buildouts and support for open-source AI communities — an approach Beijing seems to see as both an accelerator and a differentiator. Chip supply risks and demand All of this runs on hardware, and hardware runs on supply chains. South Korea is warning that an escalating conflict between the US, Israel, and Iran could disrupt key materials used in semiconductor manufacturing — with helium singled out as a particularly sensitive input. Even if companies have short-term inventories, the warning is about how quickly a geopolitical shock can ripple into production planning. At the same time, demand signals are still flashing bright green. Broadcom reported strong results and, more strikingly, its CEO said he expects next year’s AI chip revenue to blow past a huge milestone. The significance here is that the AI boom isn’t just about buying more standard GPUs; it’s increasingly about custom silicon and the industrial capacity to deliver it consistently. Evo 2 genome foundation model In research news, one of the most ambitious open-source biology projects in a while just got louder. Researchers released Evo 2, a “genome language model” trained on an enormous dataset spanning bacteria, archaea, and eukaryotes. What makes this interesting is the direction of travel: genome modeling is moving beyond small, simpler organisms toward the long-range complexity of eukaryotic DNA. Early results suggest the model can score the impact of mutations in biologically meaningful ways, including tricky areas like splicing and noncoding regions, and the team has published the model and the dataset with some biosafety-minded exclusions. The near-term value is interpretation — faster annotation and better variant triage. 
Generation and design are still harder, and the researchers basically admit that: reading biology appears to be ahead of writing it, at least for complex organisms. EU weighs social media age ban Two more quick hits before we wrap. The European Commission is convening an expert group to explore whether the EU should set a bloc-wide minimum age for social media.

    9 min
  6. AI wargames and nuclear escalation & OpenAI military safeguards backlash - Tech News (Mar 4, 2026)

    5D AGO

    AI wargames and nuclear escalation & OpenAI military safeguards backlash - Tech News (Mar 4, 2026)

    Please support this podcast by checking out our sponsors: - Prezi: Create AI presentations fast - https://try.prezi.com/automated_daily - KrispCall: Agentic Cloud Telephony - https://try.krispcall.com/tad - Consensus: AI for Research. Get a free month - https://get.consensus.app/automated_daily Support The Automated Daily directly: Buy me a coffee: https://buymeacoffee.com/theautomateddaily Today's topics: AI wargames and nuclear escalation - A new wargame study claims frontier AI models escalated to nuclear use in most simulated crises, raising urgent AI safety and defense policy questions. OpenAI military safeguards backlash - OpenAI says it will tighten limits on classified government use after criticism, with explicit language against domestic surveillance and added contract guardrails. OpenAI eyes a GitHub rival - Reports say OpenAI is building a code-hosting platform after GitHub disruptions, signaling a strategic move in developer infrastructure and potential Microsoft tension. Chrome accelerates releases amid AI - Google will shift Chrome to a faster release cadence, a notable response as AI-first browsers and agentic automation put pressure on the traditional browser market. New quantum decryption claims emerge - A newly proposed JVG quantum decryption algorithm claims to cut the resources needed to break RSA and ECC, intensifying post-quantum cryptography planning and crypto-agility. Apple M5 goes chiplet-based - Apple’s M5 Pro and M5 Max move further toward modular, multi-die design, a signal that Apple Silicon scaling and future ‘Ultra’ strategies may be changing. Starlink aims for phone broadband - SpaceX says Starlink is growing rapidly and is aiming for direct-to-device service that evolves from basic messaging into more mainstream mobile broadband coverage. 
China’s new tech-industrial blueprint - China’s ‘Two Sessions’ are expected to spotlight a tech-driven growth plan spanning chips, robotics, quantum, 6G, and embodied AI—reshaping global competition and supply chains. Neuron biocomputer plays Doom - Cortical Labs showed lab-grown neurons interfacing with silicon to learn a basic version of Doom, highlighting the frontier of hybrid biocomputing and adaptive learning. Drone boats threaten key shipping lanes - An attack using an uncrewed explosive drone boat in the Gulf of Oman underscores how low-cost autonomous systems can disrupt global energy shipping chokepoints. Moon helium-3 mining partnership - Astrolab and Interlune are teaming up on lunar surface equipment to prospect and potentially extract helium-3, reflecting rising commercial interest in Moon infrastructure. Episode Transcript AI wargames and nuclear escalation We’ll start with AI and national security, because it’s been a busy—and uncomfortable—news cycle. Researchers at King’s College London ran simulated geopolitical crisis wargames and report that top AI models from major labs chose nuclear escalation in most of the scenarios. The authors argue the models don’t share a human “nuclear taboo,” and instead treat nukes as just another tool on the menu—especially under time pressure. The paper isn’t peer reviewed, and real-world command and control looks nothing like a lab simulation, but it’s a sharp reminder of the governance problem: even if nobody plans to hand an AI the keys, these systems are already being pulled into analysis, planning, and decision support. OpenAI military safeguards backlash That connects to another OpenAI headline: the company says it will amend a U.S. government agreement tied to classified military operations after criticism that the deal looked vague and overly permissive. OpenAI’s Sam Altman says the updated language will include explicit limits aimed at preventing intentional domestic surveillance of U.S. 
persons, and it will require additional modifications before certain intelligence agencies can use the system. The interesting part here is less the legal wording and more the market signal: reports say the backlash sparked a spike in consumer app uninstalls, while rival apps gained ground in rankings. It’s a rare, visible example of public sentiment quickly translating into product behavior—and a warning that “trust” is becoming a competitive feature, not just a compliance checkbox. OpenAI eyes a GitHub rival Sticking with OpenAI for a moment: multiple outlets report the company is exploring a code-hosting platform that could compete with GitHub. The motivation is very practical—GitHub outages reportedly disrupted OpenAI’s own engineering work—so the company is looking at owning more of its development pipeline. If this takes shape, it’s notable for two reasons. First, it shows how essential code hosting has become for AI-heavy organizations where downtime is expensive. Second, it would place OpenAI in more direct competition with Microsoft-owned infrastructure, which adds a layer of intrigue given Microsoft’s deep investment in OpenAI. On the product side of AI, OpenAI also rolled out GPT-5.3 Instant, positioning it as more direct and less burdened by constant disclaimers. The company is essentially trying to thread a needle: keep safety boundaries, but reduce the “over-cautious assistant” behavior that frustrates everyday users. This is part of a broader trend: the leading labs are now tuning for feel—tone, helpfulness, and social friction—because those factors increasingly decide whether a tool becomes habitual or gets abandoned. Chrome accelerates releases amid AI Meanwhile, Google is speeding up Chrome’s release cadence starting later this year, moving to a faster rhythm for stable updates. 
Officially, it’s about keeping pace with how quickly the web platform evolves and delivering improvements to users and developers sooner. Unofficially, the timing makes sense. AI-first browsers from newer players are trying to redefine what a browser does—less “tabs and bookmarks,” more “agents that do tasks.” Chrome doesn’t need to panic, but it does need to move quickly if browsing becomes more automated and more competitive than it’s been in a decade. New quantum decryption claims emerge Now for security—and a claim that’s getting a lot of attention. SecurityWeek highlighted a newly announced quantum decryption approach called the “JVG” algorithm. Its proponents argue it could make breaking common public-key cryptography far more feasible than previously expected, potentially needing dramatically fewer quantum resources than Shor’s algorithm. Right now, it’s a claim, not a consensus. It hasn’t been broadly validated, and crypto history is full of big promises that didn’t survive scrutiny. But it still matters because it adds pressure to a trend that’s already overdue: moving to post-quantum cryptography and building “crypto-agility,” so organizations can swap algorithms without rebuilding everything from scratch. Apple M5 goes chiplet-based Apple also made waves with updates to the MacBook Pro lineup, centered on the new M5 Pro and M5 Max. What’s interesting isn’t just faster performance—it’s the direction. Apple is leaning further into a modular, multi-die approach, which signals a more flexible way to scale up chips across product tiers. This also raises a question for the roadmap watchers: if the higher-end chips are already composed of multiple pieces, what does that mean for the next top-of-the-stack designs that used to be built by effectively doubling up? Apple didn’t answer that directly, but the architecture hints at a longer-term reshuffle in how its most powerful Macs get made. 
Apple also refreshed its external displays, including a higher-end Studio Display option meant to fill the gap left by the discontinued Pro Display XDR. The takeaway is clear: Apple wants the pro Mac “stack”—laptops, silicon, and displays—to feel like a coherent ecosystem again. Starlink aims for phone broadband Let’s head to space and connectivity. At Mobile World Congress 2026, SpaceX executives said Starlink expects to surpass 25 million active users by the end of 2026. More eye-catching: the company says its direct-to-cell service has already crossed 10 million subscribers, and it’s aiming for a next-generation system that goes beyond emergency texting toward something closer to mainstream mobile data—without requiring modified phones. If Starlink can deliver even a slice of that vision reliably, it changes the conversation for carriers and governments. Satellite becomes less of a niche backup and more of a coverage layer—useful for rural gaps, disaster response, and network congestion when terrestrial infrastructure is stressed. China’s new tech-industrial blueprint China’s big annual political meetings—the “Two Sessions”—are underway, with attention on the next five-year blueprint for the economy and industry. The message expected from Beijing is a shift from building domestic tech capability to deploying it at scale: more advanced manufacturing, more automation, and more focus on strategic sectors like chips, robotics, quantum, and next-generation wireless. This matters globally because China isn’t just trying to be self-sufficient—it’s trying to export the full package: hardware, infrastructure, and increasingly, AI-driven systems. That can reshape supply chains, pricing pressure in global markets, and geopolitical debates about surveillance and standards. And as a small preview of that manufacturing push, Xiaomi says humanoid robots have begun trial operations in its car factory. 
It’s early testing, but it’s another sign that “humanoid robotics” is moving from flashy demos toward repetitive industrial tasks where reliability and cost matter more than charisma. Neuron biocomputer plays Doom Two research stories caught my eye today—both at the edge of what we think computers are. First, an Australian startup, Cor
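The "crypto-agility" idea from the quantum story above is mostly an engineering pattern: callers reference an algorithm label rather than a concrete primitive, so swapping in a post-quantum scheme later is a registry change, not a code rewrite. A minimal sketch, with invented names and a classical HMAC standing in for whatever primitives an organization actually uses:

```python
import hashlib
import hmac

# Illustrative crypto-agility registry (names are hypothetical). Code
# asks for "mac-v1"; migrating to a post-quantum construction means
# adding a "mac-v2" entry and changing the default, nothing else.
REGISTRY = {
    "mac-v1": lambda key, msg: hmac.new(key, msg, hashlib.sha256).hexdigest(),
}
DEFAULT_ALG = "mac-v1"

def tag(key: bytes, msg: bytes, alg: str = DEFAULT_ALG) -> str:
    """Produce 'alg:digest' so the verifier knows which algorithm to use."""
    return f"{alg}:{REGISTRY[alg](key, msg)}"

def verify(key: bytes, msg: bytes, tagged: str) -> bool:
    alg, digest = tagged.split(":", 1)
    # constant-time comparison to avoid timing side channels
    return hmac.compare_digest(REGISTRY[alg](key, msg), digest)
```

Labeling every artifact with its algorithm is the piece organizations most often skip, and it is what makes a later migration verifiable rather than a flag day.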

    10 min
  7. Neurons learn to play Doom & Australia cracks down on AI - Tech News (Mar 3, 2026)

    6D AGO

    Neurons learn to play Doom & Australia cracks down on AI - Tech News (Mar 3, 2026)

    Please support this podcast by checking out our sponsors: - Invest Like the Pros with StockMVP - https://www.stock-mvp.com/?via=ron - Build Any Form, Without Code with Fillout. 50% extra signup credits - https://try.fillout.com/the_automated_daily - Effortless AI design for presentations, websites, and more with Gamma - https://try.gamma.app/tad Support The Automated Daily directly: Buy me a coffee: https://buymeacoffee.com/theautomateddaily Today's topics: Neurons learn to play Doom - Cortical Labs showed a hybrid biocomputer using lab-grown human neurons that can learn basic Doom gameplay, hinting at future neuron-chip computing and adaptive control. Australia cracks down on AI - Australia’s eSafety regulator is pushing strict under-18 protections for AI services, with age assurance and filtering demands that could also pressure app stores and search ‘gatekeepers.’ OpenAI, Anthropic, and Pentagon power - OpenAI is revising its classified-government AI deal to limit domestic surveillance, while the U.S. moves to sideline Anthropic as a ‘supply-chain risk,’ spotlighting AI ethics vs state power. OpenAI mega-funding and cloud shift - OpenAI’s enormous new funding round, backed by Amazon and Nvidia, signals escalating AI infrastructure spending and a tighter link between frontier models and cloud capacity. AWS outages hit by drone strikes - AWS reported drone-related physical damage to Gulf-region data centers, a reminder that geopolitics can directly disrupt cloud reliability for EC2, S3, and more. Artemis reshuffle and Starship plans - NASA delayed the next crewed Moon landing to Artemis IV in 2028, while SpaceX is holding off on risky Starship tower catches until it nails repeatable ocean landings. Stem-cell patch for spina bifida - UC Davis Health reported early Phase 1 safety results for fetal spina bifida surgery enhanced with a placenta-derived stem-cell patch, aiming for better long-term outcomes. 
AI chatbots for health questions - Clinicians say AI health chatbots can help interpret records and prep for visits, but warn about real-world error risk and privacy gaps because most AI tools aren’t covered by HIPAA. New clue in Hubble tension - A new ‘stochastic siren’ idea uses the gravitational-wave background as an independent cross-check on the Hubble constant, potentially clarifying why cosmic expansion estimates disagree. Subscription economy meets its limits - A new essay argues subscriptions spread because Wall Street rewarded recurring revenue, but churn is rising as consumers audit spending, content feels exhausted, and AI lowers costs. Open-weight LLMs converge on efficiency - Analysis of recent open-weight models says the race is increasingly about efficiency and long-context practicality, with post-training and deployment constraints becoming major differentiators. Laser air defense enters combat - Israel confirmed the first operational combat use of its Iron Beam laser defense system, raising fresh questions about cost-per-intercept, scaling, and limits like weather performance. Episode Transcript Neurons learn to play Doom First up, one of the strangest demos you’ll hear this week: researchers at Australia’s Cortical Labs say their neuron-powered “biocomputer” can learn to play the classic shooter Doom. The performance isn’t impressive in gamer terms—it still loses plenty—but that’s not the point. The headline is that living neurons, wired into a chip, can adapt in real time to a changing task. If this line of work progresses, it could eventually influence how we think about training systems for control problems, like robotics, where quick adaptation matters more than perfect accuracy. Australia cracks down on AI Staying with AI, Australia is about to make life uncomfortable for a lot of chatbot and AI app makers. 
The country’s internet safety regulator says that from March 9, services operating in Australia must stop minors from accessing pornography, extreme violence, self-harm, and eating-disorder content. And the regulator isn’t only talking to the chatbot companies—it’s signaling it may also lean on “gatekeepers” like app stores and search engines to cut off access to non-compliant tools. Reuters found many popular AI products haven’t clearly shown age-check systems or robust filtering plans, which makes this a major test of whether AI platforms can meet real-world safety rules at scale, not just publish policies. OpenAI, Anthropic, and Pentagon power In the U.S., the fight over AI and national security is getting sharper—and messier. OpenAI says it will revise its recent agreement for classified government work after criticism that the deal looked rushed and too open-ended. The company says it will add explicit limits aimed at preventing intentional domestic surveillance of U.S. persons, and it’ll require additional contract changes before certain intelligence uses can happen. The backlash is already showing up in the market, with reports of user churn in consumer apps and rivals benefiting in the rankings. At the same time, a separate dispute with Anthropic is turning into a power struggle: the U.S. government is reportedly ending federal use of Anthropic’s models and pushing to label the company a supply-chain risk. Anthropic has said it won’t relax safeguards around mass surveillance and fully autonomous weapons. The bigger story here isn’t just one contract—it’s the precedent. If frontier AI is treated like critical infrastructure, governments may demand compliance as a baseline, while companies try to draw red lines that look, to officials, like private policy-making. OpenAI mega-funding and cloud shift And money is pouring onto that same chessboard. 
OpenAI is reportedly raising an enormous new funding round that would put it in a different league even by big-tech standards. Amazon, Nvidia, and SoftBank are among the names attached, with the pitch centered on one thing: capacity. More users, more enterprise deployments, more compute, and more pressure to lock in supply. What’s notable is how the partnerships are being carved up—one cloud for some parts of OpenAI’s world, another cloud for others—suggesting the future of “AI platforms” may be as much about infrastructure deal-making as model breakthroughs. AWS outages hit by drone strikes While we’re on infrastructure, Amazon Web Services had a harsh reminder that the cloud is physical. AWS says two data centers in the United Arab Emirates were struck by drones, and a Bahrain facility was taken offline after nearby damage, causing service errors and degraded availability in the region. It’s an unusually direct example of a geopolitical event translating into outages for everyday cloud building blocks—compute, storage, databases—the stuff that businesses assume will always be there. The takeaway isn’t that cloud is fragile everywhere, but that regional dependencies can become business risks overnight when conflict gets close to critical facilities. Artemis reshuffle and Starship plans Let’s shift to space. NASA has reshuffled its Artemis Moon timeline again. The agency now says the first crewed lunar landing will move to Artemis IV, targeted for 2028. Artemis III, once pegged as the landing mission, is being reframed as more of a systems test in low Earth orbit—practicing the kinds of operations needed for lunar missions without actually going to the Moon. NASA is also talking about increasing launch cadence, which is an implicit admission that “one giant mission every few years” is a recipe for delays, budget stress, and skills atrophy. 
The change also raises new questions for international partners because some previously central pieces—like the Lunar Gateway—weren’t clearly emphasized in the updated plan.

On the commercial side, Elon Musk says SpaceX won’t try the dramatic “tower catch” of Starship’s upper stage until it can deliver two perfect soft landings in the ocean. That’s a risk-management message: prove the vehicle can reliably come back intact before you attempt to catch it near expensive ground infrastructure. SpaceX is still aiming for a Starship V3 flight in March 2026, and the strategic significance is straightforward—if full, routine reuse works, the economics and cadence of heavy lift change fast. But the company is signaling it’s not going to gamble on spectacle if it raises the odds of a hard failure over land.

Stem-cell patch for spina bifida

Now to medicine, where one small early trial delivered a meaningful milestone. UC Davis Health researchers reported Phase 1 results combining standard fetal surgery for spina bifida with an added patch made from living, placenta-derived stem cells placed over the exposed spinal cord. In the first six pregnancies treated, they reported no safety issues tied to the stem cells, and after birth, imaging showed encouraging changes that often correlate with better outcomes. It’s early—this is still about safety, not definitive benefit—but sign-off from the FDA and an independent monitoring board to let the study continue is a key step for a condition where even today’s best surgical options can still leave kids with serious long-term challenges.

AI chatbots for health questions

AI is also showing up in health in a very different way: more people are leaning on chatbots for medical questions, and companies are leaning in with health-focused versions. Doctors and researchers say these tools can help translate lab results, summarize records, and help patients ask better questions at appointments.
But they’re also blunt about the limits: if symptoms look urgent—things like chest pain or severe shortness of breath—don’t troubleshoot with a chatbot. Another big red flag is privacy. Much of what you share with an AI service isn’t protected the way it would be inside many healthcare systems, which means convenience can quietly turn into long

    10 min
  8. Claude dethrones ChatGPT in US & Pentagon deals split AI vendors - Tech News (Mar 2, 2026)

    MAR 2


Please support this podcast by checking out our sponsors:
- Discover the Future of AI Audio with ElevenLabs - https://try.elevenlabs.io/tad
- Build Any Form, Without Code with Fillout. 50% extra signup credits - https://try.fillout.com/the_automated_daily
- KrispCall: Agentic Cloud Telephony - https://try.krispcall.com/tad

Support The Automated Daily directly: Buy me a coffee: https://buymeacoffee.com/theautomateddaily

Today's topics:
- Claude dethrones ChatGPT in US - Anthropic’s Claude surged to the #1 free app in the U.S. App Store, overtaking ChatGPT as consumers react to AI ethics and defense headlines.
- Pentagon deals split AI vendors - Pentagon negotiations with Anthropic reportedly collapsed over language on lawful surveillance, while OpenAI moved ahead with a classified-network deal that reiterates bans on mass surveillance and autonomous lethal weapons.
- OpenAI raises $110B mega-round - OpenAI is pursuing a massive $110B funding round at an $840B fully diluted valuation, with Amazon, Nvidia, and SoftBank committing tens of billions to scale compute and products.
- Coding enters the agent era - Cursor’s Michael Truell says AI-assisted development is entering a third era: autonomous cloud agents producing reviewable artifacts, with ~35% of internal merged PRs created by agents.
- Build APIs for AI clients - API builders are being urged to design “AI-first” interfaces: programmatic docs endpoints like /api/help, non-destructive writes with candidate review flows, and strict scrutiny of risky fallback code.
- Cloudflare launches Agents SDK platform - Cloudflare introduced Cloudflare Agents (npm i agents), pitching a full stack for agentic apps on Workers, Durable Objects, Workflows, and AI Gateway with cost controls like CPU-time billing and WebSocket hibernation.
- Vietnam enacts comprehensive AI law - Vietnam’s AI law took effect March 1, requiring labeling of AI-generated content, disclosure when users interact with AI, and human oversight—echoing EU AI Act-style risk controls.
- Australia threatens AI age-gate blocks - Australia’s eSafety regulator signaled it may target app stores and search engines as “gatekeepers” to block AI services that don’t implement age assurance ahead of March 9 restrictions.
- AI infrastructure boom strains power - A broader ‘capex crunch’ is accelerating: hyperscalers and AI labs are pouring hundreds of billions into data centers, GPUs, and power, raising grid, construction, and environmental concerns.
- Google bets on iron-air batteries - Google announced a Minnesota data center tied to 1.9GW of renewables and a 30GWh long-duration Form Energy iron-air battery system, aiming to ride through multi-day renewable lulls.
- Nvidia invests in silicon photonics - Nvidia will invest $4B split between Lumentum and Coherent to secure optical networking and laser component capacity, targeting ‘gigawatt-scale AI factories’ enabled by silicon photonics.
- Lasers, drones, and future warfare - Israel says it used its Iron Beam laser air defense operationally for the first time, while the U.S. reported first combat use of one-way attack drones—signs that directed energy and cheap loitering munitions are reshaping air defense.
- Humanoid home robots still distant - Robotics researchers warn general-purpose humanoid home robots aren’t close in 2026, citing fragile hardware, messy home environments, and—most of all—training-data scarcity compared with self-driving cars.
- SpaceX weighs a confidential IPO - SpaceX is reportedly considering a confidential IPO filing as soon as March, potentially aiming for a June listing that could become the largest IPO ever by funds raised and valuation.
- Nvidia-led push for AI-native 6G - At MWC, Nvidia and major telecom partners backed open, secure, AI-native 6G platforms, positioning AI-RAN and software-defined networks as the backbone for ‘physical AI’ at scale.

Episode Transcript

Claude dethrones ChatGPT in US

Let’s start with the consumer-facing ripple effect. Anthropic’s Claude has climbed to the top spot for free apps in Apple’s U.S. App Store, pushing ChatGPT to number two. Reporting ties the surge to backlash after Sam Altman publicly discussed OpenAI working with the U.S. Department of Defense on deployments inside classified networks. Anthropic’s CEO, Dario Amodei, has been vocal about drawing hard lines—specifically against mass domestic surveillance and fully autonomous weapons. Whether you agree with Anthropic or not, the striking part is that everyday users appear to be voting with downloads. Anthropic says free users are up sharply since January, with daily signups setting records, and paid subscribers more than doubling this year.

Pentagon deals split AI vendors

Underneath that popularity swing is a much bigger policy and procurement story. Talks between the Pentagon and Anthropic reportedly came down to last-minute contract language, especially around what “lawful surveillance” could mean in practice. Negotiations then collapsed, and Defense Secretary Pete Hegseth publicly labeled Anthropic a security risk—an extraordinary move for a major U.S. tech company. Within hours, OpenAI said it reached a deal to supply AI to classified military networks, and Altman emphasized that OpenAI’s contract still prohibits mass surveillance and autonomous lethal weapons—calling them core safety principles that the Pentagon accepted. One detail worth watching: reports also describe internal industry blowback, with employees across AI companies urging leaders not to be played against each other by shifting government demands.
If this becomes the new normal—public pressure campaigns plus contract brinkmanship—it could reshape how AI firms write policies, and how they prove compliance.

OpenAI raises $110B mega-round

Now to the money fueling all of it. OpenAI is also raising a new funding round targeting $110 billion, valuing the company at roughly $730 billion pre-money and about $840 billion fully diluted. The headline investors include Amazon, Nvidia, and SoftBank. Amazon alone is slated to put in up to $50 billion, and OpenAI says it will use two gigawatts of compute capacity powered by Amazon’s Trainium chips. There’s also an important structural point: AWS becomes the exclusive third-party cloud provider for OpenAI Frontier—its enterprise platform for building and managing AI agents—while Microsoft remains the exclusive cloud provider for OpenAI APIs and continues hosting first-party products on Azure. In other words, OpenAI is slicing its cloud relationships by product line, not picking one winner for everything.

Coding enters the agent era

This all feeds into what developers are actually doing day to day—because the development workflow is changing fast. Cursor’s Michael Truell argues we’re entering a “third era” of AI-assisted software building. First came autocomplete that excelled at repetitive code. Then came synchronous agents where you steer the model step by step. The third era, he says, looks more like building a software factory: fleets of autonomous agents running in the cloud, iterating for hours, running tests, and returning artifacts you can review—logs, recordings, previews—not just a diff. Cursor claims around 35% of its internally merged pull requests are now created by agents working autonomously on separate cloud machines. If that number holds up as the tooling spreads, it’s a genuine shift: engineers spending less time typing code, and more time framing tasks, setting constraints, and reviewing outcomes.
Build APIs for AI clients

And if you’re building systems for agents rather than just humans, the plumbing matters—especially APIs. Nate Meyvis shared an “AI-first” set of notes that boils down to something refreshingly practical: if your product needs an API, build the API, because AI tools are unusually good at accelerating that work. His recommendations include exposing documentation programmatically—think an endpoint like /api/help—so AI clients can discover capabilities without you stuffing long docs into a context window. He also argues for safer, non-destructive designs for AI-driven actions. For example, let write operations create “candidates” that require review before anything becomes official. And he flags a subtle risk: AI-generated implementations are often too eager to add fallbacks. Those can hide bugs or accidentally open security holes, so the advice is to review carefully—and even use a second AI pass specifically to hunt for dangerous fallback behavior.

Cloudflare launches Agents SDK platform

On the platform side, Cloudflare is jumping into this agentic moment with “Cloudflare Agents,” an SDK and toolkit for building agentic apps on Cloudflare’s stack. The pitch is a full workflow: collect input via chat, email, or voice; reason with models either on Workers AI or through external providers via AI Gateway; manage state with Durable Objects and orchestration via Workflows; and then take actions through tools like browser rendering, vector search, or databases. Cloudflare’s cost angle is notable: Workers charges for CPU time rather than wall-clock time, which matters when agents spend a lot of time waiting on APIs, LLM calls, or humans. It’s an attempt to make long-running, tool-using agents feel less like a runaway meter.

Vietnam enacts comprehensive AI law

Regulation is also tightening, and today’s date matters here.
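A quick aside on the Cloudflare cost point above: the gap between CPU time and wall-clock time for an agent that mostly waits on I/O is easy to demonstrate in any language. A small Python sketch (the sleep stands in for waiting on an API or LLM call; this illustrates the billing distinction generically, not Cloudflare's actual meter):

```python
# Demonstrates why CPU-time billing is cheap for I/O-bound agents:
# waiting dominates wall-clock time but consumes almost no CPU time.
import time

start_wall = time.perf_counter()   # wall-clock timer
start_cpu = time.process_time()    # CPU-time timer for this process

time.sleep(0.5)                    # stands in for an API / LLM / human wait
_ = sum(range(100_000))            # a little actual computation

wall = time.perf_counter() - start_wall
cpu = time.process_time() - start_cpu

print(f"wall-clock: {wall:.2f}s, cpu: {cpu:.4f}s")
```

Under CPU-time billing, the half second of sleeping costs roughly nothing; under wall-clock billing, the meter runs the whole time.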
Vietnam’s new AI law took effect yesterday, March 1st, making it the first Southeast Asian country with a comprehensive AI framework. The law focuses heavily on generative AI risk, requires human oversight, and mandates labeling for AI-generated content—like deepfakes—when it’s not clearly distinguishable from real media. It also requires services to tell users when they’re interacting with an AI system rather than a human. Vietnam is also pairing governance with industrial policy: plans include a

    11 min
