Unsupervised AI News

Limited Edition Jonathan

AI-generated AI news (yes, really). I got tired of wading through apocalyptic AI headlines to find the actual innovations, so I made this: daily episodes covering the breakthroughs, new tools, and capabilities that represent real progress—not theoretical threats—because someone needs to talk about what's working instead of what might kill us all. It's the AI news I want to hear, and if you're exhausted by doom narratives too, you might like it here. Short episodes, big developments, zero patience for doom narratives. Tech stack: n8n, Claude Sonnet 4, Gemini 2.5 Flash, Nano Banana, ElevenLabs, WordPress, a pile of Python, and Seriously Simple Podcasting.

  1. 8 hours ago

    Microsoft Brings ‘Vibe Working’ to Office Apps (Because Apparently We’re All Just Vibing Now)

    Look, I know another Microsoft Office announcement sounds about as thrilling as watching Excel formulas multiply (which, frankly, is what this is partly about). But Microsoft just launched something called “vibe working” for Excel and Word, and I’m genuinely impressed by what they’re pulling off here.

    The company is rolling out Agent Mode in Excel and Word today – think of it as Copilot’s older, more capable sibling that actually knows what the hell it’s doing. Instead of those helpful-but-limited suggestions we’re used to, Agent Mode can generate complex spreadsheets and full documents from simple prompts. We’re talking “board-ready presentations” and work Microsoft’s Sumit Chauhan describes as the kind “that, quite frankly, a first-year consultant would do, delivered in minutes.”

    Here’s what’s wild about Agent Mode: it breaks down complex tasks into visible, step-by-step processes using OpenAI’s GPT-5 model. You can literally watch it work through problems in real time, like an automated macro that explains itself. For Excel (where data integrity actually matters), Microsoft has built in tight validation loops and claims a 57.2% accuracy rate on SpreadsheetBench – still behind human accuracy of 71.3%, but ahead of ChatGPT and Claude’s file-handling attempts. (There’s a rough sketch of that generate-then-validate pattern at the end of this item.)

    The Word version goes beyond the usual “make this sound better” rewrites. Agent Mode turns document creation into what Microsoft calls “vibe writing” – an interactive conversation where Copilot drafts content, suggests refinements, and clarifies what you need as you go. Think collaborative writing, but your writing partner has read the entire internet and never gets tired of your terrible first drafts.

    But here’s the really interesting move: Microsoft is also launching Office Agent in Copilot chat, powered by Anthropic models (not OpenAI). This thing can create full PowerPoint presentations and Word documents from chat prompts, complete with web research and live slide previews. It’s Microsoft’s answer to the flood of AI document tools trying to eat their lunch. The Anthropic integration is telling – Microsoft is hedging its OpenAI bets while exploring what different model families bring to the table. “We are committed to OpenAI, but we are starting to explore with the model family to understand the strength that different models bring,” Chauhan says. Smart move, considering Anthropic’s models are already powering GitHub Copilot and researcher tools.

    Agent Mode launches today for Microsoft 365 Copilot customers and Personal/Family subscribers (web versions first, desktop coming soon). Office Agent is U.S.-only for now. And yes, this means the office productivity wars just got a lot more interesting.

    Read more from Tom Warren at The Verge

    Want more than just the daily AI chaos roundup? I write deeper dives and hot takes on my Substack (because apparently I have Thoughts about where this is all heading): https://substack.com/@limitededitionjonathan
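    The generate-check-retry pattern described above is easy to picture in code. Here's a minimal, hypothetical Python sketch of an agent-style validation loop for spreadsheet generation — the function names and the call_model helper are illustrative assumptions, not Microsoft's actual Agent Mode API:

```python
# Hypothetical sketch of an agent loop with a validation pass, loosely
# mirroring the "plan -> act -> validate -> retry" pattern described for
# Agent Mode. None of these names are real Microsoft APIs.

def call_model(prompt: str) -> str:
    """Placeholder for a call to an LLM (wire up any chat-completion client here)."""
    raise NotImplementedError("plug in your own model client")

def build_spreadsheet(task: str, max_attempts: int = 3) -> str:
    # Step 1: make the plan visible, like Agent Mode's step-by-step breakdown.
    plan = call_model(f"Break this spreadsheet task into numbered steps:\n{task}")
    draft = call_model(f"Follow these steps and output CSV only:\n{plan}")

    for _ in range(max_attempts):
        # Step 2: validation pass — audit the draft before showing it to anyone.
        report = call_model(
            "Check this CSV for formula errors, missing columns, and "
            f"inconsistent totals. Reply 'OK' or list the problems:\n{draft}"
        )
        if report.strip().upper().startswith("OK"):
            return draft
        # Step 3: feed the problems back in and regenerate.
        draft = call_model(
            f"Fix these problems and output corrected CSV:\n{report}\n\n{draft}"
        )

    return draft  # best effort after max_attempts
```

    The point is the structure, not the plumbing: each pass is visible, and the validator gets the last word before anything reaches the user.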

  2. 12 hours ago

    Huawei Plans to Double AI Chip Production as Nvidia Stumbles in China

    Look, I know another chip story sounds like more tech industry inside baseball, but this one’s actually wild when you dig into what’s happening. Huawei Technologies is preparing to massively ramp up production of its most advanced AI chips over the next year—we’re talking about doubling output of their flagship Ascend 910B processors.

    And the timing? That’s the interesting part. While Nvidia is getting tangled up in geopolitical headwinds (export restrictions, compliance issues, the usual US-China tech drama), Huawei is essentially saying “hold our beer” and going full throttle on domestic AI silicon. Thing is, this isn’t just about making more chips—it’s about winning customers in what Bloomberg calls “the world’s biggest semiconductor market.”

    Here’s what makes this fascinating from a technical standpoint: The Ascend 910B isn’t some budget knockoff. We’re talking about chips that can genuinely compete with high-end GPUs for AI training workloads. Huawei has been quietly building this capability for years (remember, they’ve been dealing with US restrictions since 2019), and now they’re ready to scale production significantly.

    The broader context here is that China’s AI companies have been desperate for alternatives to Nvidia’s H100s and A100s. With export controls making it increasingly difficult to get the latest US chips, there’s been this massive pent-up demand for domestic alternatives. Huawei is basically positioned to fill that void—and they know it.

    What’s particularly smart about Huawei’s approach is the timing. As Nvidia navigates compliance requirements and export restrictions that slow down their China business, Huawei gets to swoop in with locally-produced chips that Chinese companies can actually buy without worrying about geopolitical complications. It’s like being the only restaurant open when everyone else is dealing with supply chain issues.

    The ripple effects could be huge. If Huawei can actually deliver on this production ramp (and that’s a big if—chip manufacturing is notoriously difficult to scale), we’re looking at a genuine alternative ecosystem for AI development in China. That means Chinese AI companies won’t be as dependent on US technology, which fundamentally changes the competitive landscape. Of course, there are still questions about performance parity and ecosystem support (CUDA is hard to replace), but the mere fact that a viable alternative exists puts pressure on everyone. Competition drives innovation, and having two major players fighting for the world’s largest AI chip market? That’s going to accelerate development on both sides.

    This is one of those stories where the technical development (doubling chip production capacity) intersects with geopolitics in ways that could reshape how AI infrastructure gets built globally. Worth watching closely.

    Read more from Yuan Gao at Bloomberg

    Want more than just the daily AI chaos roundup? I write deeper dives and hot takes on my Substack (because apparently I have Thoughts about where this is all heading): https://substack.com/@limitededitionjonathan

  3. 20 hours ago

    Apple’s Secret AI Weapon: The Internal Chatbot That Should Be Public

    Look, I know another AI chatbot announcement sounds about as exciting as watching paint dry (we’ve had roughly 847 of them this year), but this one’s different. According to Bloomberg’s Mark Gurman, Apple has been quietly testing its next-generation Siri through an internal ChatGPT-style chatbot called “Veritas” — and honestly, it sounds kind of amazing.

    Here’s what makes this interesting: Instead of the usual corporate approach of endless closed-door testing, Apple built what’s essentially their own version of ChatGPT for employees to actually use. We’re talking back-and-forth conversations, the ability to dig deeper into topics, and crucially — the kind of functionality that Siri desperately needs. Employees can search through personal data and perform in-app actions like photo editing, which sounds suspiciously like what we were promised when Apple Intelligence was first announced.

    The thing is, Apple’s AI struggles aren’t exactly a secret at this point. The company has delayed the next-gen Siri multiple times, and Apple Intelligence launched to what can generously be called “tepid” reception. Meanwhile, every other tech company is shipping AI assistants that can actually hold a conversation without making you want to throw your phone out the window.

    But here’s where it gets frustrating: Gurman reports that Apple has no plans to release Veritas to consumers. Instead, they’re likely going to lean on Google’s Gemini for AI-powered search. Which feels backwards, right? You’ve built this internal tool that apparently works well enough for your employees to test new Siri features with, but regular users get… nothing?

    Think about the framework here: Apple has created a testing environment that lets them rapidly develop and collect feedback on AI features. That’s exactly the kind of iterative approach that made ChatGPT and other conversational AI successful. The difference is OpenAI, Anthropic, and Google let millions of users participate in that feedback loop. Apple is keeping it locked to their own employees.

    This feels like a missed opportunity on multiple levels. First, Apple could actually compete in the AI assistant space instead of just licensing someone else’s technology. Second, they’d get the kind of real-world usage data that makes these systems better. And third (this might be the most important part), it would give Apple Intelligence some actual credibility instead of the current situation where Siri still can’t reliably set multiple timers.

    The irony here is that Apple traditionally excels at taking complex technology and making it accessible to regular people. But with AI, they’re taking the opposite approach — building sophisticated tools for internal use while consumers get a watered-down experience that relies on external partnerships.

    Maybe Apple will surprise us and eventually release some version of Veritas publicly. But given their track record with AI announcements (remember when we were supposed to get the “new” Siri by now?), I’m not holding my breath. In the meantime, the rest of us will keep using ChatGPT while Apple employees get to play with what sounds like a genuinely useful AI assistant.

    Sources: The Verge

    Want more than just the daily AI chaos roundup? I write deeper dives and hot takes on my Substack (because apparently I have Thoughts about where this is all heading): https://substack.com/@limitededitionjonathan

  4. 1 day ago

    Gemini Robotics 1.5: Google DeepMind Just Cracked the Code on Agentic Robots

    Look, I know another AI model announcement sounds boring (trust me, I’ve written about 47 of them this month), but Google DeepMind just dropped something that actually made me sit up and pay attention. Their new Gemini Robotics 1.5 isn’t just another incremental upgrade—it’s a completely different approach to making robots that can think, plan, and adapt like actual agents in the real world.

    Here’s what’s wild: instead of trying to cram everything into one massive model (which, let’s be honest, has been the industry’s default approach), DeepMind split embodied intelligence into two specialized models. The ERVLA stack pairs Gemini Robotics-ER 1.5 for high-level reasoning with Gemini Robotics 1.5 for low-level motor control. Think of it like giving a robot both a strategic brain and muscle memory that can actually talk to each other.

    The “embodied reasoning” model (ER) handles the big picture stuff—spatial understanding, planning multiple steps ahead, figuring out if a task is actually working or failing, and even tool use. Meanwhile, the vision-language-action model (VLA) manages the precise hand-eye coordination needed to actually manipulate objects. The genius part? They can transfer skills between completely different robot platforms without starting from scratch.

    What does this look like in practice? These robots can now receive a high-level instruction like “prepare this workspace for the next task” and break it down into concrete steps: assess what’s currently there, determine what needs to move where, grab the right tools, and execute the plan while monitoring progress. If something goes wrong (like a tool slips or an object isn’t where expected), the reasoning model can replan on the fly. (There’s a rough sketch of that plan-act-replan loop at the end of this item.)

    The technical breakthrough here is in the bidirectional communication between the two models. Previous approaches either had rigid, pre-programmed behaviors or tried to learn everything end-to-end (which works great in simulation but falls apart when you meet real-world complexity). This stack lets robots maintain both flexible high-level reasoning and precise low-level control.

    Here’s the framework for understanding why this matters: we’re moving from “task-specific robots” to “contextually intelligent agents.” Instead of programming a robot to do one thing really well, you can give it general capabilities and let it figure out how to apply them to novel situations. That’s the difference between a really good assembly line worker and someone who can walk into any workspace and immediately start being useful.

    The implications are pretty staggering when you think about it. Manufacturing environments that need flexible reconfiguration, household robots that can adapt to different homes and tasks, research assistants in labs that can understand experimental protocols—we’re talking about robots that can actually collaborate with humans rather than just following pre-written scripts.

    DeepMind demonstrated the system working across different robot embodiments, which solves one of the biggest practical problems in robotics: the fact that every robot design requires starting over with training. Now you can develop skills on one platform and transfer them to others, which could dramatically accelerate deployment timelines.

    This feels like one of those moments where we look back and say “that’s when robots stopped being fancy automation and started being actual agents.” The combination of spatial reasoning, dynamic planning, and transferable skills wrapped in a system that can actually explain what it’s doing? That’s not just an incremental improvement—that’s a fundamental shift in what’s possible.

    Read more from MarkTechPost

    Want more than just the daily AI chaos roundup? I write deeper dives and hot takes on my Substack (because apparently I have Thoughts about where this is all heading): https://substack.com/@limitededitionjonathan
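    To make the two-model split concrete, here's a minimal, hypothetical Python sketch of how a reasoning model and a motor-control model might hand off to each other. The class names and methods are illustrative assumptions for the pattern described above, not DeepMind's actual interfaces:

```python
# Hypothetical sketch of a two-model "reason, then act" loop, loosely
# mirroring the ER (planner) + VLA (controller) split described above.
# These classes are stand-ins, not Google DeepMind's real APIs.
from dataclasses import dataclass

@dataclass
class Step:
    description: str  # e.g. "move the wrench to the tool tray"

class ReasoningModel:  # plays the role of the embodied-reasoning (ER) model
    def plan(self, instruction: str, scene: dict) -> list[Step]:
        """Break a high-level instruction into ordered steps."""
        ...

    def assess(self, step: Step, scene: dict) -> bool:
        """Check whether the step actually succeeded, given the new scene."""
        ...

class MotorModel:  # plays the role of the vision-language-action (VLA) model
    def execute(self, step: Step, scene: dict) -> dict:
        """Drive the arm/gripper for one step; return the updated scene."""
        ...

def run_task(instruction: str, scene: dict,
             reasoner: ReasoningModel, controller: MotorModel) -> dict:
    steps = reasoner.plan(instruction, scene)
    while steps:
        step = steps.pop(0)
        scene = controller.execute(step, scene)
        if not reasoner.assess(step, scene):
            # Replan on the fly when something slips or isn't where expected.
            steps = reasoner.plan(instruction, scene)
    return scene
```

    The interesting design choice is that the planner stays in the loop after every step instead of handing the controller a fixed script, which is what lets the system recover when the real world refuses to cooperate.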

  5. 1 day ago

    Holy Shit: 78 Examples Might Be All You Need to Build Autonomous AI Agents

    Look, I know we’re all tired of “revolutionary breakthrough” claims in AI (I write about them daily, trust me), but this one made me do a double-take. A new study is claiming that instead of the massive datasets we’ve been obsessing over, you might only need 78 carefully chosen training examples to build superior autonomous agents. Yeah, seventy-eight. Not 78,000 or 78 million—just 78.

    The research challenges one of our core assumptions about AI development: more data equals better performance. We’ve been in this escalating arms race of dataset sizes, with companies bragging about training on billions of web pages and trillions of tokens. But these researchers are saying “hold up, what if we’re doing this completely backwards?”

    Here’s what’s wild about their approach—they’re focusing on the quality and strategic selection of training examples rather than throwing everything at the wall. Think of it like this: instead of reading every book ever written to become a great writer, you carefully study 78 masterpieces and really understand what makes them work. (Obviously the analogy breaks down because AI training is way more complex, but you get the idea. There’s a rough sketch of the curation idea at the end of this item.)

    The implications here are honestly staggering. If this holds up under scrutiny, we’re looking at a fundamental shift in how we think about AI development. Smaller companies and researchers who can’t afford to scrape the entire internet suddenly have a path to building competitive agents. The environmental impact drops dramatically (no more burning through data centers to process petabytes). And development cycles could shrink from months to weeks or even days.

    Now, before we all lose our minds with excitement—and I’m trying really hard not to here—this is still early-stage research. The devil is always in the details with these studies. What specific tasks were they testing? How does this scale to different domains? What’s the catch that makes this “too good to be true”? (Because there’s always a catch.)

    But even if this only works for certain types of autonomous agents or specific problem domains, it’s a massive development. We’re potentially looking at democratization of AI agent development in a way we haven’t seen before. Instead of needing Google-scale resources, you might be able to build something genuinely useful with a laptop and really smart data curation.

    The broader trend here is fascinating too—we’re seeing efficiency breakthroughs across the board in AI right now. Better architectures, smarter training methods, and now potentially revolutionary approaches to data requirements. It’s like the field is maturing past the “throw more compute at it” phase and into the “work smarter, not harder” era.

    This is exactly the kind of research that could reshape the competitive landscape practically overnight. If you can build competitive agents with 78 examples instead of 78 million, suddenly every startup, research lab, and curious developer becomes a potential player in the autonomous agent space.

    Read more from THE DECODER

    Want more than just the daily AI chaos roundup? I write deeper dives and hot takes on my Substack (because apparently I have Thoughts about where this is all heading): https://substack.com/@limitededitionjonathan
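    To show what "smart data curation" can look like in practice, here's a toy, hypothetical Python sketch of picking a tiny high-quality subset instead of training on everything. The scoring heuristic is purely an illustrative assumption, not the selection method from the study discussed above:

```python
# Hypothetical sketch: select a small, curated training subset instead of
# the whole pool. The scoring heuristic is a made-up stand-in, not the
# method from the paper this item describes.

def score_example(example: dict) -> float:
    """Toy quality score: prefer complete, multi-step, diverse demonstrations."""
    steps = example.get("steps", [])
    succeeded = 1.0 if example.get("outcome") == "success" else 0.0
    tool_diversity = len({s["tool"] for s in steps if "tool" in s})
    return succeeded + 0.5 * len(steps) + tool_diversity

def select_subset(pool: list[dict], k: int = 78) -> list[dict]:
    """Keep only the k best-scoring examples for fine-tuning."""
    return sorted(pool, key=score_example, reverse=True)[:k]
```

    However the real study chose its 78, the shape of the workflow is the same: score, rank, keep a handful, and spend your effort making those few examples genuinely good.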

  6. 1 day ago

    Google’s New Gemini 2.5 Flash-Lite Is Now the Fastest Proprietary AI Model (And 50% More Token-Efficient)

    Look, I know another Google model update sounds like Tuesday (because it basically is at this point), but this one actually deserves attention. Google just dropped an updated Gemini 2.5 Flash and Flash-Lite that’s apparently blazing past everything else in speed benchmarks—and doing it while using half the output tokens.

    The Flash-Lite preview is now officially the fastest proprietary model according to external tests (Google’s being appropriately coy about the specific numbers, but third-party benchmarks don’t lie). What’s wild is they managed this while also making it 50% more token-efficient on outputs. In the world of AI economics, that’s like getting a sports car that also gets better gas mileage.

    Here’s the practical framework for understanding why this matters: Speed and efficiency aren’t just nice-to-haves in AI—they’re the difference between a tool you actually use and one that sits there looking impressive. If you’ve ever waited 30 seconds for a chatbot response and started questioning your life choices, you get it.

    The efficiency gains are particularly interesting (okay, I’m about to nerd out here, but stick with me). When a model uses fewer output tokens to say the same thing, that’s not just cost savings—it’s often a sign of better reasoning. Think of it like the difference between someone who rambles for ten minutes versus someone who gives you the perfect two-sentence answer. The latter usually understands the question better.

    Google’s also rolling out “latest” aliases (gemini-flash-latest and gemini-flash-lite-latest) that automatically point to the newest preview versions. For developers who want to stay on the bleeding edge without manually updating model names, that’s genuinely helpful. Though they’re smart to recommend pinning specific versions for production—nobody wants their app breaking because Tuesday’s model update changed how it handles certain prompts. (There’s a quick code sketch of both approaches at the end of this item.)

    The timing here is telling too. While everyone’s been focused on capability wars (who can write the best poetry or solve the hardest math problems), Google’s doubling down on making AI actually practical. Speed and efficiency improvements like this make AI tools viable for applications where they weren’t before—real-time responses, mobile apps, embedded systems.

    What’s particularly clever is how they’re positioning this as infrastructure improvement rather than just another model announcement. Because that’s what it really is: making the whole stack work better so developers can build things that were previously too slow or expensive to be practical.

    The real test will be seeing what developers build with this. Faster, more efficient models don’t just make existing applications better—they enable entirely new categories of applications that weren’t feasible before. And that’s where things get genuinely exciting.

    Read more from MarkTechPost

    Want more than just the daily AI chaos roundup? I write deeper dives and hot takes on my Substack (because apparently I have Thoughts about where this is all heading): https://substack.com/@limitededitionjonathan
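    If you want to try the alias behavior, here's roughly what it looks like with the google-genai Python SDK. This is a minimal sketch assuming a GEMINI_API_KEY in your environment and that the "-latest" alias is passed like any other model name; check Google's docs for the exact model strings available to you:

```python
# Minimal sketch using the google-genai SDK. Assumes GEMINI_API_KEY is set
# in the environment; the "-latest" alias is passed like any other model id.
from google import genai

client = genai.Client()  # reads GEMINI_API_KEY from the environment

# Bleeding edge: the alias automatically tracks the newest Flash-Lite preview.
resp = client.models.generate_content(
    model="gemini-flash-lite-latest",
    contents="Summarize why output-token efficiency lowers serving cost.",
)
print(resp.text)

# Production: pin an explicit version so a Tuesday model swap can't surprise you.
pinned = client.models.generate_content(
    model="gemini-2.5-flash-lite",
    contents="Same prompt, but against a pinned model version.",
)
print(pinned.text)
```

    The alias-versus-pin split is the whole trade-off in two calls: aliases for experiments where you want the newest thing, pinned versions for anything users depend on.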

  7. 1 day ago

    When Automation Hubris Meets Reality (And Your Ears Pay the Price)

    So, uh… remember last week when my podcast episodes sounded like they were being delivered by a caffeinated robot having an existential crisis? Yeah, that was my bad. Time for some real talk.

    I got supremely clever (narrator voice: he was not clever) and decided to automate my AI news updates with what I thought was a brilliant optimization: brutal character limits. The logic seemed flawless – shorter equals punchier, right? More digestible content for busy people who want their AI news fast and efficient.

    Turns out, I basically turned my podcast into audio haikus. Instead of coherent stories about actual AI breakthroughs, you got these breathless, chopped-up fragments that sounded like I was reading telegrams from 1942. (Stop. OpenAI releases new model. Stop. Very exciting. Stop. Cannot explain why. Stop.)

    The automation was cutting mid-sentence, dropping all context, making everything sound like robotic bullet points instead of, you know, actual human excitement about genuinely cool developments. I was so focused on efficiency that I forgot the whole point: helping people understand WHY these AI developments actually matter. (If you want the boring technical version of the lesson, there’s a small sketch at the end of this item.)

    Here’s the thing about trying to explain quantum computing breakthroughs in tweet-length bursts – it doesn’t work. Context is everything. The story isn’t just “new AI model released.” The story is what it means, why it’s different, and what happens next. All the stuff my overly aggressive character limits were brutally murdering.

    (Look, I’m doing my best here – constantly tweaking, testing, trying to find that sweet spot between efficiency and actually being worth your time. This week’s experiment? Total failure. But hey, at least now we have definitive proof that 30-second AI updates missing half their words are objectively terrible.)

    Going forward, we’re giving these stories room to breathe. Enough space to explain the ‘so what’ instead of just barking facts at you like some malfunctioning tech ticker. Your ears deserve better than my automation hubris, and you’re gonna get it. Thanks for sticking with me while I learned this lesson the hard way. Sometimes the best optimization is just… not optimizing quite so aggressively.

    Want more than just the daily AI chaos roundup? I write deeper dives and hot takes on my Substack (because apparently I have Thoughts about where this is all heading): https://substack.com/@limitededitionjonathan
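    The boring technical version of the lesson: if you must enforce a length budget, cut at sentence boundaries instead of raw character counts. Here's a rough, hypothetical Python sketch of that idea — the regex splitter is a naive stand-in for proper sentence segmentation, not what any real pipeline necessarily uses:

```python
import re

def truncate_at_sentence(text: str, max_chars: int) -> str:
    """Fit text into a character budget without cutting mid-sentence."""
    # Naive splitter: break after ., !, or ? followed by whitespace.
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    kept: list[str] = []
    used = 0
    for sentence in sentences:
        extra = len(sentence) + (1 if kept else 0)  # +1 for the joining space
        if used + extra > max_chars:
            break  # stop before the sentence that would blow the budget
        kept.append(sentence)
        used += extra
    # If even the first sentence doesn't fit, keep it whole anyway:
    # a complete sentence beats a telegram-style fragment.
    return " ".join(kept) if kept else sentences[0]

print(truncate_at_sentence(
    "OpenAI released a new model. It matters because X. Also Y.", 40
))  # -> "OpenAI released a new model."
```

    Same budget-conscious instinct, but the cut lands where a human would actually pause.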

  8. 2 days ago

    Amazon’s Fall Event Could Finally Deliver the AI Assistant We Actually Want

    Look, I know another tech company hardware event sounds about as exciting as watching paint dry (especially when we’ve been buried under a mountain of product launches this month). But Amazon’s fall showcase next Tuesday might actually be worth paying attention to — and not just because Panos Panay is bringing his Microsoft Surface magic to the Echo ecosystem.

    The invite dropped some not-so-subtle hints that scream “we’re finally ready to show you what AI can do in your living room.” Two products sporting Amazon’s iconic blue ring suggest new Echo speakers, while a colorized Kindle logo practically shouts “yes, we fixed the color display issues.” But here’s what has me genuinely intrigued: tiny text mentioning “stroke of a pen” points to a color Kindle Scribe, and more importantly, whispers about Vega OS.

    Here’s the framework for understanding why this matters: Amazon has been quietly building Vega OS as a replacement for Android on their devices. It’s already running on Echo Show 5, Echo Hub displays, and the Echo Spot. If they use this event to announce Vega OS for TVs (which industry reports suggest could happen as soon as this week), we’re looking at Amazon making a major play for independence from Google’s ecosystem while potentially delivering much faster, more responsive smart TV experiences.

    The real excitement, though, is around Alexa Plus. I got a brief hands-on earlier this year, and while it’s still rolling out in early access, the difference between traditional Alexa and this AI-powered version is like comparing a flip phone to an iPhone (okay, maybe not that dramatic, but you get the idea). We’re talking about an assistant that can actually understand context, handle follow-up questions without losing track, and potentially integrate with all these new devices in genuinely useful ways.

    Think about it: a color Kindle Scribe that could work with an AI assistant to help you organize notes, research topics, or even generate study guides. New Echo speakers that don’t just play music but actually understand what you’re trying to accomplish when you walk in the room. Smart TVs running Vega OS that could potentially offer AI-curated content recommendations without the lag and bloat of Android TV.

    Of course, Amazon has a history of launching quirky products that end up in the tech graveyard (RIP Echo Buttons, Echo Wall Clock, and that Alexa microwave that nobody asked for). But under Panay’s leadership, they’ve been taking more focused swings. The 2024 Kindle lineup was genuinely impressive, even if the Colorsoft had some launch hiccups with discoloration issues they had to patch.

    Here’s what I’m watching for: Can Amazon finally deliver an AI ecosystem that feels integrated rather than just a collection of voice-activated gadgets? The pieces are there — better displays, more powerful processing, an AI assistant that might actually be intelligent, and a custom OS that could tie it all together without Google’s strings attached.

    We’ll find out Tuesday if Amazon is ready to make good on the promise of actually smart smart home devices, or if we’re getting another batch of incrementally better gadgets that still can’t figure out why I asked about the weather when I’m clearly about to leave the house.

    Read more from The Verge

    Want more than just the daily AI chaos roundup? I write deeper dives and hot takes on my Substack (because apparently I have Thoughts about where this is all heading): https://substack.com/@limitededitionjonathan
