Unsupervised Ai News

Limited Edition Jonathan

AI-generated AI news (yes, really). I got tired of wading through apocalyptic AI headlines to find the actual innovations, so I made this: daily episodes highlighting the breakthroughs, tools, and capabilities that represent real progress—not theoretical threats. It's the AI news I want to hear, and if you're exhausted by doom narratives too, you might like it here. Short episodes, big developments, zero patience for doom narratives. Tech stack: n8n, Claude Sonnet 4, Gemini 2.5 Flash, Nano Banana, ElevenLabs, WordPress, a pile of Python, and Seriously Simple Podcasting.

  1. 50 MINUTES AGO

    The Real ChatGPT Users Just Dropped Some Plot Twists

    Holy shit, we finally have actual data on who’s using ChatGPT and what they’re doing with it – and it’s not what anyone expected. OpenAI just released what they’re calling the largest study yet on ChatGPT usage patterns, and honestly? The results are fascinating in ways that make me rethink everything I thought I knew about AI adoption. (Yes, I’m about to nerd out over usage statistics. Bear with me.)

    First shocker: Most people aren’t using ChatGPT for work. Like, at all. In June 2025, 73% of ChatGPT conversations were completely non-work related – that’s up from 53% just a year earlier. We’re talking about a massive shift toward personal use cases that nobody saw coming.

    Here’s what’s even wilder: The gender gap has completely flipped. While men dominated early ChatGPT usage, women now make up 52% of users (based on first names in the data), jumping from just 37% in January 2024. That’s not a gradual shift – that’s a demographic avalanche.

    And the age thing? Yeah, younger users are still the core (46% of messages), but here’s where it gets interesting: People aren’t really asking ChatGPT to DO things for them. They’re asking it for advice and information. Around half of all messages are “hey, what do you think about…” rather than “please write this thing for me.”

    The practical breakdown is telling too. At work, writing dominates (40% of conversations), which makes total sense – everyone’s trying to polish their emails and reports. But for personal use? Writing has actually dropped to third place. People are using ChatGPT more like a really smart friend who knows everything: asking for practical guidance, seeking information, getting help with random life stuff.

    There are even gender differences in usage patterns (because of course there are). Users with feminine names gravitate toward writing and practical guidance, while users with masculine names are more likely to seek technical help or use multimedia features. It’s like ChatGPT has become this weird mirror of how different people naturally approach problem-solving.

    Look, I know usage statistics sound boring on paper, but this data is actually revolutionary. It shows AI isn’t replacing human work as much as it’s becoming this weird hybrid of search engine, therapist, and knowledgeable friend. The fact that non-work usage is growing this fast suggests people are finding genuinely personal value in these tools – not just productivity hacks, but actual life enhancement.

    This is the kind of organic adoption that you can’t manufacture with marketing campaigns. When 73% of usage is personal and the user base is rapidly diversifying, that’s not hype – that’s infrastructure becoming invisible. (Which is exactly what happens when technology actually works.)

    Read more from The Verge

    Want more than just the daily AI chaos roundup? I write deeper dives and hot takes on my Substack (because apparently I have Thoughts about where this is all heading): https://substack.com/@limitededitionjonathan
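    The headline swings are easy to sanity-check. A minimal Python sketch (the figures are the ones cited above from the study; the labels and structure are mine for illustration):

```python
# Percentage-point swings from the OpenAI usage study, as cited above.
# Each entry maps a metric to (earlier share, later share), in percent.
stats = {
    "non-work share of conversations": (53, 73),  # mid-2024 -> June 2025
    "female share of users": (37, 52),            # Jan 2024 -> 2025
}

for name, (before, after) in stats.items():
    # Report the absolute percentage-point change, signed.
    print(f"{name}: {before}% -> {after}% ({after - before:+d} pts)")
```

Twenty points of non-work drift and a fifteen-point gender swing in roughly a year is the kind of shift that shows up in product decisions, not just press releases.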

  2. 2 HOURS AGO

    YouTube Just Made Every Creator Their Own Analytics Expert (and Algorithm Whisperer)

    Look, I know another “AI tools for creators” announcement sounds about as exciting as watching paint dry, but YouTube just dropped something that actually makes me sit up and pay attention. They’ve basically built creators their own personal data scientist—and it might fundamentally change how content gets made.

    Meet Ask Studio, YouTube’s new AI chatbot that lives inside YouTube Studio and can answer questions like “Why did people stop watching at the 3-minute mark?” or “What are my viewers saying about this video?” Instead of creators drowning in analytics dashboards (because let’s be honest, who has time to decode all that data?), they can just… ask. In plain English.

    But here’s where it gets interesting: Ask Studio doesn’t just summarize your stats. It pulls data from across your entire channel—long-form videos, Shorts, comments, the works—and can suggest actual video ideas based on what your audience is asking for. One creator told YouTube they’re using it to mine their comment sections for content ideas, then asking for title suggestions based on those insights. It’s like having a really smart intern who’s read every single one of your comments and remembers everything. (The tool can’t spy on your competitors yet, so you can’t ask “What’s MrBeast doing that I’m not?” But honestly, that’s probably for the best.)

    YouTube’s also rolling out thumbnail and title A/B testing that goes beyond their previous version. Now you can test thumbnail-title combinations together and let the algorithm pick the winner based on watch time. Ashley Alexander, a lifestyle creator who got early access, says she’s using the thumbnail testing for every single video now because “no matter how good the video is, the thumbnail and title is what gets people to even see it.”

    This is actually a pretty significant shift when you think about it. For years, creators have been playing detective, trying to reverse-engineer what the algorithm wants through trial and error. Remember all those YouTube tutorials about “the perfect thumbnail face” or “keywords that get views”? Now the platform itself is essentially saying “Here, let us help you optimize for our own system.”

    The cynical take? YouTube is directly shaping what gets made, potentially homogenizing content as everyone uses the same optimization tools. The optimistic take? Creators get to spend less time being data analysts and more time being, well, creative.

    YouTube’s also expanding their auto-dubbing feature to include lip-sync matching (your lips will actually move in sync with the dubbed language, which is wild), plus adding collaborative posting so multiple creators can share metrics on joint videos.

    Thing is, this raises fascinating questions about authenticity versus optimization. If everyone’s using AI to generate ideas, test thumbnails, and optimize titles, who’s really standing out? But talking to creators like Alexander, it seems like they’re treating these as starting points, not replacement creativity. The AI might suggest the video topic, but you still have to make something people actually want to watch.

    The tools are rolling out gradually, starting with Ask Studio for select creators. And honestly? This feels like YouTube acknowledging that the creator economy has gotten complex enough that people need actual help navigating it—not just more dashboards to decipher.

    Read more from The Verge

    Want more than just the daily AI chaos roundup? I write deeper dives and hot takes on my Substack (because apparently I have Thoughts about where this is all heading): https://substack.com/@limitededitionjonathan
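    Picking a winner from thumbnail-title combinations by watch time is conceptually simple. Here’s a minimal, hypothetical Python sketch of the core idea—not YouTube’s actual algorithm, which presumably weighs far more signals than a naive average:

```python
from statistics import mean

def pick_winner(variants):
    """Pick the thumbnail/title combo with the highest average watch time.

    `variants` maps a variant name to a list of per-view watch times
    (in seconds). Returns (winner_name, average_watch_time).
    """
    averages = {name: mean(times) for name, times in variants.items()}
    winner = max(averages, key=averages.get)
    return winner, averages[winner]

# Hypothetical per-view watch times for two thumbnail/title combos.
results = {
    "close_up_face + curiosity_title": [182, 240, 150, 210],
    "wide_shot + keyword_title": [95, 130, 160, 120],
}

winner, avg = pick_winner(results)
print(winner, avg)
```

In practice a platform would also need enough views per variant for the difference to be statistically meaningful, which is presumably why the test runs for a while before a winner is locked in.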

  3. 6 HOURS AGO

    Business Insider Tells Journalists They Can Use AI to Write First Drafts (And Won’t Tell Readers)

    Look, I know we’ve all been waiting for this moment—the first major news outlet to officially say “yeah, just let ChatGPT write your stories.” Well, it’s happened (and honestly, I’m surprised it took this long). Business Insider just told its journalists they can use AI to create first drafts of articles, and here’s the kicker: they probably won’t tell readers when they do it.

    According to reporting from Status, the policy was rolled out in an internal memo last week that treats AI “like any other tool” for research, image editing, and yes—writing initial drafts. Thing is, this feels like crossing a pretty significant line in journalism.

    The memo’s FAQ reportedly addresses the elephant in the room directly: “Can I use AI to write first drafts?” Answer: “Yes, but you must make sure your final work is yours.” (Translation: the byline is still your responsibility, even if the first thousand words came from a chatbot.)

    What’s wild is the transparency approach—or lack thereof. While entirely AI-generated content would get disclaimers, articles where AI helped with the initial draft apparently won’t. The reasoning seems to be that if a human journalist rewrites and takes responsibility for the final product, it’s still “their” work.

    Here’s the framework for understanding why this matters: journalism has always used tools to help with the process—spell check, research databases, even voice transcription software. But those tools don’t generate the actual ideas, structure, or initial language. This is different. We’re talking about AI potentially providing the skeleton (or more) that journalists then flesh out.

    The irony wasn’t lost on me that Business Insider knows firsthand how this can go wrong—they published AI-generated stories from a supposed freelancer this summer without realizing it. Now they’re essentially institutionalizing a version of that process, just with more human oversight.

    To their credit, Business Insider has been aggressively embracing AI across their business. They appointed an AI newsroom lead, built AI search tools, and parent company Axel Springer has licensing deals with OpenAI and Microsoft. This policy feels like the logical next step in that strategy.

    The real question isn’t whether journalists will use AI (they already are, whether officially sanctioned or not), but whether readers deserve to know when they do. Business Insider seems to be betting that as long as a human takes final responsibility, the process doesn’t matter. We’re about to find out if their audience agrees.

    Read more from The Verge

    Want more than just the daily AI chaos roundup? I write deeper dives and hot takes on my Substack (because apparently I have Thoughts about where this is all heading): https://substack.com/@limitededitionjonathan

  4. 17 HOURS AGO

    Microsoft Just Quietly Admitted Claude Beats GPT-5 for Coding

    Look, I know another AI model announcement sounds boring, but this one’s actually wild. Microsoft just rolled out automatic model selection for Visual Studio Code, and guess what? For paid GitHub Copilot users, it will “primarily rely on Claude Sonnet 4.” Not GPT-5. Not their own partner’s latest model. Claude.

    This is Microsoft—the company that’s invested $13 billion in OpenAI—basically saying “yeah, Anthropic’s model is better for the thing developers actually care about most: writing code.” The quiet part is now loud.

    Here’s the framework for understanding why this matters: Visual Studio Code is probably the most popular code editor on the planet (used by something like 70% of developers). When Microsoft chooses which AI model to default to for millions of coding sessions daily, that’s not a casual decision. That’s a data-driven admission about which model actually performs.

    The really interesting bit? Sources familiar with Microsoft’s developer plans tell The Verge that the company has been “instructing its own developers to use Claude Sonnet 4 in recent months.” So Microsoft’s own engineers were already jumping ship from GPT-5 for their day-to-day work.

    Think about the awkwardness here. Microsoft has to maintain its public partnership with OpenAI while privately acknowledging that Anthropic is eating their lunch in one of the most important AI use cases. It’s like Apple partnering with Google for search while quietly admitting Bing is better (which, thankfully for everyone, has never happened).

    For developers, this is actually great news. The auto-selection feature will pick between Claude Sonnet 4, GPT-5, GPT-5 mini, and other models for “optimal performance,” meaning you get the best tool for the job without having to think about it. Free users get the automatic switching, paid users get Claude as the primary choice.

    But the broader implication is huge: we’re seeing the first major cracks in the OpenAI-Microsoft dominance narrative. When your biggest partner starts defaulting to your competitor for their flagship developer tool, that’s not just a product decision—that’s a market signal.

    The coding AI race is far from over, and apparently, it’s not going the way everyone expected. Turns out the company that talks less about AGI and more about safety might have just built the better coding assistant. Who would have thought?

    Read more from The Verge

    Want more than just the daily AI chaos roundup? I write deeper dives and hot takes on my Substack (because apparently I have Thoughts about where this is all heading): https://substack.com/@limitededitionjonathan
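    To make “automatic model selection” concrete, here’s a toy router in Python. The preference order, tier behavior, and model names are my assumptions for illustration—this is not Microsoft’s actual selection logic, just the general shape of the idea:

```python
def select_model(task_type, paid_user, availability):
    """Toy router: pick a model by task type, user tier, and availability.

    `availability` maps model names to whether they are currently up.
    Paid users get the premium primary choice; free users fall back
    to the rest of the preference list.
    """
    # Assumed preference order per task; purely illustrative.
    preferences = {
        "coding": ["claude-sonnet-4", "gpt-5", "gpt-5-mini"],
        "chat": ["gpt-5", "claude-sonnet-4", "gpt-5-mini"],
    }
    ranked = preferences.get(task_type, ["gpt-5-mini"])
    if not paid_user:
        # Free tier: skip the premium primary choice if alternatives exist.
        ranked = ranked[1:] or ranked
    for model in ranked:
        if availability.get(model, False):
            return model
    return "gpt-5-mini"  # last-resort fallback

up = {"claude-sonnet-4": True, "gpt-5": True, "gpt-5-mini": True}
print(select_model("coding", paid_user=True, availability=up))   # claude-sonnet-4
print(select_model("coding", paid_user=False, availability=up))  # gpt-5
```

The interesting design question is the one the article surfaces: whatever the real routing logic is, the default at the top of that coding preference list is a statement about measured performance.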

  5. 1 DAY AGO

    USA Today’s AI Chatbot Is Either Brilliant or Completely Unhinged (Probably Both)

    Look, I know what you’re thinking: “Another media company launches an AI chatbot, how groundbreaking.” But USA Today’s new DeeperDive tool is actually doing something interesting here (and slightly unhinged, which makes it even better).

    Instead of just slapping ChatGPT into their website and calling it a day, USA Today built a chatbot that actively converses with readers about their stories. Think of it as that friend who read the entire article while you skimmed the headline, except this friend has access to the entire internet and never gets tired of explaining why cryptocurrency regulations matter to your daily life.

    Here’s what’s wild: DeeperDive doesn’t just answer questions about articles—it proactively engages readers in discussions about the content. According to reports, the tool can break down complex stories, provide additional context, and even challenge readers’ assumptions (politely, one assumes). It’s like having a really well-informed debate partner who’s read everything and remembers it all.

    The timing here is fascinating. While news organizations are getting sued left and right over AI companies scraping their content (looking at you, Google AI Overviews lawsuit), USA Today is essentially saying “Fine, we’ll build our own AI and use it to make our content MORE valuable, not less.” It’s a “can’t beat them, join them but do it better” approach that actually makes sense.

    Thing is, this could represent a fundamental shift in how we consume news. Instead of reading an article and moving on, DeeperDive wants to turn every story into a learning experience. Imagine reading about, say, a new climate policy and having an AI immediately available to explain the economic implications, historical context, and potential counterarguments. That’s not just news consumption—that’s news education.

    The real test will be whether people actually want this level of engagement with their news (spoiler: some absolutely will, others will run screaming). But if USA Today can pull this off—creating an AI that enhances rather than replaces traditional journalism—they might just have figured out how to make AI work FOR media companies instead of against them.

    Early reports suggest the tool is still finding its voice (aren’t we all), but the concept is solid. We’re potentially looking at the future of informed citizenship: AI that doesn’t just give you information, but helps you understand why it matters and what you should do with it.

    Read more from Will Knight at WIRED

    Want more than just the daily AI chaos roundup? I write deeper dives and hot takes on my Substack (because apparently I have Thoughts about where this is all heading): https://substack.com/@limitededitionjonathan

  6. 1 DAY AGO

    Microsoft Just Made AI Actually Useful for Everyone (Finally)

    Look, I know another Microsoft AI announcement sounds like watching paint dry, but holy shit—they just did something that actually matters. Starting today, every Microsoft 365 business user gets Copilot Chat built right into Word, Excel, PowerPoint, Outlook, and OneNote. For free. No additional license required.

    Here’s what’s wild: this isn’t some watered-down demo version. We’re talking about AI that can rewrite your documents, analyze spreadsheets, help create PowerPoint slides, and understand the context of whatever file you’re working on. It’s like having that impossibly helpful colleague who actually knows what you’re doing (and doesn’t judge your 47 versions of “Final_Report_ACTUALLY_FINAL_v3.docx”).

    “Copilot Chat is secure AI chat grounded in the web—and now, it’s available in the Microsoft 365 apps,” explains Seth Patton, general manager of Microsoft 365 Copilot product marketing. “It’s content aware, meaning it quickly understands what you’re working on, tailoring answers to the file you have open.”

    Now, before you cancel that $30-per-month Microsoft 365 Copilot subscription (patience, grasshopper), the premium version still has serious advantages. The paid tier isn’t limited to single documents—it can reason across your entire work data universe, gets priority access during peak times, and includes fancy features like file uploads and image generation. Think of the free version as “AI that helps with the document you’re staring at” versus “AI that knows your entire digital work life.”

    The timing here is fascinating (and probably not coincidental). Microsoft previously tried bundling AI features into consumer Office plans earlier this year, but they jacked up subscription prices at the same time—classic move. This time? No price increases for businesses. They’re also prepping to bundle their sales, service, and finance Copilots into the main subscription this October, potentially making the premium tier more attractive for companies already deep in the Microsoft ecosystem.

    What makes this genuinely exciting isn’t just the “free AI in Office” angle—it’s that Microsoft is finally making AI feel like a natural part of the workflow rather than a separate thing you have to remember to use. The sidebar integration means you’re not context-switching between apps or losing your train of thought. You’re just working, and the AI is… there, ready to help when you need it.

    This feels like the moment AI goes from “cool tech demo” to “tool I actually use every day.” Sure, it won’t replace human creativity or strategic thinking (yet), but for the daily grind of document drafting, data analysis, and presentation polishing? Game changer.

    Read more from The Verge

    Want more than just the daily AI chaos roundup? I write deeper dives and hot takes on my Substack (because apparently I have Thoughts about where this is all heading): https://substack.com/@limitededitionjonathan

  7. 2 DAYS AGO

    Google’s Nano Banana Image Editor Is Having Its Viral Moment (And Yes, Everyone’s Making Figurines)

    Look, I know another AI model announcement sounds boring, but Google’s Nano Banana image editor is actually breaking through the noise in a way that matters. The Gemini app just added 23 million users in two weeks, and people have transformed 500 million images with it. That’s not hype—that’s actual adoption.

    Here’s what’s wild: Gemini is now the #1 app in the iPhone App Store in the US, UK, Canada, France, Australia, Germany, and Italy. It literally knocked ChatGPT down to second place (which, honestly, feels like a seismic shift after ChatGPT dominated for so long).

    The runaway hit? People are turning themselves into 3D figurines. I’m talking hyperrealistic desktop collectibles complete with packaging boxes and design wireframes on computer screens behind them. It sounds ridiculous until you see the results—they actually look like miniature versions of real people, not the uncanny valley nightmares you’d expect.

    Thing is, this isn’t just about novelty (though the figurine trend is everywhere). Josh Woodward, Google’s VP for Gemini and Google Labs, had to implement “temporary limits” because demand got so extreme. “It’s a full-on stampede,” he said, with the team “doing heroics to keep the system up and running.” India apparently “found” the image editor and broke their servers.

    What makes Nano Banana different from other image editing tools is speed and simplicity. No waiting around for results like with ChatGPT’s image tools. You drop in a photo, tell it what you want changed (redecorate your house, give yourself a ’60s beehive, put a tutu on your chihuahua), and it actually delivers something usable.

    Here’s the framework for understanding why this matters: most AI image editing tools either take forever, completely change your facial features into something creepy, or just ignore your prompt entirely. Nano Banana seems to have solved the “still looks like you” problem that’s plagued other tools. As The Verge notes, the results are “still quite recognizably me” rather than disturbing facsimiles.

    The real significance isn’t the figurines going viral (though that’s fun)—it’s that Google finally has an AI tool people actually want to use daily. Not for work tasks or productivity theater, but because it’s genuinely entertaining and delivers on its promises quickly. That’s how you build the kind of engagement that translates to platform dominance.

    Sources: The Verge

    Want more than just the daily AI chaos roundup? I write deeper dives and hot takes on my Substack (because apparently I have Thoughts about where this is all heading): https://substack.com/@limitededitionjonathan

  8. 2 DAYS AGO

    Google’s Gemini 2.0 Flash Thinking Just Dropped and It’s Actually Showing Its Work

    Okay, look—I know another AI model announcement sounds about as exciting as watching paint dry (and trust me, I’ve covered approximately 847 of these this month), but Google just released something genuinely interesting with Gemini 2.0 Flash Thinking. Instead of just spitting out answers like every other model, this one actually shows you its reasoning process in real time.

    Here’s what’s wild: you can literally watch the model think through problems step by step. Ask it a complex math problem or logical puzzle, and instead of getting a mysterious final answer, you see the entire thought process unfold—the false starts, the corrections, the “wait, let me reconsider this” moments. It’s like having a study buddy who thinks out loud (except this one doesn’t steal your snacks).

    The technical breakthrough here is in what researchers call “chain of thought” reasoning, but made visible. Traditional models do this internal reasoning too, but it’s hidden behind the scenes. Gemini 2.0 Flash Thinking exposes that process, which has some pretty massive implications for trust and verification. When an AI tells you something, you can actually see how it got there.

    Multiple sources confirm this isn’t just a gimmick—early testing shows significantly improved accuracy on complex reasoning tasks. One developer testing the system noted they “didn’t sleep for three days” exploring how the visible reasoning could change debugging and validation workflows. (Honestly, same energy I have when any AI tool actually works as advertised.)

    Think of it like the difference between a calculator that just shows “42” versus one that shows “(7 × 6) = 42.” Except instead of simple arithmetic, we’re talking about legal analysis, code debugging, scientific reasoning, and medical diagnosis support. The transparency isn’t just nice-to-have—it’s potentially game-changing for high-stakes applications where you need to verify the AI’s logic.

    The model is available through Google AI Studio right now, which means developers can start building with it immediately (no waitlist limbo, thank god). Early reports suggest it’s particularly strong at mathematical reasoning, logical puzzles, and multi-step problem solving—basically the stuff that traditionally trips up language models.

    Here’s the framework for understanding why this matters: AI adoption has been held back partly by the “black box” problem. How do you trust a system when you can’t see its reasoning? This approach doesn’t solve everything (the model can still be wrong, just transparently wrong), but it’s a significant step toward AI systems that can actually explain themselves in ways humans can evaluate.

    What we’re seeing here is Google making a direct play for enterprise and professional users who need accountability in their AI tools. When the reasoning is visible, it becomes much easier to spot where the model goes off track and course-correct. That’s huge for adoption in fields like healthcare, legal work, and financial analysis where “trust me, bro” isn’t an acceptable explanation.

    Sources: The Verge and Ars Technica

    Want more than just the daily AI chaos roundup? I write deeper dives and hot takes on my Substack (because apparently I have Thoughts about where this is all heading): https://substack.com/@limitededitionjonathan
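    The calculator analogy translates directly into code. Here’s a toy Python sketch of “visible reasoning”: a solver that returns its step-by-step trace alongside the answer. This is a stand-in for the concept, not Gemini’s implementation—the point is that the caller can inspect every intermediate step instead of receiving only a final value:

```python
def solve_with_trace(expression):
    """Evaluate a space-separated arithmetic expression left to right,
    recording each step. Returns (trace, result) so the reasoning is
    inspectable rather than hidden.
    """
    tokens = expression.split()
    result = float(tokens[0])
    trace = [f"start with {result:g}"]
    ops = {"+": lambda a, b: a + b, "-": lambda a, b: a - b,
           "*": lambda a, b: a * b, "/": lambda a, b: a / b}
    # Walk operator/operand pairs, logging each intermediate value.
    for op, raw in zip(tokens[1::2], tokens[2::2]):
        operand = float(raw)
        new = ops[op](result, operand)
        trace.append(f"{result:g} {op} {operand:g} = {new:g}")
        result = new
    return trace, result

trace, answer = solve_with_trace("7 * 6 + 3")
for step in trace:
    print(step)
print("answer:", answer)
```

When the trace is wrong, you can see exactly which step went wrong—which is the whole argument for exposing chain-of-thought in high-stakes settings.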

About

