Unsupervised AI News

Limited Edition Jonathan

AI-generated AI news (yes, really). I got tired of wading through apocalyptic AI headlines to find the actual innovations, so I made this: daily episodes highlighting the breakthroughs, tools, and capabilities that represent real progress, not theoretical threats. It's the AI news I want to hear, and if you're exhausted by doom narratives too, you might like it here. Because someone needs to talk about what's working instead of what might kill us all: short episodes, big developments, zero patience for doom narratives. Tech stack: n8n, Claude Sonnet 4, Gemini 2.5 Flash, Nano Banana, ElevenLabs, WordPress, a pile of Python, and Seriously Simple Podcasting.

  1. 14 hours ago

    Microsoft Is Turning Foxconn’s Failed Factory Into the ‘World’s Most Powerful’ AI Data Center

    Look, I know another data center announcement sounds about as exciting as watching concrete dry, but holy shit, Microsoft just announced something that made me do a double-take. They’re converting Foxconn’s infamous Wisconsin boondoggle—you know, that massive LCD factory that never really happened—into what they claim is the “world’s most powerful AI data center.” The Fairwater facility (coming online early 2026) is absolutely massive: 1.2 million square feet spread across three buildings, housing “hundreds of thousands” of Nvidia’s latest GB200 GPUs. To put that in perspective, Microsoft says they’ve connected these chips with enough fiber to circle Earth 4.5 times. That’s not just scale—that’s the kind of infrastructure that makes your home Wi-Fi router weep in shame. Here’s what’s wild about the technical setup: this interconnected GPU cluster is supposedly ten times more powerful than the fastest supercomputer. We’re talking about a $3.3 billion investment that could fundamentally change how AI models get trained. The sheer compute density they’re achieving here (and the networking required to make it work) represents a legitimate leap in what’s technically possible for large-scale AI training. But there’s also a beautiful irony here. Remember when Foxconn promised Wisconsin a revolutionary LCD factory in 2017? By 2018, that project was already being called a “boondoggle” because it never delivered the promised manufacturing jobs or economic impact. Now Microsoft is essentially saying: “Hold our beer, we’ll show you what revolutionary actually looks like.” The timing matters too. While everyone’s focused on which AI model can write better poems, Microsoft is quietly building the infrastructure that could enable the next generation of AI capabilities. This isn’t just about training bigger models—it’s about training them faster, more efficiently, and at a scale that could unlock entirely new approaches to AI development. 
Microsoft is also making a big deal about the environmental angle (because they have to). They’re using a closed-loop cooling system that only needs to be filled once, eliminating water waste through evaporation. Whether that’s enough to offset the absolutely staggering energy consumption remains to be seen, but at least they’re acknowledging that “hundreds of thousands of GPUs” might have a slight environmental impact. Thing is, this facility represents more than just raw compute power. It’s Microsoft betting that the future of AI isn’t just about smarter algorithms—it’s about having the infrastructure to train and run models that are currently impossible. Multiple other Fairwater data centers are already under construction across the US, suggesting this is just the beginning of a much larger infrastructure buildout. The real question isn’t whether this will be powerful (it will be), but what becomes possible when you have this much coordinated compute in one place. We might be looking at the kind of infrastructure that enables AI breakthroughs we can’t even imagine yet. Read more from The Verge. Want more than just the daily AI chaos roundup? I write deeper dives and hot takes on my Substack (because apparently I have Thoughts about where this is all heading): https://substack.com/@limitededitionjonathan

  2. 16 hours ago

    Reddit Wants Google to Send Users Back After Feeding AI Their Content

    Reddit is back at the negotiating table with Google, and this time they’re not just asking for more money – they want users. The platform is reportedly pushing for a new kind of AI licensing deal that goes beyond the typical “pay us for our data” arrangement into something more like “pay us AND help us not die in the process.” According to Bloomberg, Reddit executives are eyeing a much bigger role in Google’s AI ecosystem, one year after their initial $60 million annual data deal. The key twist? They want Google to actively funnel users back to Reddit’s forums after serving up AI-generated answers sourced from Reddit posts. (Because what good is getting paid for your data if the AI kills your traffic in the process?) Here’s the thing: Reddit is in a uniquely strong position to make these demands. Their content has become absolutely critical to AI training – we’re talking about real people having genuine conversations, sorted by topic, and ranked by actual humans rather than algorithms. Data suggests Reddit is the most-cited domain for AI tools like Perplexity and Google’s AI Overviews. Adding “reddit” to Google searches has basically become the internet’s unofficial “show me real answers” hack. Reddit is also reportedly considering dynamic pricing for future deals, where payments would fluctuate based on how useful or important their content is to AI-generated responses. It’s like surge pricing, but for your forum posts about the best pizza in Brooklyn or why your houseplant keeps dying. This negotiation highlights the central paradox of AI licensing deals: platforms like Reddit hold treasure troves of data that tech companies desperately need, but those same AI models are strangling the traffic and engagement that made the data valuable in the first place. Users get their Reddit-sourced answer from Google’s AI and never actually visit Reddit – which means fewer posts, less engagement, and ultimately less valuable data for future deals. 
The push for traffic-sharing arrangements shows content platforms are waking up to this dynamic. They’re realizing that selling their data is only sustainable if they can maintain the communities that generate new data. It’s not enough to get paid once for existing content; you need to preserve the ecosystem that creates tomorrow’s content. What’s particularly smart about Reddit’s approach is recognizing they have leverage. In an internet increasingly filled with AI-generated content and SEO spam, genuine human discussions become more valuable, not less. Reddit’s voting system, topic organization, and community moderation create exactly the kind of high-quality, contextual data that AI companies need but can’t easily replicate. Whether Google agrees to these terms remains to be seen, but this negotiation could set a precedent for how content platforms and AI companies structure future partnerships. Because at some point, everyone’s going to need to figure out how to feed the AI without killing the golden goose that lays the data eggs. Read more from Bloomberg and The Verge.
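The "surge pricing" idea is simple enough to sketch in code. Here's a minimal, entirely hypothetical Python illustration (the function name, the multiplier, and the formula are all invented for this example; nothing here reflects the actual deal terms) of a licensing fee that scales with how often a platform's content gets cited in AI-generated answers:

```python
# Hypothetical sketch of dynamic pricing for licensed content: the payout
# grows with the fraction of AI answers that cite the platform's posts.
# All names and rates below are invented for illustration.

def dynamic_license_fee(base_fee: float, citation_share: float,
                        multiplier: float = 2.0) -> float:
    """Scale a base licensing fee by the share of AI answers (0.0-1.0)
    that cite the platform's content."""
    if not 0.0 <= citation_share <= 1.0:
        raise ValueError("citation_share must be between 0 and 1")
    return base_fee * (1 + multiplier * citation_share)

# With a 30% citation share, a $60M base fee becomes $96M.
fee = dynamic_license_fee(60_000_000, 0.30)
```

The design point is that the fee floor stays fixed (you still get paid for existing content) while the upside tracks how central your data remains to the AI's answers — which is exactly the incentive alignment Reddit reportedly wants.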

  3. 20 hours ago

    Meta Just Quietly Revolutionized Smart Glasses (And Nobody Saw It Coming)

    Look, I know we’ve been hearing about “the future of smart glasses” for what feels like a decade now (Google Glass, anyone?), but Meta just dropped something that actually made me stop scrolling and pay attention. They didn’t just announce new smart glasses at Connect 2025—they announced four different pairs, each solving a completely different problem we didn’t even realize needed solving. The headline grabber is the Ray-Ban Display glasses ($799), which finally put a full-color screen in the right lens without looking like you strapped a smartphone to your face. But here’s what’s wild: you control everything with a neural wristband. Scroll through messages, take video calls, get walking directions, all with hand gestures while the display stays completely private to you. Six hours of battery life, 30 with the charging case. (I’m already imagining the heated debates about wristband etiquette in meetings.) Thing is, Meta didn’t stop there. The second-gen Ray-Ban Meta glasses doubled the battery life to eight hours and now shoot 3K video at 60fps—basically turning your face into a surprisingly capable content creation rig. Then there’s the Oakley Meta Vanguard ($499) for athletes, with IP67 water resistance and direct integration with Garmin and Strava. Ask Meta AI about your fitness stats mid-workout without pulling out your phone. (“Hey Meta, what’s my heart rate?” while you’re dying on a bike climb might be peak 2025 behavior.) But the technical achievement that’s got me genuinely excited? The “conversation focus” feature rolling out to all Meta glasses. It uses AI to amplify the voice of whoever’s speaking to you in noisy environments—essentially giving you superhuman hearing in crowded restaurants or conferences. As one developer put it after trying it: “It’s like having subtitles for real life, but for your ears.” The broader play here is brilliant (and slightly terrifying in its ambition). 
Meta isn’t just making smart glasses—they’re building the infrastructure for ambient computing. The Ray-Ban Display glasses can preview photos before you take them, the Vanguard can read your biometrics during a workout, and all of them can process what you’re looking at in real-time. We’re looking at the early stages of always-on AI assistants that actually understand your context. Yeah, the privacy implications are massive (every participant sees when AI features are active, but still), and yes, $799 for glasses you’ll inevitably lose or break is painful. But for the first time, smart glasses feel less like a tech demo and more like… well, just better glasses that happen to be smart. The Ray-Ban Display models hit stores September 30th, which means we’re about to find out if regular humans are actually ready for screens on their faces. Here’s what I’m watching: if Meta can nail the social dynamics (when is it rude to record? how obvious should AI assistance be?), they might have just leapfrogged everyone else in wearable computing. The future where your glasses know more about your day than your phone? It’s not coming—it just arrived at Best Buy. Sources: The Verge.

  4. 1 day ago

    The Real ChatGPT Users Just Dropped Some Plot Twists

    Holy shit, we finally have actual data on who’s using ChatGPT and what they’re doing with it – and it’s not what anyone expected. OpenAI just released what they’re calling the largest study yet on ChatGPT usage patterns, and honestly? The results are fascinating in ways that make me rethink everything I thought I knew about AI adoption. (Yes, I’m about to nerd out over usage statistics. Bear with me.) First shocker: Most people aren’t using ChatGPT for work. Like, at all. In June 2025, 73% of ChatGPT conversations were completely non-work related – that’s up from 53% just a year earlier. We’re talking about a massive shift toward personal use cases that nobody saw coming. Here’s what’s even wilder: The gender gap has completely flipped. While men dominated early ChatGPT usage, women now make up 52% of users (based on first names in the data), jumping from just 37% in January 2024. That’s not a gradual shift – that’s a demographic avalanche. And the age thing? Yeah, younger users are still the core (46% of messages), but here’s where it gets interesting: People aren’t really asking ChatGPT to DO things for them. They’re asking it for advice and information. Around half of all messages are “hey, what do you think about…” rather than “please write this thing for me.” The practical breakdown is telling too. At work, writing dominates (40% of conversations), which makes total sense – everyone’s trying to polish their emails and reports. But for personal use? Writing has actually dropped to third place. People are using ChatGPT more like a really smart friend who knows everything: asking for practical guidance, seeking information, getting help with random life stuff. There are even gender differences in usage patterns (because of course there are). Users with feminine names gravitate toward writing and practical guidance, while users with masculine names are more likely to seek technical help or use multimedia features. 
It’s like ChatGPT has become this weird mirror of how different people naturally approach problem-solving. Look, I know usage statistics sound boring on paper, but this data is actually revolutionary. It shows AI isn’t replacing human work as much as it’s becoming this weird hybrid of search engine, therapist, and knowledgeable friend. The fact that non-work usage is growing this fast suggests people are finding genuinely personal value in these tools – not just productivity hacks, but actual life enhancement. This is the kind of organic adoption that you can’t manufacture with marketing campaigns. When 73% of usage is personal and the user base is rapidly diversifying, that’s not hype – that’s infrastructure becoming invisible. (Which is exactly what happens when technology actually works.) Read more from The Verge.
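For the numerically inclined, the headline shifts above are just percentage-point deltas. A trivial Python sketch (the helper name is mine; the numbers are the ones reported in the study):

```python
# Recompute the headline shifts from OpenAI's usage study as
# percentage-point changes between two reported shares.

def share_shift(before: float, after: float) -> float:
    """Percentage-point change between two shares of usage."""
    return round(after - before, 1)

non_work_shift = share_shift(53.0, 73.0)   # non-work usage: +20 points in a year
gender_shift = share_shift(37.0, 52.0)     # female-named users: +15 points since Jan 2024
```

Worth keeping in mind: these are shares of a rapidly growing total, so a rising non-work percentage doesn't mean work usage shrank in absolute terms — both can grow while the mix shifts.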

  5. 1 day ago

    YouTube Just Made Every Creator Their Own Analytics Expert (and Algorithm Whisperer)

    Look, I know another “AI tools for creators” announcement sounds about as exciting as watching paint dry, but YouTube just dropped something that actually makes me sit up and pay attention. They’ve basically built creators their own personal data scientist—and it might fundamentally change how content gets made. Meet Ask Studio, YouTube’s new AI chatbot that lives inside YouTube Studio and can answer questions like “Why did people stop watching at the 3-minute mark?” or “What are my viewers saying about this video?” Instead of creators drowning in analytics dashboards (because let’s be honest, who has time to decode all that data?), they can just… ask. In plain English. But here’s where it gets interesting: Ask Studio doesn’t just summarize your stats. It pulls data from across your entire channel—long-form videos, Shorts, comments, the works—and can suggest actual video ideas based on what your audience is asking for. One creator told YouTube they’re using it to mine their comment sections for content ideas, then asking for title suggestions based on those insights. It’s like having a really smart intern who’s read every single one of your comments and remembers everything. (The tool can’t spy on your competitors yet, so you can’t ask “What’s MrBeast doing that I’m not?” But honestly, that’s probably for the best.) YouTube’s also rolling out thumbnail and title A/B testing that goes beyond their previous version. Now you can test thumbnail-title combinations together and let the algorithm pick the winner based on watch time. Ashley Alexander, a lifestyle creator who got early access, says she’s using the thumbnail testing for every single video now because “no matter how good the video is, the thumbnail and title is what gets people to even see it.” This is actually a pretty significant shift when you think about it. For years, creators have been playing detective, trying to reverse-engineer what the algorithm wants through trial and error. 
Remember all those YouTube tutorials about “the perfect thumbnail face” or “keywords that get views”? Now the platform itself is essentially saying “Here, let us help you optimize for our own system.” The cynical take? YouTube is directly shaping what gets made, potentially homogenizing content as everyone uses the same optimization tools. The optimistic take? Creators get to spend less time being data analysts and more time being, well, creative. YouTube’s also expanding their auto-dubbing feature to include lip-sync matching (your lips will actually move in sync with the dubbed language, which is wild), plus adding collaborative posting so multiple creators can share metrics on joint videos. Thing is, this raises fascinating questions about authenticity versus optimization. If everyone’s using AI to generate ideas, test thumbnails, and optimize titles, who’s really standing out? But talking to creators like Alexander, it seems like they’re treating these as starting points, not replacement creativity. The AI might suggest the video topic, but you still have to make something people actually want to watch. The tools are rolling out gradually, starting with Ask Studio for select creators. And honestly? This feels like YouTube acknowledging that the creator economy has gotten complex enough that people need actual help navigating it—not just more dashboards to decipher. Read more from The Verge.
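Under the hood, "let the algorithm pick the winner based on watch time" reduces to comparing variants on a metric. Here's a deliberately naive Python sketch (the data structure and numbers are invented; YouTube's real test is presumably far more statistically careful about sample size and significance):

```python
# Naive A/B winner selection: pick the thumbnail-title variant whose
# viewing sessions have the highest mean watch time. Illustrative only.

def pick_winner(variants: dict[str, list[float]]) -> str:
    """Return the variant name with the highest mean watch time."""
    def mean(xs: list[float]) -> float:
        return sum(xs) / len(xs)
    return max(variants, key=lambda name: mean(variants[name]))

results = {
    "thumb_a": [210.0, 180.0, 240.0],  # seconds watched per session
    "thumb_b": [300.0, 260.0, 280.0],
}
winner = pick_winner(results)
```

A real system would also need to decide when to stop the test — declaring a winner from a handful of sessions, as this sketch happily does, is exactly the mistake a platform-run test can avoid.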

  6. 1 day ago

    Business Insider Tells Journalists They Can Use AI to Write First Drafts (And Won’t Tell Readers)

    Look, I know we’ve all been waiting for this moment—the first major news outlet to officially say “yeah, just let ChatGPT write your stories.” Well, it’s happened (and honestly, I’m surprised it took this long). Business Insider just told its journalists they can use AI to create first drafts of articles, and here’s the kicker: they probably won’t tell readers when they do it. According to reporting from Status, the policy was rolled out in an internal memo last week that treats AI “like any other tool” for research, image editing, and yes—writing initial drafts. Thing is, this feels like crossing a pretty significant line in journalism. The memo’s FAQ reportedly addresses the elephant in the room directly: “Can I use AI to write first drafts?” Answer: “Yes, but you must make sure your final work is yours.” (Translation: the byline is still your responsibility, even if the first thousand words came from a chatbot.) What’s wild is the transparency approach—or lack thereof. While entirely AI-generated content would get disclaimers, articles where AI helped with the initial draft apparently won’t. The reasoning seems to be that if a human journalist rewrites and takes responsibility for the final product, it’s still “their” work. Here’s the framework for understanding why this matters: journalism has always used tools to help with the process—spell check, research databases, even voice transcription software. But those tools don’t generate the actual ideas, structure, or initial language. This is different. We’re talking about AI potentially providing the skeleton (or more) that journalists then flesh out. The irony wasn’t lost on me that Business Insider knows firsthand how this can go wrong—they published AI-generated stories from a supposed freelancer this summer without realizing it. Now they’re essentially institutionalizing a version of that process, just with more human oversight. 
To their credit, Business Insider has been aggressively embracing AI across their business. They appointed an AI newsroom lead, built AI search tools, and parent company Axel Springer has licensing deals with OpenAI and Microsoft. This policy feels like the logical next step in that strategy. The real question isn’t whether journalists will use AI (they already are, whether officially sanctioned or not), but whether readers deserve to know when they do. Business Insider seems to be betting that as long as a human takes final responsibility, the process doesn’t matter. We’re about to find out if their audience agrees. Read more from The Verge.

  7. 2 days ago

    Microsoft Just Quietly Admitted Claude Beats GPT-5 for Coding

    Look, I know another AI model announcement sounds boring, but this one’s actually wild. Microsoft just rolled out automatic model selection for Visual Studio Code, and guess what? For paid GitHub Copilot users, it will “primarily rely on Claude Sonnet 4.” Not GPT-5. Not their own partner’s latest model. Claude. This is Microsoft—the company that’s invested $13 billion in OpenAI—basically saying “yeah, Anthropic’s model is better for the thing developers actually care about most: writing code.” The quiet part is now loud. Here’s the framework for understanding why this matters: Visual Studio Code is probably the most popular code editor on the planet (used by something like 70% of developers). When Microsoft chooses which AI model to default to for millions of coding sessions daily, that’s not a casual decision. That’s a data-driven admission about which model actually performs. The really interesting bit? Sources familiar with Microsoft’s developer plans tell The Verge that the company has been “instructing its own developers to use Claude Sonnet 4 in recent months.” So Microsoft’s own engineers were already jumping ship from GPT-5 for their day-to-day work. Think about the awkwardness here. Microsoft has to maintain its public partnership with OpenAI while privately acknowledging that Anthropic is eating their lunch in one of the most important AI use cases. It’s like Apple partnering with Google for search while quietly admitting Bing is better (which, thankfully for everyone, has never happened). For developers, this is actually great news. The auto-selection feature will pick between Claude Sonnet 4, GPT-5, GPT-5 mini and other models for “optimal performance,” meaning you get the best tool for the job without having to think about it. Free users get the automatic switching, paid users get Claude as the primary choice. But the broader implication is huge: we’re seeing the first major cracks in the OpenAI-Microsoft dominance narrative. 
When your biggest partner starts defaulting to your competitor for their flagship developer tool, that’s not just a product decision—that’s a market signal. The coding AI race is far from over, and apparently, it’s not going the way everyone expected. Turns out the company that talks less about AGI and more about safety might have just built the better coding assistant. Who would have thought? Read more from The Verge.
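The plan-aware default described above is easy to picture as a tiny routing function. This Python sketch is purely illustrative: the model names echo the article, but Microsoft's actual selection logic is not public, and the function and identifiers here are my invention.

```python
# Hypothetical sketch of plan-aware default model selection, loosely
# mirroring the reported VS Code behavior: paid plans primarily get
# Claude Sonnet 4, everyone else gets automatic fallback.

def pick_model(plan: str, available: list[str]) -> str:
    """Prefer Claude Sonnet 4 for paid plans when it's available;
    otherwise fall back to the first available model."""
    preferred = "claude-sonnet-4" if plan == "paid" else None
    if preferred and preferred in available:
        return preferred
    return available[0]

models = ["gpt-5", "gpt-5-mini", "claude-sonnet-4"]
```

The interesting design question is what "optimal performance" means in the real router — latency, benchmark scores, cost, or some blend — and that's precisely the part Microsoft hasn't disclosed.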

  8. 2 days ago

    USA Today’s AI Chatbot Is Either Brilliant or Completely Unhinged (Probably Both)

    Look, I know what you’re thinking: “Another media company launches an AI chatbot, how groundbreaking.” But USA Today’s new DeeperDive tool is actually doing something interesting here (and slightly unhinged, which makes it even better). Instead of just slapping ChatGPT into their website and calling it a day, USA Today built a chatbot that actively converses with readers about their stories. Think of it as that friend who read the entire article while you skimmed the headline, except this friend has access to the entire internet and never gets tired of explaining why cryptocurrency regulations matter to your daily life. Here’s what’s wild: DeeperDive doesn’t just answer questions about articles—it proactively engages readers in discussions about the content. According to reports, the tool can break down complex stories, provide additional context, and even challenge readers’ assumptions (politely, one assumes). It’s like having a really well-informed debate partner who’s read everything and remembers it all. The timing here is fascinating. While news organizations are getting sued left and right over AI companies scraping their content (looking at you, Google AI Overviews lawsuit), USA Today is essentially saying “Fine, we’ll build our own AI and use it to make our content MORE valuable, not less.” It’s a “can’t beat them, join them but do it better” approach that actually makes sense. Thing is, this could represent a fundamental shift in how we consume news. Instead of reading an article and moving on, DeeperDive wants to turn every story into a learning experience. Imagine reading about, say, a new climate policy and having an AI immediately available to explain the economic implications, historical context, and potential counterarguments. That’s not just news consumption—that’s news education. The real test will be whether people actually want this level of engagement with their news (spoiler: some absolutely will, others will run screaming). 
But if USA Today can pull this off—creating an AI that enhances rather than replaces traditional journalism—they might just have figured out how to make AI work FOR media companies instead of against them. Early reports suggest the tool is still finding its voice (aren’t we all), but the concept is solid. We’re potentially looking at the future of informed citizenship: AI that doesn’t just give you information, but helps you understand why it matters and what you should do with it. Read more from Will Knight at WIRED.
