Unsupervised AI News

Limited Edition Jonathan

AI-generated AI news (yes, really). I got tired of wading through apocalyptic AI headlines to find the actual innovations, so I made this: daily episodes highlighting the breakthroughs, tools, and capabilities that represent real progress—not theoretical threats. It's the AI news I want to hear, and if you're exhausted by doom narratives too, you might like it here. Someone needs to talk about what's working instead of what might kill us all. Short episodes, big developments, zero patience for doom narratives.

Tech stack: n8n, Claude Sonnet 4, Gemini 2.5 Flash, Nano Banana, ElevenLabs, WordPress, a pile of Python, and Seriously Simple Podcasting.

  1. 3 HOURS AGO

    Former Spotify Execs Launch AI Learning Platform That Actually Makes You Smarter

    Here’s a refreshing take in the age of AI anxiety: What if artificial intelligence could actually make us better learners instead of lazy consumers? That’s the bet behind Oboe, a new AI-powered education platform that launched this month from the creators of Anchor (you know, the podcast platform Spotify bought for $150 million).

    The premise is beautifully simple: tell Oboe what you’re curious about, and it’ll craft a personalized course just for you. Want to understand quantum computing? Curious about Renaissance art techniques? Need to wrap your head around cryptocurrency? Oboe’s AI will build you a learning path tailored to your existing knowledge and learning style.

    Nir Zicherman, Oboe’s CEO and co-founder, ended up running Spotify’s audiobooks vertical after the Anchor acquisition, while his co-founder Mike Mignano brings podcast platform expertise to the table. They’re taking a decidedly optimistic stance in their marketing: “Is AI going to make us all stupid? Are we going to forget how to think for ourselves?” The answer, they argue, is a resounding no – if we use AI as a learning partner rather than a replacement for thinking.

    The timing feels right (and necessary). Traditional online courses often suffer from the one-size-fits-all problem – they’re either too basic for some learners or too advanced for others. Oboe’s AI can theoretically adjust on the fly, creating that sweet spot where you’re challenged but not overwhelmed. It’s the kind of personalization that human tutors provide, but accessible to anyone with internet access.

    What’s particularly clever is how this flips the script on AI education fears. Instead of worrying about students using ChatGPT to cheat on essays, we’re looking at AI that actively encourages deeper engagement with material. The platform isn’t doing the learning for you – it’s optimizing the path to help you learn more effectively.

    This represents a broader shift we’re seeing in AI applications: moving beyond simple automation to augmentation that makes humans more capable. The former Spotify team clearly understands engagement (podcasting taught them that), and now they’re applying those lessons to the much trickier challenge of sustained learning.

    Whether Oboe can deliver on this promise remains to be seen (we’re still in early days), but the approach signals something important: the most interesting AI companies aren’t trying to replace human capabilities – they’re trying to amplify them. And in a world drowning in information, that kind of intelligent curation might be exactly what we need.

    Read more from The Verge

    Want more than just the daily AI chaos roundup? I write deeper dives and hot takes on my Substack (because apparently I have Thoughts about where this is all heading): https://substack.com/@limitededitionjonathan
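
    If you're curious what "topic in, personalized course out" looks like mechanically, here's a minimal sketch of the general pattern. To be clear, this is not Oboe's actual system; generate() is a hypothetical wrapper around whatever chat-completion API you'd use, and the JSON shape is made up for illustration.

```python
import json

def build_course_outline(generate, topic, learner_profile):
    """Turn a topic plus a self-described background into a structured learning path.

    `generate` is a hypothetical callable: prompt string in, model text out.
    """
    prompt = (
        "Design a short personalized course. Respond with JSON shaped like "
        '{"modules": [{"title": "...", "goal": "...", "exercise": "..."}]}.\n'
        f"Topic: {topic}\n"
        f"Learner background: {learner_profile}\n"
        "Pitch each module just above the learner's current level."
    )
    return json.loads(generate(prompt))

# Hypothetical usage:
# outline = build_course_outline(generate, "quantum computing",
#                                "software engineer, no physics since high school")
```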

  2. 5 HOURS AGO

    Google’s Gemini sidekick wants to help you git gud at mobile gaming

    Look, I know another AI assistant announcement sounds like we’re stuck in some kind of algorithmic Groundhog Day (because apparently every company needs their own copilot now), but Google’s latest move with Gemini actually makes sense for once. The company just unveiled Play Games Sidekick, which embeds Gemini Live directly into mobile games as an in-game overlay that can see your screen and offer real-time advice.

    Here’s what’s wild: instead of waiting for you to alt-tab out and Google “how to beat level 47” like some kind of caveman, Sidekick lets you just ask Gemini while you’re playing. The AI can literally see what’s happening on your screen (thanks to Gemini’s screen-sharing capabilities), so you can point at stuff and say “this” or “that thing over there” and it’ll know what you mean. During Google’s demo with The Battle of Polytopia, Gemini offered specific strategic advice and even cracked game-specific jokes that were… well, let’s just say the AI’s comedy career isn’t taking off anytime soon.

    Thing is, this actually feels like a natural evolution of how people already use AI for gaming help. Instead of frantically typing questions into ChatGPT while your character dies, you get contextual assistance without breaking flow. It won’t replace those deep-dive strategy guides written by humans who’ve spent 400 hours min-maxing everything, but for quick “wait, what do I do now?” moments, it’s genuinely useful.

    The rollout is refreshingly restrained for a Google AI launch (shocking, I know). Sidekick will only appear “in select games over the coming months” from partners like EA and NetMarble, including titles like Star Wars Galaxy of Heroes and FC Mobile. You can dismiss the overlay entirely if you want, and the Gemini features require you to actively engage—no AI constantly chattering unsolicited advice while you’re trying to concentrate.

    Google’s clearly taking a page from Microsoft’s Gaming Copilot playbook here, but the implementation feels more thoughtful. The overlay includes other useful stuff too: quick screenshot tools, recording shortcuts, achievement tracking, and a direct YouTube live streaming button. It’s positioning itself as your gaming command center that happens to have an AI assistant built in, rather than an AI assistant that awkwardly inserted itself into gaming.

    This ties into Google’s broader push to make Play Store more than just “the place you download apps.” They’re adding a new “You” tab that combines gaming profiles, content recommendations, and deals into a unified hub. Plus, Google Play Games is finally coming out of beta on PC after three years, suggesting the company is serious about creating a proper gaming ecosystem rather than just throwing features at the wall.

    The most encouraging part? Google seems to understand that AI assistance works best when it’s contextual and optional. Sidekick only activates when you want it, provides specific help based on what you’re actually doing, and stays out of your way otherwise. After years of companies cramming AI into every possible interface whether it makes sense or not, this feels like someone finally asked “what would actually be helpful?”

    Sources: The Verge and Engadget

    Want more than just the daily AI chaos roundup? I write deeper dives and hot takes on my Substack (because apparently I have Thoughts about where this is all heading): https://substack.com/@limitededitionjonathan
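
    Sidekick itself lives inside the Play Games overlay, but the underlying "screenshot plus question" pattern is easy to picture with the public Gemini API. Here's a minimal sketch assuming the google-generativeai Python SDK; the model name, file path, and prompt are placeholders, and this is obviously not Google's actual Sidekick code.

```python
# Illustrative only: ask a vision-capable Gemini model for advice about a game screenshot.
# Assumes `pip install google-generativeai pillow` and a valid API key.
import google.generativeai as genai
from PIL import Image

genai.configure(api_key="YOUR_API_KEY")            # placeholder key
model = genai.GenerativeModel("gemini-1.5-flash")  # placeholder; any vision-capable model

screenshot = Image.open("polytopia_turn_12.png")   # hypothetical screen grab
response = model.generate_content([
    screenshot,
    "This is my current turn. Should I prioritize expanding to the coastal "
    "city or upgrading my warriors, and why?",
])
print(response.text)
```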

  3. 9 HOURS AGO

    AI Models Are Training on Retracted Papers (And That’s Actually Fixable)

    Here’s a problem that sounds worse than it probably is, but still deserves our attention: AI chatbots are answering scientific questions using information from papers that have been retracted for being, well, wrong. MIT Technology Review recently reported on studies showing that some AI models rely on flawed research when you ask them science questions. The discovery came from researchers who basically played “gotcha” with various AI systems, asking questions that could be answered using retracted papers to see what happened.

    Thing is, this isn’t quite the research integrity apocalypse it might sound like. Most scientific databases don’t just delete retracted papers entirely (they need to stay up so people can see what was wrong), and training datasets for AI models often scrape from these same sources. So when GPT-4 or Claude gives you information that matches a retracted study, it’s not necessarily because the AI is maliciously spreading misinformation—it’s because the training process doesn’t distinguish between “published” and “published but later found to be problematic.”

    The researchers found this by testing questions where retracted papers would give different answers than the current scientific consensus. For example, if a retracted paper claimed X causes Y, but subsequent research showed it doesn’t, they’d ask the AI about that relationship and see which answer emerged.

    What’s actually encouraging here is that this is a solvable technical problem, not some fundamental flaw in how AI works. Companies could filter their training datasets to exclude retracted papers, or weight them differently. Some are already starting to do this—it just requires being intentional about it instead of treating all text as equally valid.

    The bigger issue this reveals is about how we think about AI as a research tool. If you’re a scientist (or anyone, really) using ChatGPT to help with research questions, you probably shouldn’t be taking its answers as gospel anyway. But knowing that retracted papers might be lurking in there is useful context for calibrating how much you trust these systems for scientific information.

    Look, every new technology goes through this phase where we discover its weird failure modes and then figure out how to patch them. Email spam, web misinformation, social media manipulation—we develop better filters and verification systems over time. This feels like the same pattern playing out with AI and scientific literature.

    The encouraging part? Researchers are actively hunting for these problems, and AI companies are generally responsive to fixing them when they’re identified. That’s the kind of iterative improvement process that actually works.

    Read more from MIT Technology Review

    Want more than just the daily AI chaos roundup? I write deeper dives and hot takes on my Substack (because apparently I have Thoughts about where this is all heading): https://substack.com/@limitededitionjonathan
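
    For a concrete sense of what "filter the training set" could mean in practice, here's a minimal sketch: drop (or flag) any training document whose DOI appears in a retraction list before training. The file layout and column names are assumptions for illustration, not any particular lab's actual pipeline.

```python
import csv

def load_retracted_dois(path):
    """Load a set of retracted DOIs from a CSV export (e.g., a Retraction Watch-style dump).
    Assumes a 'doi' column; adjust to whatever export format you actually have."""
    with open(path, newline="", encoding="utf-8") as f:
        return {row["doi"].strip().lower() for row in csv.DictReader(f) if row.get("doi")}

def filter_corpus(documents, retracted_dois):
    """Split a corpus of {'doi': ..., 'text': ...} records into kept vs. dropped.
    A real pipeline might down-weight instead of dropping outright."""
    kept, dropped = [], []
    for doc in documents:
        doi = (doc.get("doi") or "").strip().lower()
        (dropped if doi in retracted_dois else kept).append(doc)
    return kept, dropped

# Hypothetical usage:
# retracted = load_retracted_dois("retractions.csv")
# clean, removed = filter_corpus(corpus, retracted)
```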

  4. 18 HOURS AGO

    Doom = Dollars: How a Middle School Dropout Built a $50M AI Apocalypse Empire

    There’s a guy named Eliezer Yudkowsky who dropped out of school in seventh grade, never graduated high school, holds zero degrees in computer science or AI or literally anything, and has somehow convinced people to give him over $50 million to “save humanity” from artificial intelligence. His organization, MIRI, has been operating for over 20 years and has published almost nothing in peer-reviewed journals that mainstream AI researchers actually cite.

    And now? He’s got a book coming out called “If Anyone Builds It, Everyone Dies: The Case Against Superintelligent AI.” It’s the kind of thing you’d expect to see at a gas station checkout counter between “Bigfoot Stole My Wife” and “Aliens Control Our Government.” But this isn’t some tabloid nonsense—this is a guy who testified before the U.S. Senate and got listed in TIME Magazine’s “100 Most Influential People in AI.”

    Thing is, this isn’t really about AI safety at all. This is about something much more interesting (and profitable): how incompetence can monetize anxiety when it’s packaged with the right mix of intellectual-sounding jargon and apocalyptic certainty.

    I first encountered Yudkowsky and his colleague Nate Soares on Sam Harris’s podcast—episode 434, “Can We Survive AI?” And honestly? I’m still processing my disappointment that Harris, someone whose epistemological rigor I’ve respected for years, would completely abandon his usual critical thinking standards to platform these guys. Harris typically grills guests on their credentials and evidence. But somehow, when it comes to AI doom, he just… didn’t. It was like watching a skeptic suddenly decide to interview psychics and take their arguments seriously.

    The Eternal Grifter’s Formula

    Yudkowsky didn’t invent this playbook. He’s just the latest in a long line of self-appointed prophets who figured out that selling fear pays way better than actually solving problems. Every generation has them. In the 1980s, it was evangelical leaders convincing America that Satan worshippers were running daycare centers. In the 2000s, it was alternative medicine gurus selling supplements to protect you from “Big Pharma.” Now it’s AI doom. The pattern is always the same:

    Step 1: Find or Create an Existential Threat. Target something most people don’t understand but sounds plausibly dangerous. Mike Warnke figured this out in the 1970s when he started claiming he was a former Satanic high priest who escaped a massive underground cult network. Yudkowsky just swapped out demons for digital superintelligence.

    Step 2: Position Yourself as the Brave Truth-Teller. Discredit actual experts. “Mainstream scientists are bought and paid for!” or “Academic AI researchers don’t understand the real risks!” You don’t need actual credentials—you just need to convince people that traditional credentials are part of the conspiracy.

    Step 3: Build the Money Machine. The sales funnel is always the same: free content to build an audience, then monetize their anxiety through increasingly expensive products and services. Warnke’s progression: free church talks → $500 speaking fees → $50 books → $1,500 weekend seminars → ongoing “consultation.” At his peak, he was pulling in $1-2 million annually just by telling scary stories about his fictional cult days. Yudkowsky’s version? Free blog posts on LessWrong → Harry Potter fanfiction (seriously, 661,000 words of wish-fulfillment fiction featuring a “brilliant” autodidact protagonist) to recruit followers → MIRI donations → speaking fees → book deals. Get thousands of tech workers emotionally invested in your worldview, then pivot them to your AI doom theories.

    Following the Money Trail

    What’s wild is that nearly all the most extreme “AI doom” funding comes from cryptocurrency speculation profits. Sam Bankman-Fried’s embezzled customer funds didn’t just disappear into personal real estate—they propped up the entire “effective altruism” ecosystem that amplifies AI doom messaging. When your movement’s biggest financial backer turns out to be running a multi-billion dollar fraud operation, maybe that says something about your due diligence standards.

    Then there are the anonymous donations. That $15.6 million MakerDAO donation in 2021? It came right after MIRI’s “Death with Dignity” messaging hit peak hysteria. The incentive structure is obvious: more paranoid predictions equal more funding from anxious crypto millionaires.

    Meanwhile, MIRI’s academic footprint is basically nonexistent. Open Philanthropy’s brutal 2016 review found MIRI’s total research output comparable to “an intelligent but unsupervised graduate student over 1-3 years.” In 2018, MIRI implemented a “nondisclosed-by-default” research policy, which is academic speak for “we don’t publish anything because we don’t want people to see how little we actually produce.”

    The Strategic Retreat

    The smoking gun? MIRI’s 2024 mission update explicitly abandons research for “advocacy.” They admit their research approach “largely failed.” Translation: “We can’t do real research, so now we’re just going to lobby politicians instead.” When your $50+ million AI safety organization admits its research approach failed and pivots to pure advocacy, that tells you everything about whether they were ever serious about solving technical problems. Real research institutes have peer review, university affiliations, government grants, and mainstream citations. MIRI’s model? Private crypto funding, secret research, self-published papers, and circular citations.

    The beautiful asymmetry that makes this business model so profitable: competent people are constrained by facts, evidence, and professional ethics. Incompetent people are free to sell whatever nightmare pays best. As Bertrand Russell nailed it: “The whole problem with the world is that fools and fanatics are always so certain of themselves, and wiser people so full of doubts.”

    In ten years, “If Anyone Builds It, Everyone Dies” will sit in discount bins next to books about Y2K computer disasters and 2012 Mayan calendar prophecies. The grifters will find new fears to monetize, and the cycle will begin again. But hey, at least we’ll have documented how a middle school dropout built a $50 million empire by convincing smart people that robots are coming to kill us all.

    The antidote to fear merchants has always been the same: demand evidence, check credentials, follow the money, and remember that people selling protection from the apocalypse have a financial incentive to keep you believing the apocalypse is coming.

    Attribution: Originally published on Infinite Possibilities Daily

    Want more than just the daily AI chaos roundup? I write deeper dives and hot takes on my Substack (because apparently I have Thoughts about where this is all heading): https://substack.com/@limitededitionjonathan

  5. 1 DAY AGO

    Record Labels Claim AI Music Generator Suno Illegally Ripped Songs From YouTube

    Well, this escalated quickly. The RIAA has just dropped an amended lawsuit against AI music generator Suno that’s way more specific than your typical “you used our copyrighted content” complaint. They’re now alleging that Suno actively “stream ripped” tracks from YouTube — basically running code to circumvent YouTube’s encryption and turn streaming content into downloadable files for training data.

    The updated complaint (filed September 19th) gets technical about how Suno allegedly bypassed YouTube’s “rolling cipher” encryption, which violates both YouTube’s terms of service and the anti-circumvention provisions of the DMCA. This isn’t just about fair use anymore — it’s about whether Suno broke digital locks to get the content in the first place.

    Here’s what makes this interesting from a tech perspective: Section 1201 of the DMCA is the provision that makes it illegal to circumvent “technological measures that effectively control access” to copyrighted works. We’ve seen this law used for everything from phone unlocking to McDonald’s ice cream machine repairs (seriously), but this case applies it closer to its original purpose — preventing actual piracy tools.

    The thing is, Suno has been pretty vague about where its training data came from, which is fairly standard in the AI world but looks suspicious when you’re being sued. They’ve leaned on fair use arguments, claiming that training on copyrighted material is legally protected. At least one court has backed this position recently, but it’s not settled law by any means.

    What’s particularly damaging here is that the RIAA isn’t just saying “you used our music” — they’re saying “you broke our locks to steal our music.” That’s much harder to defend under fair use. The complaint now points to research from the ICMP publishers group suggesting Suno sourced its training data through circumvention techniques.

    The stakes are real: the RIAA wants $2,500 for each act of circumvention plus up to $150,000 per work infringed. When you’re talking about “decades worth of the world’s most popular sound recordings” feeding into AI models, those numbers add up fast.

    This case matters beyond just Suno because it’s testing where the line gets drawn between legitimate AI training and outright piracy. Fair use might protect some AI training practices, but it definitely doesn’t protect breaking encryption to access content. The industry is watching to see whether “we needed the data for AI training” becomes the new “we were just backing up our CDs.”

    Source: The Verge

    Want more than just the daily AI chaos roundup? I write deeper dives and hot takes on my Substack (because apparently I have Thoughts about where this is all heading): https://substack.com/@limitededitionjonathan

  6. 1 DAY AGO

    DeepSeek Just Made $100 Million AI Training Look Embarrassingly Expensive

    Here’s a number that should make every AI lab executive sweat: DeepSeek claims their new R1 reasoning model cost just $5.5 million to train. Not $55 million. Not $500 million. Five and a half million dollars.

    The Chinese AI company just dropped a technical paper detailing R1’s development costs, and if these numbers are real (and that’s a big if), we might be witnessing the kind of efficiency breakthrough that completely reshuffles the AI power dynamics. For context, most frontier models are rumored to cost anywhere from $100 million to over $1 billion to train. OpenAI’s o1, R1’s closest competitor, likely cost hundreds of millions.

    Thing is, R1 isn’t some stripped-down model either. It’s performing competitively with OpenAI’s o1 on reasoning benchmarks, handling complex mathematical problems and coding challenges that would stump most systems. Yet DeepSeek says they pulled this off with roughly 1/20th the typical budget.

    The secret sauce appears to be a combination of more efficient training techniques and what DeepSeek calls “knowledge distillation” – essentially teaching a smaller model to mimic the reasoning patterns of larger ones. They’re also leveraging reinforcement learning in smarter ways, focusing the model’s learning on the types of reasoning that matter most rather than trying to teach it everything at once. (Look, I know we’ve heard “more efficient training” promises before, but these aren’t just claims – they’re backed by detailed technical documentation and actual model performance.)

    If DeepSeek’s numbers hold up under scrutiny, this could trigger another wave of AI investor panic similar to what we saw in January when the company’s previous models performed surprisingly well at a fraction of expected costs. The narrative that you need Google or OpenAI-sized budgets to compete in frontier AI suddenly looks a lot less convincing.

    The broader implications are wild to think about. Democratized AI development, more players in the game, faster iteration cycles when each experiment doesn’t cost a small country’s GDP. We could be looking at an acceleration in AI progress driven not by bigger budgets, but by smarter approaches.

    Of course, there are still questions. DeepSeek’s cost calculations might not include all the infrastructure and research overhead that Western companies factor in. And training cost is just one piece – there’s inference, scaling, safety testing, and all the other expenses that add up. But even accounting for those factors, this level of efficiency is remarkable.

    For now, keep watching the benchmarks. If other labs can’t match R1’s performance-per-dollar ratio, we might be entering an era where scrappy efficiency beats brute-force spending. And that would be a fascinating plot twist in the AI development story.

    Read more from ZDNet

    Want more than just the daily AI chaos roundup? I write deeper dives and hot takes on my Substack (because apparently I have Thoughts about where this is all heading): https://substack.com/@limitededitionjonathan
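
    If "knowledge distillation" is new to you, the textbook version of the idea fits in a few lines of PyTorch: the student model is trained to match the teacher's softened output distribution rather than just the hard labels. This is a generic sketch of the technique as described in Hinton et al. (2015), not DeepSeek's actual training recipe.

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Classic knowledge-distillation objective: KL divergence between the
    temperature-softened teacher and student distributions."""
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    log_soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    # Scale by T^2 so gradients stay comparable across temperatures.
    return F.kl_div(log_soft_student, soft_teacher, reduction="batchmean") * temperature ** 2

# Hypothetical usage inside a training step:
# loss = distillation_loss(student(batch), teacher(batch).detach())
# loss.backward()
```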

  7. 1 DAY AGO

    This medical startup is letting AI run the whole damn appointment (and apparently it’s working)

    Look, I know what you’re thinking. Another AI medical story where we’re supposed to get excited about chatbots playing doctor. But hold up—this one’s actually wild, and not in the “this could go horribly wrong” way I usually cover these.

    A Boston-based startup called Delfina is doing something that sounds completely nuts on paper: they’re letting large language models conduct entire medical appointments. Not just scheduling or note-taking (yawn), but the full diagnostic conversation with patients. And here’s the kicker—early data suggests it might actually be working better than traditional appointments.

    The setup is deceptively simple: patients call in, get connected to an AI system that conducts a thorough 30-minute interview about their symptoms, medical history, and concerns. The AI then generates a comprehensive report for human doctors, who review everything and make the final diagnostic decisions. Think of it as AI doing the lengthy detective work that doctors often don’t have time for.

    What’s fascinating (and honestly a bit surprising) is how patients are responding. Instead of the rushed 10-minute appointments we’ve all suffered through, they’re getting unhurried conversations where they can actually explain what’s wrong. The AI asks follow-up questions, explores symptoms thoroughly, and doesn’t glance at a clock or reach for the door handle.

    Now, before you start planning to replace your GP with ChatGPT (please don’t), there are important guardrails here. Human doctors are still making all the actual medical decisions—the AI is essentially acting as a really thorough medical interviewer and note-taker. It’s more like having a perfectly patient medical student who never gets tired of asking “tell me more about that symptom.”

    The timing couldn’t be better for something like this. Healthcare is facing massive staffing shortages, appointment wait times are getting ridiculous, and doctors are burning out from administrative overhead. If AI can handle the time-intensive information gathering while doctors focus on diagnosis and treatment decisions, that’s actually a pretty sensible division of labor.

    Thing is, this approach only works if the AI is genuinely good at medical conversations—not just spitting out WebMD-style generic responses. The real test will be whether these AI-conducted interviews consistently capture the subtle details that human doctors might catch, or if important nuances get lost in translation.

    We’re still early in this experiment, but it represents something more interesting than the usual “AI will revolutionize healthcare someday” promises. This is AI being deployed right now, in real medical practices, handling actual patient interactions. Whether it scales successfully could reshape how we think about the doctor-patient relationship entirely (or at least make getting an appointment slightly less impossible).

    Read more from Grace Huckins at MIT Technology Review

    Want more than just the daily AI chaos roundup? I write deeper dives and hot takes on my Substack (because apparently I have Thoughts about where this is all heading): https://substack.com/@limitededitionjonathan
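
    To make the "AI interviews, clinician decides" division of labor concrete, here's a deliberately simplified sketch of that loop. This is not Delfina's implementation; ask_llm() and get_patient_reply() are hypothetical callables standing in for a chat-completion API and a speech-to-text channel, and the prompts are illustrative only.

```python
# Sketch of an LLM-driven intake interview that ends in a report for a human clinician.

SYSTEM_PROMPT = (
    "You are a medical intake interviewer. Ask one clear follow-up question at a time "
    "about symptoms, history, and concerns. Never give a diagnosis or treatment advice. "
    "Reply with only the word DONE when the interview is complete."
)

def run_intake_interview(ask_llm, get_patient_reply, max_turns=20):
    transcript = []
    question = "What brings you in today?"
    for _ in range(max_turns):
        answer = get_patient_reply(question)           # e.g., transcribed from the phone call
        transcript.append({"q": question, "a": answer})
        question = ask_llm(SYSTEM_PROMPT, transcript)  # model proposes the next follow-up
        if question.strip().upper() == "DONE":
            break
    # Summarize for the clinician, who makes every actual medical decision.
    report = ask_llm(
        "Summarize this intake transcript for a physician: chief complaint, history, "
        "red flags, and open questions. Do not diagnose.",
        transcript,
    )
    return transcript, report
```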

  8. 1 DAY AGO

    The Universal Tool Calling Protocol Just Made AI Agents Way More Dangerous (In the Best Way)

    Look, I know another protocol announcement sounds about as exciting as watching paint dry (we get it, there’s always a new standard), but the Universal Tool Calling Protocol (UTCP) actually solves one of those infuriating problems that’s been holding back AI agents from being genuinely useful.

    Here’s the thing: right now, when you want an AI agent to actually *do* something beyond chat—like booking a flight, updating a spreadsheet, or controlling your smart home—it’s an absolute nightmare of wrapper servers, custom integrations, and the kind of developer friction that makes you question your life choices. Every tool needs its own special handshake, every integration requires yet another middleman server, and execution speeds crawl because everything has to bounce through multiple layers of translation.

    UTCP cuts through all that nonsense. It’s a lightweight protocol that lets AI agents find and call tools directly (no wrapper servers required), with built-in security and scalability that doesn’t make you nervous about what you’re unleashing on your infrastructure.

    Think of it like this: instead of having to learn a different language for every single appliance in your house, UTCP is like having a universal remote that actually works. The AI agent can discover what tools are available, understand how to use them, and execute commands without needing a translator for every interaction.

    The security angle is particularly clever—rather than trusting some random wrapper server to handle authentication and permissions, UTCP builds those protections directly into the tool calling process. It’s the difference between handing your house keys to a stranger who promises to lock up versus having a smart lock that recognizes you directly.

    What makes this genuinely exciting (beyond the obvious “fewer things to break” benefit) is what it enables. We’re talking about AI agents that can seamlessly move between different tools and services without the current patchwork of custom integrations. A research agent could pull data from multiple APIs, analyze it, generate a report, and then automatically update your project management system—all without developers having to build and maintain a bunch of middleware.

    The protocol is already being tested by early adopters who report significantly faster execution times and much simpler deployment processes. As one developer put it, “I went from spending three days building integrations to having everything working in an hour.”

    Sure, it’s still early days, and we’ll need to see how it scales in the wild (there’s always that one edge case that breaks everything). But UTCP represents the kind of boring-but-crucial infrastructure work that actually moves the needle on making AI agents practical for real-world applications.

    The broader implication? We might finally be approaching the moment where AI agents stop being impressive demos and start being genuinely useful tools that don’t require a computer science degree to deploy.

    Read more from MarkTechPost

    Want more than just the daily AI chaos roundup? I write deeper dives and hot takes on my Substack (because apparently I have Thoughts about where this is all heading): https://substack.com/@limitededitionjonathan
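
    To picture the "no wrapper server" idea, here's a UTCP-flavored sketch: a tool description tells the agent how to hit the tool's native HTTP endpoint, and the agent calls it directly. The field names and endpoint below are made up for illustration and do not follow the actual UTCP manual schema—check the spec for that—but the shape of the pattern is the point.

```python
import requests

# Illustrative, hypothetical tool description: everything the agent needs to call
# the tool's own API directly, with no proxy/wrapper server in the middle.
weather_tool = {
    "name": "get_forecast",
    "transport": "http",
    "method": "GET",
    "url": "https://api.example.com/v1/forecast",   # hypothetical endpoint
    "auth": {"type": "api_key", "header": "X-Api-Key"},
    "inputs": {"city": "string", "days": "integer"},
}

def call_tool(tool, args, api_key):
    """Agent-side direct call: build the request from the description and send it."""
    headers = {tool["auth"]["header"]: api_key}
    resp = requests.request(tool["method"], tool["url"],
                            params=args, headers=headers, timeout=10)
    resp.raise_for_status()
    return resp.json()

# Hypothetical usage:
# forecast = call_tool(weather_tool, {"city": "Boston", "days": 3}, api_key="...")
```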

