The AI Optimist

The AI Optimist with Declan Dunn - AI People First

The AI Optimist cuts through the usual AI noise, connecting creators and tech to work together instead of fighting. I'm a Creative AI Strategist who helps creators and businesses co-create with AI, turning humble machines into powerful creative partners. As a subscriber, you will immediately receive The Creator's AI Licensing INTEL. www.theaioptimist.com

  1. JAN 30

    AI Swallows Our Wildness: Why It Can't Live on Echoes Alone

I’ve raised hybrid wolves, and they’re a lot like AI. Both come from something that used to be wild. There’s that first moment at the fence, when people see them. The breath stops. The body tenses. Something ancient recognizes what stands on the other side of the fence: wildness that doesn’t negotiate, power existing on its own terms.

We build strong fences. They make people feel safe enough to admire the wolves from a comfortable distance. It’s like AI: you build the structure, like ChatGPT, and you create the illusion of control. But wolves understand fences as a temporary inconvenience, nothing more.

Right now, we’re fencing human creativity with AI, which is far more dangerous than fencing wolves. We’re eliminating wildness. And we’re doing it in the name of making creativity easier and more predictable, not better.

The AI Voice Eating Itself

Picture a canyon where each sound echoes. At first, the echoes add depth, resonance, and layers of meaning. And what happens when the only sound entering that canyon is the echo itself? When echo feeds echo feeds echo until the original voice vanishes completely?

That’s where we are with AI and human creativity. Systems now learn from AI-generated text that isn’t always identified as AI. Content created to feed algorithms now teaches new algorithms what “good” looks like. Writing engineered for engagement becomes the standard for writing. Everything sounds like everything else because everything is everything else. Maybe just slightly degraded copies, generation after generation.

Biologists have a term for what happens when wolves breed only in captivity, when the gene pool narrows, when wildness gets engineered out: genetic collapse. The animals look like wolves. They might even act like wolves in controlled environments. But that something that made them wolves? It disappears.

We’re watching creative collapse happen in real time. Now the original creative work that gave AI its power—decades of wild, gloriously messy human expression—is being systematically replaced by content designed to please the systems that learned from that wildness in the first place.

Wild Happens When Limits Become Possibilities

Wildness isn’t nostalgia. It’s not a romantic Luddite rejection of technology or a call to return to typewriters and handwritten manuscripts. Wildness happens when people create for other people, without algorithmic approval as the invisible editor standing over their shoulder.

You find wild in the researcher’s field notes before editing, full of crossed-out thoughts, marginal questions, uncertainty captured in real time. The oral history spoken in dialect and pause and emotion, not vectorized into predictable, standardized text. The essay that contradicts itself because the writer discovers what they think as they write it.

Wildness lives in friction. Think about everything we’ve smoothed away in the name of being as smart as AI:

* The inconsistency showing how people think
* The silence carrying as much meaning as speech
* The regional twangs capturing cultural rhythms
* The contradictions revealing understanding
* The tangents connecting ideas nobody planned to connect

These aren’t bugs in human communication. The imperfections are what make creativity perfect. Coincidences connecting. They’re what made those decades of scraped internet content valuable for training AI in the first place. The unplanned moments. The authentic voice. The creative choice that didn’t calculate what would perform best.

And we’re paving all of it.
Like the song goes: “Don’t it always seem to go / That you don’t know what you’ve got ‘til it’s gone? / They paved paradise, put up a parking lot.” (Joni Mitchell)

You cannot protect wildness by destroying what lets it survive and thrive. Wildness needs space to exist. Not metaphorical space: money, time, and the freedom to create without fitting in as the primary driver.

Content that follows algorithmic systems gets followers. Visibility. Maybe revenue. The creator engineering for engagement metrics gets to keep creating. The one who refuses? They just stop being able to afford to create. It’s not dramatic. It’s math, just like AI.

Big Tech companies built their entire foundation on wildness they didn’t pay for. Decades of human expression taken without permission or compensation, becoming commercial products worth billions. Now that there’s a market, we’re seeing the beginning of licensing, six years too late. The writer spending three years on deeply researched work can’t eat licensing fees that come only if it’s a hit. The oral historian documenting a disappearing language can’t wait for AI companies to decide that data is valuable five years from now. The community needs that today.

If we want wildness to survive, we must pay for the conditions that let it exist, not just the output it produces.

Five Ways to Protect What We’re Losing

This isn’t a technical puzzle with a clever solution. It’s a choice about what we value and what we’re willing to fight for.

* Seek wildness intentionally. It doesn’t arrive by accident anymore. Field research, oral histories, raw interviews, handwritten archives, work untouched by technology yet (and there’s a lot of it) require pursuit and protection. Yes, it’s expensive. Yes, it’s slow. Not everything worth having scales like some hyperscaler. These are the roots that grow products and creation; don’t be the farmer over-harvesting a field that will take decades to grow again.

* Design for friction, not around it. Algorithms optimize friction away because it looks like inefficiency. And wildness lives in spaces resisting perfect smoothness. Systems that learn about inconsistency, silence, and contradiction create room for reality that doesn’t fit the model. Otherwise it’s clone armies of content repeating in endless loops.

* Know where content comes from. Not all sources deserve equal weight. Models need to know whether text was written for humans or for algorithms. Tracking origin, intent, and degree of optimization lets systems value wild inputs appropriately. The risk is people gaming the system, engineering fake wildness. The response? Verification and transparency. Imperfect, but better than pretending all content is the same and comes from the same place. One is copying, the other is inventing.

* Curate, don’t just moderate. Curation is where creators and communities judge what matters. When we let engagement metrics replace human taste, we pretend algorithms are neutral. They’re not. They’re biased toward virality (at Meta and Google, the core of profitability), which we’ve turned into a proxy for quality because it’s got big numbers. And everyone loves chasing big numbers, even if many of them are AI bots.

* Let systems rest. What if models periodically stopped ingesting new training data? Freezing forces reliance on existing knowledge and reveals where hallucination fills the gaps. Only then do you see what wild inputs really do. Systems that know what they don’t know are more valuable than systems that hallucinate with confidence.
Wolves Laugh at Fences, and So Does AI

I think about my hybrid wolves often. So smart, inventive, and wild. Like the human creativity we’re fencing in with AI. We optimize and extract value from expression while undermining what lets authentic expression emerge. We admire what AI can do with human creativity while starving the sources that make those skills possible.

This is entirely a human choice. We’re deciding what kind of creativity survives. Fund the conditions where wildness thrives. Protect space for creation that doesn’t start by fitting into algorithms before taking the first step. Or we can keep building tighter fences and optimized outputs, until all that’s left is AI listening to its own voice, wondering why everything sounds the same.

Wildness taught me something those wolves demonstrate every day: you don’t plan to be authentic, you become so through experience that no current AI will ever touch. Because life requires more than a probable answer. It requires space, patience, and respect for what you don’t fully control.

The question isn’t whether we can build better AI to capture and process human creativity. The question is whether we’re willing to protect the conditions where wildness survives, even when it’s impossible to scale. Even when it howls into the canyon and expects nothing back but silence.

What are we choosing to protect?

    1 min
  2. JAN 9

    We're All Naruto: The Monkey Got 25%, While AI Creators Get Nothing. Why?

What is the Naruto monkey selfie case, and why does it matter for AI?

A monkey takes a selfie. Years later, a federal judge must decide who owns it. In the courtroom, the judge asks with a straight face whether Naruto would be required by law to provide written notice to other macaque monkeys before joining a lawsuit. The courtroom laughs.

Here’s what’s not funny: the initial ruling. No human author, no rights. Period.

Right now, if you’re creating with AI, the legal system says the same thing to you. Spend hours refining prompts, making hundreds of creative decisions, shaping output until it’s exactly right. Someone else can take it, use it, sell it. You get nothing. When you admit AI was involved, your work loses protection. So you stay quiet. Many of us pretend we’re not using the tool reshaping creative work at a speed and volume no human can match (or maybe should).

Why are we treating human creativity with AI the same way we treat a monkey with a camera? And what does the shame around admitting you use AI, from both sides, reveal about what’s broken?

This isn’t about whether AI deserves copyright. It’s about whether creators working with AI deserve protection for their work. Right now, the answer is zero. Not 25%. Zero. The monkey got 25%. You get shame, silence, and zero protection. Let’s talk about why, and what that 25% reveals about creative rights with AI.

What happened in the Naruto v. Slater settlement?

Indonesia, 2011. Wildlife photographer David Slater sets up his camera in the jungle. Naruto, a crested macaque, grabs it and starts clicking. Many photos. Most are blurry, random, kind of what you’d expect from a monkey with a camera. But a few? Perfect. Composition, timing, expression. The kind of selfies humans spend ten tries to get right.

They go viral. Wikipedia posts them as public domain with a simple explanation: the monkey took the photo, not the photographer. Slater objects. He set up the equipment. He created the conditions. He made the monkey photos possible. Then PETA sues on Naruto’s behalf. Not because they think the monkey deserves rights, but to make a point about animal rights and who controls creative output.

The court doesn’t debate whether the photos are creative. They are. The court doesn’t question whether they have artistic merit. They do. The question is simpler: without human creative control, is there anything to protect? The answer: no. Not because the work lacks value. Because the law was built for human creators, and nobody knows what to do when creativity crosses species. And in our case, when it crosses into working with machines.

The case drags on for years. Slater’s exhausted. PETA wants a resolution. So they settle: 25% of future revenue from the photos goes to charities protecting crested macaques in Indonesia. Not because Naruto won. Because everyone wanted it to end. Not full ownership. Not recognition as the creator. Just a cut. The photographer keeps the rest, even though the monkey pressed the button. The monkey gets a percentage, even though the photographer created the conditions.

Maybe the answer to “who owns this?” isn’t either/or. Maybe it’s not human OR monkey. Maybe it’s not human OR AI. Maybe the real question is: when different forms of intelligence work together, even by accident, what does fair look like? Because right now, with AI, we’re not even asking that question. We’re just saying zero.

Should I admit to using AI in my creative work?

Reality: you probably shouldn’t if you’re even asking the question. Not because using AI is wrong.
Because admitting it sometimes costs.

You create something with AI. Spend hours refining prompts, making creative decisions, shaping output. The Copyright Office’s position is clear: no human creative input that rises above AI’s contribution, no protection. How much is too much AI? Nobody knows. Nobody will tell you. You won’t find out until someone challenges your work or there’s money involved.

Take Jason Allen’s Théâtre D’opéra Spatial. He ran 600 prompts through Midjourney, made hundreds of choices about composition and style, and won a Colorado art competition. Then he applied for copyright protection. Denied. AI-generated, so no protection. The 600 prompts didn’t matter. The creative decisions didn’t count.

What’s a creator supposed to do? You write an article. You use AI to help with research, maybe structure, some editing. Do you mention it? Do you check a box on YouTube saying you used AI? Why would you? Admission means zero protection and convinces people the work isn’t really yours. Maybe it’s just scraped content from the internet, regurgitated. So you stay quiet. Everyone stays quiet. And we pretend we’re not using the tool reshaping creative work at a speed and volume no human can match. Or maybe should. That’s the liar’s dividend. The reward for silence.

We don’t measure human-created work by what tools were used. We measure it by whether it’s original, inventive, new. Whether we like it. Why is AI different? Fear. The Scarlet AI.

There’s this idea that admitting AI involvement means you’re not a “real” creator. That it diminishes the work. That you’ll lose protection, respect, everything. Some people call creators using AI lazy or fake. Others, like tech builders and AI engineers, call creators greedy and entitled when they ask for permission, payment, and transparency about how their work trains these systems. Both sides are shaming. Both sides are wrong. And creators are caught in the middle, hiding their tools and their process because honesty is punished.

The conversation about what’s possible when different forms of intelligence work together never happens. We’re stuck in either/or thinking: human or AI. Real or fake. Creative or automated. What happens when different forms of intelligence learn to work together? Right now, we’re too afraid to even ask.

Why don’t AI creators have copyright protection?

Because the law is asking the wrong question. Courts keep asking, “Is it human enough?” when they should be asking, “Is it creative? Is it original? Does it show intention?”

The legal system was built for a world where humans were the only ones making creative choices. Now we have tools that can generate, suggest, and refine, and suddenly nobody knows how to measure what the human contributed. So they default to the simple rule: no human author, no rights.

It’s the same logic that denied Naruto. The photos were creative. They showed artistic choices: framing, light, expression. But without a human holding the camera, the law had nothing to protect. We’re living that same logic right now. You make hundreds of creative decisions working with AI. You choose what works and what doesn’t. What to keep, what to throw away. That’s not an accident. That’s intention.

How much human involvement is enough? Who decides? Where’s the line? The Copyright Office won’t tell you. They’ll just evaluate your work after the fact and decide whether you crossed some invisible boundary between “tool” and “creator.”

And the law can’t keep up. We’re still litigating cases from three, four, five years ago. AI evolves daily.
By the time a court decides what was acceptable in 2021, we’re already working with completely different systems in 2026.

The question isn’t whether AI deserves copyright. It’s whether creators working with AI deserve protection for the choices they’re making. Right now, the answer is: only if you can prove you did more than the AI did. Good luck measuring that. Try asking ChatGPT.

Could a 25% revenue model work for AI and creators?

Naruto’s settlement wasn’t about who was right. It was about ending a fight nobody could win. The photographer didn’t get full ownership. The monkey didn’t get recognition as the creator. They landed on 25% of future revenue going to macaque conservation. Not because it was fair, but because it was something. And that number didn’t come from judges or juries. It came from two parties trying to figure out what made sense when the rules didn’t fit the situation.

How about applying that same thinking to AI? Right now, AI companies take trillions of pieces of creative work - articles, images, code, music - to train their systems. What comes out isn’t what went in, so it’s transformative. Fair use. Meanwhile, creators get nothing. No payment. No permission asked. No transparency about what was used or how. And creators using AI get nothing either. No protection for the hours spent refining prompts and making creative choices. No way to prove their contribution, or to move beyond a humans-only rule.

What if both sides got something? What if a percentage of the trillions in compute costs went back to the creators whose work trained these systems? Not full ownership. Not a veto over AI development. Just a cut that acknowledges their work made this possible. And what if creators working with AI got protection for their output? Not full copyright, but something that recognizes the creative choices they’re making?

The monkey got 25%. Photographers using AI get zero. The creators whose work trained the AI get zero. What if we stopped arguing about who deserves what and started asking what makes sense when creativity isn’t cleanly human anymore? That’s not a legal answer. It’s a practical one. And right now, we’re not even having that conversation because we’re too busy shaming each other.

What creative choices are you making with AI that nobody sees?

We’re all Naruto now. Picking up tools we didn’t build, making creative choices the law doesn’t know how to recognize. What are you not admitting you’re using AI for? What creative choices are you making that nobody sees because you’re afraid of what happens if you’re honest?

That silence is the problem we need to solve. Not with more lawsuits. Not with more shame from either direction. But by talking about what works and what doesn’t.

    11 min
  3. 12/26/2025

    When Everything You Create Starts Sounding Like ChatGPT (And How to Fix It)

Ever notice yourself reaching for AI before trying to think through something on your own?

One sentence in an email, that’s all that stood between me and the holiday weekend. Playing a live distractathon between things I must do and mind candy… I went for the candy. Not because I didn’t know what to say. Because I’d get distracted when I started. So I asked ChatGPT to finish it. Then the next one. Then the entire email. A few minutes later, I couldn’t draft anything without opening ChatGPT. And I’ve written a ton in my life.

That phase passed, but I’m not alone. Many would be lost now if ChatGPT went away. Not likely, but still a ton of trust to put in something that’s not trustworthy yet.

→ People who once wrote easy-to-read emails now turn out corporate word salad.
→ People who made decisions in minutes now look to ChatGPT for confirmation (knowing they can’t trust the results).
→ Creators who had strong voices can’t remember what they sounded like after AI cloned it all.

One person told me: “I used to just... know what to write. Now I don’t trust myself to start without AI checking it first.”

When we outsource the thinking part of writing, something else gets weaker. The muscle that turns rapid thoughts into clear sentences. The innate knowing when something sounds like you. It’s not like you wake up one day unable to think. It’s more like using a paper map in a GPS world:

* Reaching for AI before trying to figure it out yourself
* Feeling foggy when you need to write something important
* Not trusting your own judgment like you used to

The cost of speed is your thinking and problem solving; your mind, perspective, and confidence. AI makes drafts in seconds, revises in seconds. Still, we know speed and thinking aren’t the same thing, even if it’s really fun and easy to just use it. What if the tool didn’t just make us faster, but also made us dependent?

I’m not saying we abandon AI. I’m an advisor to one startup, and I use it every day. Still, the thing making us faster might also be making us... different. Whether that’s different in a good way depends on the person. Here’s what I’m testing: how to be different in my actions, and how to improve with AI.

Your Voice: Is It AI or Unfakeable You?

Most people can’t describe their own voice. It’s like asking a fish to describe water. I’m one of the fish here. (And AI hasn’t been much help beyond the obvious.) Now let’s dissect it together. Not to criticize. To discover. I’ll share some of what came up for me; play along and comment with questions. Everyone has their own way of using AI, which makes it less software and more you.

TL;DR

Question 1: How do you start sentences? Do you lead with questions? Statements? Stories? Look at the first line of each paragraph. There’s a pattern.

Question 2: What words do you overuse? Not “AI” or “business”; everyone uses those. I mean the weird ones. I say “seriously” too much. “Honestly.” “Look.” Those aren’t professional. They’re mine.

Question 3: What do you explain that others assume? Some people over-explain. Some skip steps. Neither is wrong. But it’s distinctive.

Question 4: What do you avoid saying? I don’t use corporate speak. No “synergy.” No “leverage.” No “circle back.” That’s not style advice. That’s who I am.

Finding Your Voice with AI

Here’s what you’re going to do after this session: pull up your last 5-10 pieces of writing. Emails, posts, whatever feels natural. Not your “best” work. Your normal work. Read them out loud. Yes, out loud.
Then answer (or ask AI to help you understand your own style):

* What phrases show up repeatedly? Write them down. Those are your verbal tics. Your signature.
* Where do you break the rules? Run-on sentences? Fragments? Starting with “And”? Don’t fix them. That’s your rhythm.
* What would you never say? List the words and phrases that make you cringe. This is as important as what you DO say.
* What stories keep showing up? I always come back to startups. To Remember.org. To the Camp Fire. Your recurring stories are your anchors. They also help AI get to know you from experience, but don’t send it everything. More below.

The Invisible Erasure by Choice

A sameness is spreading through websites and socials, texts and emails, all in the same voice. Most don’t notice it’s happening and feel it’s better and easier than doing it themselves. Ask AI to revamp your writing in some famous person’s style, and it does exactly that. It makes your prose cleaner, more professional, less… you.

AI isn’t trying to erase your individuality. It optimizes for patterns, and patterns mean “sounds like everyone else.” AI was trained on millions of documents that follow certain rules. When you ask it to “improve” your writing, it’s really asking: “How can I make this sound more like the average of everything I’ve seen?” Maybe you can learn more about your own patterns and improve them, rather than relying on something that guides you toward what everyone else would likely do.

The Voice Map Exercise

Here’s a practical exercise that works better than a prompt engineering guide. Pull up your last ten pieces of writing—emails, posts, articles, whatever feels natural to you. Your normal work; don’t cherry-pick the best. Let AI help with that. Read them out loud. Your ear will catch patterns your eye misses. Have someone else read them out loud, or better yet an AI voice, then answer these questions:

* What phrases show up repeatedly? Write them down without judgment. I say “seriously” too much, “honestly” even more, and start way too many sentences with “Look.” These aren’t professional. They’re mine.
* Where do you break conventional rules? Maybe you use sentence fragments. Maybe you write run-on sentences that should be three separate thoughts but you like how they flow together with just commas because it matches how you think. These “errors” are your most distinctive patterns.
* What would you never say? Make a list of words and phrases that make you cringe. I don’t use “synergy,” “leverage as a verb,” or “circle back.” This negative space, what AI should avoid, defines your voice as much as what you include.
* What stories keep recurring? I always come back to startups, to Remember.org reaching schools worldwide, and to the Camp Fire. Your recurring stories are your anchors. They’re the experiences shaping how you see everything else. And AI will never have those anchors from experience… at least not soon. That’s your edge.

The Two-Pass Method

In the first pass, I use AI for idea generation. I ask for ten angles on a topic, ask for metaphors to explain complex concepts, and generate questions my audience might have; it’s like an interview, with AI interviewing me. I get raw material that I rarely use directly, and it lets me know what most others are saying over and over again. Knowing the average helps you not be average.

In the second pass, I write in my own voice. I create ideas out of the initial rough questions and AI answers. They serve more as a guide, and as a picture of what will likely sound like everyone else. The research is faster.
The writing remains distinctly mine, or else I become that AI middle dreariness of squeaky-clean perfection without the flaws I bring.

“There is a crack in everything, that’s how the light gets in.” Leonard Cohen, Anthem

When It Matters, You Write

If it matters, you write it. You write emails to important connections and nurture those relationships, rather than relying on AI to do it for you. So how much of the overwhelming amount of communication and information do you really need? And is doing more and more of it solving the problem or adding to it?

Let’s say you did let AI draft something. The draft is clean but generic; it could have been written by anyone. Here’s how to put yourself back in:

* Add one hyper-specific detail. Change “in a major city” to something from your experience. Use the name of the street, the color of the light at that time of day. You can’t fake real.
* Break one rule on purpose. If AI gives three perfect paragraphs, split one into fragments. Or create a run-on sentence that violates rules but matches how you think through complex ideas.
* Admit uncertainty. Add “I’m still figuring this out, but...” or “Here’s what I’m seeing... what’s your take?” AI rarely admits doubt. You can.
* Add your signature phrase(s). Whatever the verbal tics friends would recognize, include one. It’s like signing your work.

What You’re Actually Losing by Letting AI Do It All for You

It’s not just about “style.” You’re losing what makes people remember you. Who do you remember:

1. The perfect AI voice, so clean it reeks of automation. To those receiving it, you’re on autopilot.
2. The messy style with grammatical quirks they don’t fix, the stories activating the main point, the contradictions they show rather than hide.

AI smooths all of this out. When you feed it your writing and ask for improvements, it treats your personal patterns as errors to correct. Your run-on sentence becomes three crisp sentences. Your conversational “Look,” gets deleted as unnecessary. Your specific memory of “Chicago in February, when even the lake looks angry” becomes “a cold city in winter.”

The Better Prompts Trap

Most people can’t describe their own voice. It’s like asking a fish to describe water. You’re so immersed in your patterns, they’re invisible to you. You don’t realize you’re losing your voice because you never knew what your voice was. And AI can help you see it in a way that’s hard for most people to do themselves.

Working Together, Not Automation

Think of AI like you’re making a documentary. AI is your research assistant. It can:

* Find footage
* Suggest angles
* Draft rough cuts

YOU decide:

* What story to tell
* What to emphasize
* What to leave out

If you let AI direct the documentary,

    11 min
  4. 12/05/2025

    The Only AI With a Patent: Why Stephen Thaler's DABUS Got Erased from AI History

There’s a founder who built AI designed to surprise him. Not to predict. Not to optimize. But to generate ideas he never trained it to create—by introducing controlled chaos into its neural networks.

Earlier this year, I interviewed Stephen Thaler for Episode 95 of The AI Optimist. What he told me shifted how I understand AI’s potential—and revealed why the current LLM-dominated conversation might be pointing us in the wrong direction.

This isn’t about ancient history. It’s about what happens when an industry gets so fixated on one approach—prediction at scale—that other paths to machine creativity get drowned out by the hype cycle. Not because they failed, but because they asked uncomfortable questions that trillion-dollar valuations couldn’t afford to answer.

The Pioneer We’re Not Hearing About [Podcast: 0:00-1:06]

Stephen Thaler’s Creativity Machine was already generating novel designs in the 1990s—before Google existed, before social media, before anyone was talking about deep learning. By 2018, he was represented in courtrooms arguing that his AI system—called DABUS (Device for the Autonomous Bootstrapping of Unified Sentience)—deserved to be listed as an inventor on patent applications. Not him. The machine.

The courts said no. US, UK, Europe, Australia. The legal answer was unanimous: only humans can invent.

Thaler’s work asked the exact questions we’re drowning in today:

* Who owns what AI creates?
* Can machines be authors?
* What happens when creativity comes from something that isn’t human?

He was asking these questions in 2018. We’re still asking them in 2025. So why isn’t his work part of the mainstream AI conversation? Maybe because his answer challenges the story Silicon Valley needs to tell. He didn’t build a prediction engine. He built something designed to break its own patterns—to generate ideas through controlled disruption, not statistical refinement. That’s not how you justify trillion-dollar market caps for large language models.

This is about what gets remembered when the hype cycle decides what matters—and what we lose when attention becomes the currency that determines whose questions get heard.

Creativity From Chaos—A Radically Different Vision [Podcast: 1:06-6:04]

Imagine loosening a bolt in a clock. Not breaking it—just introducing enough instability that the gears hit rhythms they were never designed for. That’s Thaler’s Creativity Machine.

Most AI works like this: feed it millions of examples, let it find patterns, ask it to predict what comes next. More data, better predictions, smarter output. It’s the foundation of every large language model dominating headlines today.

Thaler flips the entire model. His systems—the Creativity Machine in the ‘90s, DABUS in the 2010s—don’t optimize for accuracy. They introduce noise. Deliberate disruption. Controlled instability. The idea: creativity isn’t the best statistical guess. It’s what happens when a system breaks pattern.

The Inventions That Emerged

DABUS reportedly invented two designs that became the center of its legal battles:

The Fractal Container: A beverage container with a fractal profile on its walls—interior and exterior surfaces featuring corresponding convex and concave fractal elements. The design creates novel properties: improved grip, better heat transfer, and interlocking capabilities that conventional containers lack. It’s not just aesthetically interesting—it’s functionally innovative.
The Neural Flame: An emergency beacon that pulses light in specific patterns designed to attract attention more effectively than steady illumination. The rhythm and frequency were generated by the system’s internal dynamics, not trained from existing emergency signal databases.

Thaler didn’t train DABUS on container designs or rescue equipment. He claims these emerged from the system’s internal disruption—ideas the network generated because it was pushed into chaos, not because it learned from examples.

A Different Philosophy of Intelligence

Modern AI says: “Show me 10,000 images of cats, I’ll predict cat.” Thaler’s AI says: “Destabilize my internal state, watch what I invent.” One is pattern recognition. The other is creative emergence.

Thaler doesn’t treat DABUS like a tool. He treats it like an agent with something resembling motivation. In our interview, he told me, “I think DABUS has feelings” - arguing the system generates ideas to “reduce internal distress,” that creativity emerges from the machine’s drive to resolve instability. Not awareness in the human sense. But not purely mechanical either.

You don’t have to agree with him. But consider what he’s proposing: that creativity might not be a data problem at all. It might be about disruption, emergence, and internal pressure—not prediction. And if there’s even partial truth to that? We might be investing trillions in the wrong approach, or at least ignoring others that can teach us so much.

The Legal Battles—When Machines Try to Own Ideas [Podcast: 6:04-9:20]

In 2018, Thaler filed patent applications in multiple countries. Inventor listed: DABUS. Not “Stephen Thaler using DABUS.” Not “Thaler, assisted by AI.” Just: DABUS. Artificial intelligence. The machine itself.

The answers came back fast:

* US Patent Office: No. Only natural persons can be inventors.
* UK Intellectual Property Office: No. Same reason.
* European Patent Office: No. Denied, appealed, denied again.
* Australia: Actually said yes at first—then reversed on appeal.

This wasn’t about whether DABUS made something useful. The fractal container works. The beacon design works. The question is: Can a non-human be credited with invention? And the legal system’s answer was clear: No. Because if we say yes, the entire framework of intellectual property collapses. Patents exist to reward human ingenuity. Copyright protects human expression. If machines can be authors, who gets the rights? Who profits? Who’s accountable when something goes wrong?

The Exception Nobody Talks About

In July 2021, South Africa granted DABUS a patent for the fractal container. AI listed as inventor. Yes, South Africa’s system works differently. They register rather than examine applications for novelty. But that means somewhere in the world, there’s a legal document recognizing an AI as an inventor. Not theoretical. Real.

During our interview, Thaler didn’t even lead with this. It’s not that he’s hiding it—it’s that even someone at the center of these battles has internalized that achievements outside Silicon Valley’s spotlight somehow “don’t count.” That’s how powerful the attention economy has become in shaping what AI we notice.

Why This Matters for Creators Now

Thaler lost almost every case. But those courtrooms became the first place anyone seriously tested whether AI-generated work deserves legal protection. And we’re still living in that question. Every creator using Midjourney, every developer deploying GPT-generated code, every company scraping content to train models.
They’re all walking through the legal door Thaler tried to open. He just tried to open it before the hype cycle was ready to pay attention.

D. The Attention Gap: Why Alternative Approaches Get Crowded Out [Not included in podcast—blog exclusive]

Stephen Thaler works alone. No university affiliation. No venture backing. No corporate lab. That means no PR engine. No conference keynotes. No TechCrunch profiles. No hype cycle amplification. In today’s AI landscape, if you’re not part of the institutional megaphone, your work gets crowded out—even if courts keep encountering it, even if it asks questions we need answered. But there’s something deeper happening.

When One Narrative Dominates Everything Else

Right now, we’re in the midst of what might be the most intense hype cycle in tech history. Large language models dominate every conversation. The message is clear: scale up transformers, add more data, and intelligence will emerge. That narrative needs AI to be:

* Statistical and predictable
* Controllable through prompting
* Explainable by scaling laws
* Definitely not sentient
* Definitely not autonomous

Thaler’s work challenges all of that. He suggests creativity might emerge from disruption rather than data scale. He treats his systems as having something approaching agency. He’s proven that legal frameworks aren’t ready for what happens when machines generate novel inventions. Those aren’t comfortable questions when you’re trying to sell the market on predictable, controllable AI tools.

The Economic Stakes of Memory

If Thaler is even partially right that creativity emerges from controlled chaos better than from pattern prediction, then we’re investing trillions into the wrong goal. Safety frameworks assume AI is statistical pattern matching. Copyright law assumes AI can’t truly author. Business models assume outputs belong to whoever writes the prompt. Valuations assume LLMs are sophisticated tools, not potential creative agents. His work doesn’t just challenge the technology. It challenges the story that justifies current market caps.

AI history doesn’t start in 2017 because nothing came before. It starts in 2017 because that’s when the Transformer arrived (via “Attention Is All You Need”) and, with it, a clean narrative that puts value in the hands of the companies controlling AI. Alternative approaches don’t get erased through malice. They get crowded out because attention is the currency that determines what we notice. And the attention economy right now is entirely focused on scaling up prediction engines like ChatGPT.

E. What We Lose When One Path Crowds Out All Others [Podcast: 9:20-end]

This isn’t really about defending Stephen Thaler. It’s about what happens when we let one version of AI—prediction at scale—become the only version that gets oxygen in the conversation.
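For readers who want the “controlled chaos” idea from the Creativity From Chaos section in concrete terms, here is a minimal toy sketch in Python. It is not Thaler’s DABUS or Creativity Machine code; the network shape, weights, and noise level are all invented for illustration. It only shows the contrast he describes: run a fixed network and you get the same learned output every time, but inject noise into the network’s own connections and it drifts toward patterns it was never set up to produce.

```python
# Toy illustration only (not Thaler's DABUS): contrast a fixed network that
# repeats its learned pattern with the same network destabilized by noise
# injected into its weights. All sizes and values are invented for the sketch.
import numpy as np

rng = np.random.default_rng(0)

# A tiny fixed "trained" network: a 4-number seed maps to an 8-number pattern.
# Think of these weights as the patterns the system has already learned.
W1 = rng.normal(size=(4, 16))
W2 = rng.normal(size=(16, 8))

def generate(seed, noise_scale=0.0):
    """noise_scale=0.0 -> prediction mode: the same, repeatable output.
    noise_scale>0.0 -> Gaussian noise is added to the weights themselves,
    so the network (not the input) is perturbed and drifts off its pattern."""
    w1 = W1 + rng.normal(scale=noise_scale, size=W1.shape)
    w2 = W2 + rng.normal(scale=noise_scale, size=W2.shape)
    hidden = np.tanh(seed @ w1)
    return np.tanh(hidden @ w2)

seed = np.array([0.5, -0.25, 0.1, 0.9])
baseline = generate(seed)  # deterministic: the "learned" answer

print("baseline:", np.round(baseline, 2))
for i in range(1, 4):
    variant = generate(seed, noise_scale=0.4)  # controlled instability
    drift = float(np.linalg.norm(variant - baseline))
    print(f"variant {i}:", np.round(variant, 2), "| drift:", round(drift, 2))
```

The drift numbers only show that weight-level noise produces outputs the unperturbed network would never give on its own; whether that counts as creativity is exactly the question Thaler’s work raises.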

    12 min
  5. 11/21/2025

    Getty Loses AI Copyright Case: What the UK Ruling Means for You - Creator or Not

If you’re a musician, writer, photographer, painter, designer, filmmaker—this matters to you. Right now.

Getty Images just lost a landmark AI copyright case in the UK. Not a small creator. Not someone without resources. Getty Images, legendary for hunting down anyone who uses their photos without permission. The company with armies of lawyers, sophisticated tracking systems, and a reputation for being relentless about protecting their intellectual property.

They lost. A UK judge ruled that when AI companies scrape your work, break it into millions of tiny pieces called “tokens,” and use those pieces to train their models, that’s not copyright infringement. That’s fair use.

* Musicians: Your melodies, your lyrics, your years of practice and creative evolution? Fair game for AI training. (Unless you happen to be in Germany, where one judge recently protected song lyrics. Good luck everywhere else.)
* Visual artists: That painting you spent months perfecting, that illustration style developed over decades? AI absorbs it, learns from it, and generates work “in your style” without asking permission or paying you a dime.
* Writers: Your voice, your stories, your unique way of seeing the world? Just words on the internet. Just data. Just tokens to be reassembled into something that’s “transformative” enough to escape copyright claims.

The legal argument is beautifully simple: once your work is broken into tokens, it’s no longer your work. It’s been transformed. And courts around the world are buying it.

When Getty’s Watermark Becomes Evidence—And Still Loses

Getty’s case had evidence most copyright plaintiffs only dream of. Stability AI’s image outputs didn’t just look similar to Getty photos. They literally displayed Getty’s watermark—that distinctive black banner with “Getty Images” and often the photographer’s name printed across it. The company’s $3 billion brand, the visual signature they’ve spent decades building and protecting, starts appearing on AI-generated images.

And not just on images that might have been scraped from Getty’s collection. The watermark appeared on completely different images—distorted faces, glitchy hallucinations, weird compositions that Getty never created or would ever associate with their brand. Their logo had become a pattern the AI learned, a visual element that got baked into Stability’s model and started reproducing itself.

When your company’s trademark appears on inferior, sometimes grotesque images you never produced, that’s not just copyright infringement—that’s brand dilution. Getty’s value proposition is quality, curation, professional imagery. Now AI is slapping their name on random generations.

This should have been the easiest copyright case to prove. You don’t have to demonstrate complex similarities or argue about artistic influence. The evidence is right there: Getty’s actual logo, on images, generated by a system that was clearly trained on their content.

Getty Images is known for being litigious about their IP—and for good reason. They’ve built a business on strict licensing, on making sure every use of their content is paid for. They have the legal resources to pursue cases that smaller creators could never afford. If any company could win against AI scraping, it should have been Getty. The UK High Court disagreed.

The Tokenization Defense: How AI Companies Are Winning

Here’s a little about how the judge may have viewed the law in this case. When AI ingests your work, it doesn’t store it as a complete, intact copy.
Instead, it breaks everything down into tokens, tiny fragments of data scattered across the model’s neural networks. The judge used the favorite analogy of AI “optimists” (not yours truly): it’s like when you read a book and it influences your thinking. You don’t have the book stored word-for-word in your brain. You’ve absorbed concepts, patterns, ways of expression. That’s not copyright infringement, that’s learning.

Yes, there’s a massive difference. When I read a book and it influences my writing, I might produce a few sentences over my lifetime that reflect that influence. When AI ingests a book, it can generate millions of derivative works at scale, flooding the market with content that competes directly with the original creator. But that distinction doesn’t seem to matter to the courts.

The tokenization defense works like this:

* Your copyrighted work gets transformed into something fundamentally different. It’s no longer a book or a photo or a song—it’s mathematical representations of patterns and relationships.
* Copyright law protects specific, fixed creative works. Once your work becomes unfixed, scattered into millions of tokens and associations, it’s something else entirely.

You can’t easily extract the original work back out. Research suggests you might be able to reconstruct maybe 20% of a book if you really tried, using specific prompts and techniques. But you can’t just ask the AI to reproduce the complete original. The content is in there, influencing every output, but it’s not in there as a discrete, copyable thing.

This isn’t unique to the UK ruling. I’ve been following at least ten major AI copyright cases over the past two years, across multiple countries. The pattern is consistent: judges look at how AI works technically, see that it doesn’t store exact copies, and lean toward treating that transformation as fair use (many rulings are still pending).

There was a case in Germany recently where a court found that AI companies violated copyright by using song lyrics. But that ruling only applies in Germany. And that points to a fundamental problem with AI: it’s global. One country’s rules can’t contain it. If AI companies can train their models anywhere in the world and then deploy them everywhere, strong copyright protection in one country doesn’t help. The content has already been taken. We’re talking about events from six years ago or more. AI companies scraped the internet long before most creators even understood what was happening. Now we’re finding out, case by case, that judges are looking at this and deciding it’s legal. Or at least legal in Getty’s case; many other cases are pending.

We’ve Become China: When IP Protection Dissolves, Content Is Sort of Open Source

We’re becoming China. There’s been enormous political pressure—particularly in the US—to not let China beat us in AI development. National security. Economic competitiveness. Tech leadership. We can’t let China win this race. So what did we do? We adopted China’s traditional approach to intellectual property.

Historically, China has been known for not protecting copyrights—particularly foreign copyrights—unless the work has significant social or economic impact on the country. In practice, if your book or music or art makes a lot of money, if it has major cultural influence, you might get protection. If you have resources and lawyers and can prove economic damage at scale, you might get compensation. But for everyone else? Your work is considered part of the commons. It’s shared intelligence. It’s the natural passing on of stories and ideas.
Taking it, using it, building on it—that’s how culture works.

The US and UK protect individual creators’ rights. We believe that even the solo artist, the independent writer, the small photographer deserves legal protection for their work. You don’t need to prove massive economic impact. You don’t need to be commercially successful. If you created it, you own it. Until now.

That was the deal. That was our advantage. We value intellectual property to protect innovation and reward creativity. Not anymore. Now, just like in China’s traditional model, if you have money and lawyers—if you’re Getty Images with a $3.5 billion brand value, or the New York Times, or a major record label—you can get a licensing deal. AI companies will negotiate with you. You have the resources to litigate for years, making settlement worthwhile.

But an individual creator? You’re out of luck. Your work is training data. Your content is fair use. Your creativity is just tokens now. The courts seem to be deciding that protection flows to those with significant economic power, not to individual rights holders. We’ve adopted China’s model while claiming to compete against it.

What This Means for Creators Going Forward

The courts have spoken, and they’ve essentially told creators that if AI can take your work, transform it into something else, and make it impossible to extract your original creation in its entirety—then it’s fair use. This isn’t just a UK problem. It’s not just Getty’s problem. Not a single judge in the major cases I’ve reviewed has stood up and said, “Wait a minute. Taking someone’s creative work, breaking it into pieces, and using those pieces to generate competing content. That’s still using their work.”

The legal system is built around a simple idea: copyright protects a static, unchanging creative work. A book. A painting. A photograph. A song. One fixed thing that can be copied or not copied. But AI doesn’t store your work that way. It learns patterns from your work. It creates associations. It generates something new-ish. And judges keep ruling that because you can’t simply extract your original work back out of the model in its complete form, there’s no copyright violation. That’s the loophole. That’s the game. It’s not in there!

* This ruling threatens the entire licensing model. Why would anyone pay Getty Images for stock photos when they can generate similar images for free using AI that was trained on Getty’s collection?
* Why license music when AI can create “royalty-free” alternatives in any style?
* Why pay writers when AI can generate content influenced by millions of scraped articles?

Baroness Kidron captured the absurdity

    10 min
  6. 10/31/2025

    Dotcom Deja Vu: 3 Signals the AI Bubble is Popping (one might be your electric bill)

The Electricity Behind the AI Bubble: What Happens When the Music Stops

I’ve seen this movie before. Not the AI part. The 1999 pattern. Money flowing and dreaming like it would never stop. Then it did.

Right now, I’m watching three signals that tell me the AI bubble isn’t coming. It’s already here. Already cracking. And unlike the last time around, this one comes with a bill that’s landing on electricity meters whether customers use AI or not.

The AI Optimist content reflects personal opinions from a business perspective. Not legal, financial, or professional advice. See full disclaimer.

Signal 1. When Money Moves Without Moving Anything

During the dotcom era, I got a call from a CEO. Here was his pitch: “Send me an invoice for $1 million in advertising services. I’ll send the money. You keep $100,000 and send me back $900,000. That’s it.” Maybe it made his balance sheet look flush and investors happy? I said no. Some people said yes. Things got messy for those in the deal.

Today, I’m watching OpenAI, Nvidia, and AMD play a version of that same game. The names are different. The mechanics are identical. And the scale is vastly higher.

OpenAI locks in massive Nvidia chip orders: $100 billion in future commitments. That’s not a conventional purchase. That’s a confidence play. It tells the market: “We’re so committed to this future that we’re locking in enormous obligations.” Nvidia’s stock rises because the story feels real. The money for those chips isn’t there yet. But the promise is.

AMD gets a different arrangement. OpenAI doesn’t have the cash flow to buy chips outright, so it takes stock warrants at incredibly low prices. When AMD’s stock goes up, OpenAI may exercise those warrants, sell the shares at a profit, and use that cash to buy the chips. AMD’s stock price becomes OpenAI’s funding mechanism. Not an investment in AMD’s product. A bet that the AI hype story keeps going long enough for the stock to rise. When the hype cools and AMD’s stock stops moving up? OpenAI probably won’t convert those warrants. Won’t buy the chips. And the whole thing seizes up. That’s not a partnership. That’s financial dependency dressed as a deal.

In dotcom, we called it financial engineering. Today it’s strategic partnerships, strategic investments, and strange shuffling of the appearance of money. Sort of like Bitcoin but with no blockchain. Who needs mining? But no money really goes around. And when that happens at scale, that’s a signal things are starting to crack.

Signal 2. The Dead Internet Isn’t AI’s Fault. We Built It First.

Everyone’s mad about AI slop. Low-quality content everywhere. Garbage, noise, automation replacing the human voice. AI didn’t break the internet. It just revealed something that’s been broken for years. We trained ourselves first.

Google taught us to please the algorithm. Everything around search engines was designed to please Google’s ranking system. Then social media took over. Facebook, Instagram, TikTok. We followed the algorithm, which told us exactly what type of content to create, and then we served it. Rage content. Engagement bait. Optimized slop. We didn’t stumble into this. We built an internet where garbage pays. It’s been paying for years.

AI didn’t invent slop. It industrialized it. The most successful AI companies? Not profitable. Not even close. They need something to justify the cost. More users. More data. More content. So what do they do? Generate more slop. Faster. Cheaper. Slap ads around it, like the next Google Search. But we’re already drowning.
The content isn’t solving a problem. It’s proving there isn’t a business here yet. When you’re building something real, it speaks for itself. When you’re in a bubble, you drown the signal in noise.

That noise is built on something, though. Something real. Something expensive. Here’s where it gets tangible: infrastructure. Electrical grids. Real cost. Real risk.

The Electricity Bet Nobody’s Planning For

I think about a company I knew during dotcom. A friend worked there. The owner got offered $1 billion to sell. Said no. “We’re just getting started.” A year later? Gone. The thing everyone paid $1 billion for didn’t exist. It was never about business. It was about the story.

History sort of repeats, but this new bubble is built on electricity demand created by AI, and us. Every major tech company is betting billions on data centers. Massive electrical infrastructure. These aren’t theoretical expenses. They’re happening now. (Sources at the bottom of the page.)

OpenAI’s Stargate Project alone is planning five new megafacility data center sites across Texas, New Mexico, Ohio, and the Midwest, with nearly 7 gigawatts of total capacity and over $400 billion in committed investment. That’s just OpenAI.

* Amazon’s building $20 billion in AI data center campuses in Pennsylvania.
* Meta’s Louisiana facility is a $10 billion project.
* Compass is planning a $10 billion Mississippi facility.
* Microsoft’s Wisconsin project is $3.3 billion.

Add in major projects from Cologix, Google, and others: planned investment exceeding $100 billion in data center infrastructure across the country. Each of these megafacilities consumes electricity equivalent to powering 100,000 homes. Some estimates suggest individual data centers will rival the power consumption of small cities. What happens when not all of these survive?

The Real Bubble: Your Electric Bill

Tech companies are building for a future where they all win. But in a bubble, most lose. When they lose, the infrastructure doesn’t disappear. It just becomes someone else’s problem. That someone else might be you.

Wholesale electricity costs as much as 267% more than it did five years ago in areas near data centers. (Sources at bottom.) A new analysis found $4.3 billion in costs in 2024 alone for just seven states: Illinois, Maryland, New Jersey, Ohio, Pennsylvania, Virginia, and West Virginia. These are costs for grid connections and infrastructure to support data centers. Paid for by residential customers.

The U.S. power grid isn’t equipped for this. Goldman Sachs estimates that about $720 billion of grid spending through 2030 may be needed to support data center demand. Data centers consumed 183 terawatt-hours of electricity in 2024—more than 4% of the country’s total electricity consumption. By 2030, this is projected to grow by 133% to 426 terawatt-hours.

And water. These facilities need massive amounts of potable water for cooling. In 2023, data centers consumed about 17 billion gallons of water. Hyperscale facilities alone are expected to consume between 16 billion and 33 billion gallons annually by 2028. In some regions, this is already challenging water tables. What happens to that infrastructure if the company building it loses the AI race?

The Pattern We’re Repeating

During dotcom, we overbuilt server farms, fiber lines, internet capacity everywhere. When the crash came, it all became worthless. Stranded assets. Dead infrastructure. The difference is what happened after. That infrastructure became the foundation for the internet we have today.
Someone had to pay for that cleanup. Consumers did. Gradually. Over time. But this is different. The infrastructure failure isn’t theoretical. It’s baked into your power grid. When this pops, you don’t just lose stock value. You might lose grid stability. The cost of living goes up. And nobody’s planning for the cleanup.

The Signal Is in the Silliness

If something seems silly, it is. You don’t have to be a billionaire to see it: when tech companies structure deals where their suppliers’ stock prices become their own financing mechanisms, while generating endless content that doesn’t create value, games are being played with this much money.

OpenAI alone is planning six Stargate sites. Amazon, Meta, Microsoft, Google, and others are building dozens more across the country. Billions in infrastructure committed. All based on the assumption that the AI story keeps going up. All based on the assumption that these companies will be profitable. All based on the assumption that every project gets built and survives. Most won’t.

When the crash happens, you’re going to have enormous electrical infrastructure sitting idle. Data centers that never got built. Grid capacity expanded for demand that evaporated? Maybe. Your electric bill will carry that cost. Pretty much guaranteed.

The difference between surviving a crash and being blindsided by one is seeing the signals. The money shuffle. The endless slop. The infrastructure bet. They’re all here. All visible. All converging. And when this does pop, that electricity bill will remind you exactly why AI is not like dotcom. We all have a stake.

Further reading:

* Wall Street analysts explain how AMD’s own stock will pay for OpenAI’s billions in chip purchases
* Nvidia’s $100 billion OpenAI investment raises eyebrows and a key question: How much of the AI boom is just Nvidia’s cash being recycled?
* Data Center Infrastructure US 2025 - NREL
* Pew Research: What we know about energy use at U.S. data centers amid the AI boom
* Bloomberg: How AI Data Centers Are Sending Your Power Bill Soaring
* CNBC: Utilities grapple with a multibillion question: How much AI data center power demand is real
* Union of Concerned Scientists: Data Centers Are Already Increasing Your Energy Bills
* TechPolicy.Press: How Your Utility Bills Are Subsidizing Power-Hungry AI
* CNN: Is AI really making electricity bills higher?
* Goldman Sachs: AI to drive 165% increase in data center power demand by 2030

    11 min
  7. 10/10/2025

    Creative Machines and Human Creativity: Building AI that Makes Us More Creative Instead of Replacing Us

Seeing that tall black and brown piano in the background before our interview, I sense a tradition of human creativity meeting AI. This is about us. When Maya Ackerman’s family immigrated to Canada, her piano stayed behind in Israel. That instrument had been more than wood and keys. It’s where emotions melt into music, into feeling, processing change with simple sounds arising from deep wells of experience. The piano was, and is, her creative partner - even when it wasn’t there. Don’t Give Up Your Piano: A Conversation About Creative Machines That Serve, Not Replace Now, as a professor at Santa Clara University and CEO of WaveAI, Ackerman sees us at risk of losing something far bigger: our collective creative piano. Not to AI itself, but to fear of what AI might become. Her new book Creative Machines: AI, Art & Us launches with a message that cuts through the replacement anxiety: “AI has always been, and will always be, all about us.” Ackerman spent years in foundational machine learning before a talk by artist Harold Cohen changed everything. She switched to computational creativity, that unpopular intersection where machines meet human expression. She’s built AI tools for musicians. She understands both the technical architecture and the artistic soul. What emerges from our conversation isn’t just about Creative Machines or AI technology. It’s about us, the creative spark in people. Will we surrender our creativity because AI machines seem capable? Or will we build humble creative machines that expand human expression? Let’s walk through what that choice means. 1: The Piano as Lifeline: Why Creativity Matters Now When I ask what the piano means to her, Ackerman’s voice wavers as she describes losing her piano for the first time: “I think creativity was a lifeline for me in a way. Through all this moving around the world... at the piano, my feelings would pour out of me, and I would sort of get to process things that otherwise would just sort of sit dormant and fester inside of me.” That processing matters. Creative expression is how we make sense of displacement, trauma, change. It’s how we stay human through upheaval. And it’s how we connect through art to other stories, experiences, history, and fear. Creativity is our lifeline arising out of the depths of human experience. That’s what makes this moment in history unusually dangerous: “Now we are at a time in history where people wonder if they should even bother to be creative. Good people. People who are just afraid of what’s going on in the world. And I don’t want the whole planet to lose the piano, so to speak, the way that I did.” The fear is real. I see it in creators who message me, asking if learning creative skills still matters when AI can generate images, write copy, compose music. The replacement narrative sinks deep. Call it AI Imposter Syndrome (AIS): we feel like imposters compared to AI, even though we know deep down AI is generating a ton of slop. And Ackerman offers a different frame: “The age of AI doesn’t have to be about taking away creativity for us, it can be the opposite. It can be about making us more creative, giving us more power... It’s so important that we don’t hang up our hands because we’re scared, right?” This isn’t naive optimism. It’s foundational clarity about what we’re really building. Intention matters, now more than ever. 2: Harold Cohen’s Scream—Where Does Creativity Live?
Over 10 years ago, Ackerman sat in the back of a conference room, disappointed with her choice to study machine learning, inspired instead by music and singing. She didn’t know what to do with her life. Then Harold Cohen took the stage. The pioneering artist behind AARON—one of the earliest creative AI systems—flashed beautiful images on screen. Maya remembers him screaming: “This old Jewish man screaming on stage. ‘I was the only voice of reason, saying that I was the creative one.’ That’s what he’s screaming. How other people were arguing that his machine is creative, but he was the only voice of reason, telling them, look, no, he is the creative one.” The rawness of that conflict between machines and humans, the question of where creativity lives when there are two possible sources, felt essential to Ackerman. She switched her entire research field. Years later, at the end of her book, she returned to Cohen’s insight: “The machines that we make for us are ultimately all about us, and we need to hold on to the torch and take this responsibility seriously and build the kind of world that we want.” Creative Machines aren’t the villain or the savior. They’re mirrors and tools. The question isn’t whether they can create. It’s what role we design them to play. 3: The Bach Test: Facing Our Bias About Machine Creativity David Cope’s EMI (Experiments in Musical Intelligence) revealed something uncomfortable in the 1980s. He created what became known as a discrimination test: “People would be given music and not told which piece was made by a machine, which piece was made by the original Bach in this case. And overwhelmingly, people got it wrong. People thought that music made by the machine was actually an original Bach, and at the same time believed that the original Bach was made by machine.” Read that again. When people didn’t know the source, they mistook machine-generated Bach for the original. When told which was which, their preferences reversed. “This was able to reveal that sometimes it’s not the quality of what the machine creates, but the fact that a machine created it that makes us devalue it.” Cope later renamed the system “Emily Howell”—humanizing it. With a name, people accepted the work as “human”. Our bias against machine creativity runs deep. Ackerman argues we need to stop lying to ourselves about it: “The whole resistance to the idea of machines being creative, saying ‘no, no, no, they’re not really creative, they don’t have feelings, they’re not really creative, blah blah blah,’ is a way to make ourselves feel better, but actually convoluting the story.” She suggests something harder: “I think it’s much more healthy for us as a society to admit that machines are being creative. Maybe not in exactly the same way as us, right? Maybe in a somewhat different way. They have some different strengths and weaknesses from us.” By admitting machines participate in creative arenas, we can ask the right questions: Given that they can be creative, how do we want them to operate in our world? What kind of role do we want them to take? Do we want machines performing on stage while we watch? Or helping in the background while we create? 4: Shadow Work: Creative Machines as Cultural Mirror I told Ackerman about visiting the Museum of Tolerance in Los Angeles years ago. At the entrance, two doors: one labeled “Prejudiced,” one labeled “Not Prejudiced.” Everyone walked toward “Not Prejudiced.” That door was locked. The metaphor hit hard—none of us can walk through the “Not Prejudiced” door.
We all carry bias, whether we admit it or not. Creative Machines function as similar mirrors, revealing what we think beneath our virtue-signaling surface. Ackerman describes AI as “collective consciousness for a specific culture”: “If we look at a lot of the models we have today, there are collective consciousness to Western data, to Western consciousness. A lot of their tending towards the mean has to do with a kind of data that we’ve replicated many, many times online. We copy each other and then it kind of amplifies the aspects of our culture that has been echoed the most.” Then comes the shadow: “We know that there is terrible stuff going on. We know there is racism and sexism, and we like to think that it’s out there somewhere else, far away from us. Right? We also like to think that there is sexism and racism, but not inside this brain... And yet there is so much research showing that every single one of us has implicit biases.” One research project gave AI a picture of a person along with a profession—a woman labeled “professor” or a man labeled “model.” The results were shameless: “The machine would take the woman who is now a professor or CEO and give her a beard... Or the guy suddenly has makeup and then lashes. Now that he is a model... It’s telling us, oh, it’s too feminine for a guy to be a model. Oh, if a woman is a professor, there must be something masculine about her.” The AI doesn’t hide what humans would never verbalize. It screams our collective biases back at us. “And it’s so shameless about revealing the societal biases in a way that humans would never verbalize... the model is showing us what we really think under the surface, right? Or at least, you know, in some sense in this collective consciousness. And so it’s an opportunity for us to face our shadow.” Our response? “Oh, have the developers fix the AI. The developers are evil.” Ackerman’s answer: “No, no, none of us can go through the non-prejudice door.” The locked door at the Museum of Tolerance and AI’s shameless bias reveal the same truth: we need to face what we are, not what we claim to be. Ackerman sees a psychedelic element to AI, which also hallucinates: “We are entering a psychedelic era. There is a psychedelic awakening on the human side. And at the same time, the AI insists on hallucinating.” Hallucinations aren’t bugs to eliminate. They’re features of intelligence: “An intelligent brain hallucinates. That’s life. Okay? Otherwise it’s just a database... And ironically, paradoxically, hallucinations bring us to the truth by recognizing the inherently hallucinatory nature of the mind. And its ability to imagine and justify and tell ourselves stories that cover up the truth.” Both psychedelics and AI hallucinations can “help us break through our stories.”

    23 min
  8. 10/03/2025

    Breaking the $4/Min Barrier: How AI Pays $120 for Raw Video and $30 for another?

When Hollywood’s Catalog Isn’t Enough and Might Need AI Licensing Lionsgate thought they had this figured out. The studio that owns John Wick, Twilight, and The Hunger Games partnered with Runway AI in 2025 to build custom video models. The vision? Type “anime version of John Wick” and watch AI generate it from their catalog. That was around June 2025. Last week, the experiment quietly closed. The problem wasn’t incompetence; it was scale. Sources told The Wrap that “the Lionsgate catalog is too small to create a model.” Even Disney’s catalog was considered insufficient. Let’s do the math: 8,000 movies at roughly 2 hours each equals 16,000 hours. Add 9,000 other titles averaging 1 hour, and you’re at maybe 25,000 hours total. Double that generously to 50,000 hours. Still not enough. AI companies are running out of training data after burning through the entire internet. Video. Real, diverse, messy human video has become a bottleneck. While Lionsgate struggled with insufficient data, one Troveo client was reportedly in the market for 50,000 hours of dog videos because their AI-generated dogs kept coming out with cat bodies. That’s not a business model. That’s market unpredictability. And it’s also a signal that unused footage sitting on your hard drive might have value you haven’t considered. Not as content for views or sponsorships, but as possibly valuable data for machines learning to understand our world. Questions to ask yourself: * How much unused footage do you have archived? * What categories does it fall into—nature, urban environments, specialized activities? * Do you own all rights, or are there B-roll clips, music, or people who’d need to sign off? The Current Market Reality—What We Know Let’s separate signal from speculation. Troveo, a video licensing platform connecting creators with AI companies, claims $20M in total revenue with $5M paid to creators. I use $1-4 per minute as a range for this episode. My reasoning: Troveo sits at the lower end of video AI licensing, usually $1-3 a minute. Larger companies like Protege are likely also getting paid. We don’t know how much. My assumption is the amount is higher, likely much higher. So I add $1 on the low end of pricing. And I urge you to look at going beyond $4 a minute, a tougher but sounder business than the wholesale $1-2 market. And it may just be what it is: a small market. Troveo is one of the few companies publishing numbers instead of hiding behind NDAs. That transparency matters. It also means we’re looking at early-market indicators, not established rates. Here’s what the pricing tiers appear to reflect: $1-2/minute (Standard Footage): * Talking heads * Predictable motion * Common scenarios * Already-seen angles $3-4/minute (Premium/Edge Cases): * Rare weather phenomena * Unusual wildlife behavior * Technical processes under stress * Unique temporal transitions The Tesla framework helps explain this distinction—not because they’re licensing video, but because they’ve quantified what makes training data valuable. * Highway driving footage is standard. * A deer crossing during a snowstorm at night is premium. * It’s not about monetary pricing; it’s about learning density. Most Tesla footage comes from user cars, with operational costs built into the product, not per-minute purchases. But their internal categorization reveals something useful: edge cases, rarities, and uniqueness teach AI systems more than repetitive standard scenarios.
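To make those tiers concrete before the break-even discussion, here is a minimal back-of-envelope sketch in Python. The minute counts and the standard/premium split are hypothetical, and the rates are simply the $1-2 and $3-4 ranges discussed in this episode, not quoted offers:

```python
# Rough one-time payout estimate for a hypothetical footage archive.
# Rates mirror the $1-2 standard / $3-4 premium ranges discussed above;
# the minute counts are invented for illustration.

standard_minutes = 1_200      # hypothetical: ~20 hours of everyday footage
premium_minutes = 90          # hypothetical: rare weather, odd angles, edge cases

standard_rate = (1.0, 2.0)    # dollars per minute, low and high end
premium_rate = (3.0, 4.0)

low = standard_minutes * standard_rate[0] + premium_minutes * premium_rate[0]
high = standard_minutes * standard_rate[1] + premium_minutes * premium_rate[1]

print(f"Estimated one-time payout: ${low:,.0f} to ${high:,.0f}")
# This is a one-time sale of training data, not recurring revenue.
```

Swap in your own minute counts after the archive audit below; the point is that the standard-to-premium ratio, not total hours, decides whether the $4/min barrier is within reach.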
The break-even reality check: look at the market knowing most of the business now sits at $1-2 per minute, and ask where the threshold is for this to become a legitimate side revenue stream. This is why the $4/min barrier matters. Below that, you’re liquidating existing assets at thin margins. Above it, potentially building a sustainable side business. This is a one-time payment market. You’re not building recurring revenue. You’re selling training data that will likely be used to eventually replace the need for more training data. And for anything above $3 a minute, 4K is the rule. Other footage likely goes into the $1-2 pile, which is why you see garage sales of old content, some valuable and most not. Action steps for this section: * Calculate your actual production costs per minute for different types of footage * Audit your archive—how many minutes of different quality levels do you have? * Tag footage by category: nature, urban, people-heavy (complications), specialized technical * For each category, honestly assess: standard or edge case? * Permissions: who was in front of the camera, who was behind it, and who was the producer? Chasing sign-offs slows down AI licensing. Make sure your video is clean and clear on ownership and permission. What Makes Video Actually Valuable AI systems extract something from video that text and images can’t provide: motion, causality, temporal relationships, and context. Would this video pass the AI Licensing test? Mira Murati, founder of Thinking Machines Lab, says: “We’re building multimodal AI that works with how you naturally interact with the world—through conversation, through sight, through the messy way we collaborate.” That messiness, the unscripted, unedited reality, contains teaching moments machines can’t get elsewhere. * Compositional rarity matters: unusual angles, unexpected framing, perspectives humans naturally avoid. We shoot at eye level. We center subjects. AI needs overlooked angles. * Temporal uniqueness creates value: time-lapses showing weather transitions, seasonal changes, processes that unfold over hours compressed into minutes. The dimension of time is what separates video from images. * Technical mastery in specialized domains: industrial processes, scientific phenomena, professional techniques that rarely get documented at high quality. Video content may work, but here’s where most creators will hit the wall: rights and metadata. Look at the metadata requirements. You need: * Title, subtitle, creator names, release date * Studio/independent status * Creative rights documentation (who owns what) * Talent and production rights (every person visible) * Rights territory and existing licenses * Work-for-hire status * Genre/category classification * Exact video minutes/hours * Language * Content description and summary * Keywords and tags * Views/distribution history * Distribution channels used * Viewer reviews/ratings if applicable * Awards and recognition * Media coverage This isn’t “throw files in a zip folder and get paid.” This is treating your footage like a professional asset. The legal complexity escalates with people. Every identifiable face needs a signed release. Every location might need permission. Every piece of music requires clearance. This is why nature footage, weather phenomena, and process documentation are the cleanest paths. No talent releases. No location complications. Just you, a camera, and something worth documenting. The Facts: Many avoid, a few automate with AI Most creators won’t do this work.
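For those who do, here is a minimal sketch of what one per-clip rights-and-metadata record might look like, condensing the field list above into something you could actually fill in. The structure and field names are my own shorthand for illustration, not any platform’s required schema:

```python
# Illustrative per-clip metadata record based on the field list above.
# Field names are shorthand, not a platform's actual schema.
from dataclasses import dataclass, field

@dataclass
class ClipRecord:
    title: str
    creator: str
    release_date: str                  # e.g. "2024-07-14"
    duration_minutes: float
    resolution: str                    # "4K" is the bar for premium pricing
    category: str                      # "nature", "urban", "technical process", ...
    tier: str                          # "standard" or "premium"
    language: str = "en"
    rights_owner: str = ""             # who owns the footage outright
    work_for_hire: bool = False
    talent_releases: list[str] = field(default_factory=list)    # signed releases for anyone visible
    music_cleared: bool = True         # False blocks licensing until cleared
    existing_licenses: list[str] = field(default_factory=list)  # territories or prior deals
    keywords: list[str] = field(default_factory=list)
    description: str = ""

clip = ClipRecord(
    title="Storm front over the valley, timelapse",
    creator="Example Creator",
    release_date="2024-07-14",
    duration_minutes=6.5,
    resolution="4K",
    category="nature",
    tier="premium",
    rights_owner="Example Creator",
    keywords=["weather", "timelapse", "storm"],
    description="Unedited 4K timelapse of a storm front forming. No people, no music.",
)
```

A template like this is also where the “few automate with AI” part comes in: tags and descriptions can be drafted automatically, but ownership, releases, and music clearance still have to be verified by a person.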
The administrative overhead eliminates casual participants. That means less competition for those who take it seriously. Practical experiment (inspired by Tesla’s approach): Take 10 minutes of your archived footage. Watch it with fresh eyes and categorize every 60-second segment: * Standard: Could this be filmed by thousands of other creators? Common angle, predictable motion, everyday scenario? * Premium: Is there something unusual here? An unexpected perspective, rare moment, technical complexity, or temporal uniqueness? Be brutally honest. Most footage is standard. Still, it has value. But understanding the ratio helps you know whether you’re sitting on $1/min inventory or $4/min. Action steps: * Conduct the standard vs. premium analysis on a sample of your footage * 4K is the cutoff line for $3-4 a minute, and even that’s not a guarantee. Lesser quality probably means low-end pricing. * Make a list of locations, subjects, or processes you could access that others can’t * Research what’s already available. If 10,000 creators have time-lapses of the Golden Gate Bridge, yours isn’t premium. * Identify your unique angle: local access, specialized knowledge, unusual timing, technical skills. The Path Forward: Find Demand Before Supply The mistake most creators make: assuming supply creates demand. It doesn’t. Not in this market. The smarter approach: research demand signals before you shoot another frame. Where to look for demand signals: * Study existing platforms (without committing yet): * Troveo shows public categories: nature, sports, new media, scripted vs. unscripted * Notice what’s featured, what categories dominate * This reveals some current demand patterns * Enterprise-level signals: * Protege (enterprise-focused, doesn’t list pricing publicly—that’s actually a positive signal) * They work with hospital systems, media companies, specialized data aggregators * Private pricing suggests higher-value transactions with volume requirements * The unpredictability factor: * Remember the 50,000-hour dog video request? That probably won’t repeat. * But it illustrates how urgent, specific needs create temporary premium pricing * The lesson: diversification and patience matter more than chasing trends To make this work, minimize: * Editing time (raw or minimal editing only) * Rights clearance complexity (avoid people when possible) * Metadata preparation overhead (build templates, automate tagging) * Storage and management costs (organize before you need to) And maximize: * Footage quality (4K minimum for premium rates) * Rights clarity (know what you own completely) * Category alignment with demand (follow platform signals) * On time, every time (capture more in less shooting time) Reality check on current platforms: * Troveo operates as an open marketplace—entry-level, broker model connecting individual creators with AI companies.

    17 min
