They Might Be Self-Aware

Daniel Bishop, Hunter Powers

They Might Be Self-Aware is a show about what it actually feels like to live through the AI revolution. Not from a safe distance. From inside the collision. Hunter Powers and Daniel Bishop host. Gary produces, from a payphone, for reasons he'd rather not discuss. The format is the thesis: AI cast members, unscripted machine interactions, and a deliberate refusal to always tell you which voice in the room is human. Every Tuesday, the Doomsday Clock moves. Every episode, the blur between human and AI gets a little harder to see. The show has been called "Rolling Stone for the AI era," which we didn't say first but we're not correcting. New episodes Monday + Thursday. theblur.ai

  1. 1D AGO

    Dead Actors, Deepfakes & Human Sacrifice

    AI deepfakes are fooling job interviewers, world leaders can't prove they're alive, a grandmother went to jail over a facial recognition false match, and a dead Val Kilmer just got cast in a new movie, so Hunter and Daniel ask whether anything on a screen can be trusted anymore. Daniel's proposed solution to the AI accountability crisis: bring back Aztec-style human sacrifice, which is now the official position of this show.

    AI deepfakes just cast a dead actor in a new movie, and one podcast host thinks human sacrifice is the answer. This week on They Might Be Self-Aware, nothing is real and nobody can prove otherwise. Netanyahu held up five fingers at a coffee shop to prove he's alive. People said that was fake too, because the cash register showed the wrong year. Val Kilmer, who has passed away, is starring in a new film using AI deepfake technology and a cloned version of his voice, with SAG's blessing, his family's sign-off, and his estate getting paid for the work. Deepfake job candidates are ghosting interviewers the second they're asked to put a hand in front of their face. And a grandmother from Tennessee spent six months in jail because AI facial recognition matched her to a bank fraud suspect in North Dakota, a state she's never set foot in.

    But here's where it gets philosophical. Hunter poses a brutal thought experiment: what if AI could save 6,000 lives a year on the roads, but the price is that nobody is ever held accountable for the 30,000 who still die? Would you take that deal? Turns out, no, because humans demand someone to blame, even if it costs us thousands of lives. Daniel's solution? Bring back human sacrifice. Aztec-style. On top of a pyramid. He's shirtless, he's wearing a headdress, and this is now the official stance of They Might Be Self-Aware. Hunter is dying inside. The algorithm will never show this to anyone. Daniel says find the episodes with four views; those are the spicy ones.

    The AI deepfake era is here. Nobody can prove they're real. And the only honest response might involve a ziggurat.

    ⏱️ CHAPTERS
    00:00 Gary's Payphone Dispatch
    01:53 Hunter Fails to Prove He's Real
    03:27 Deepfake Job Interviews Are Out of Control
    05:35 Netanyahu's Six Fingers & the Fake Coffee Shop
    10:05 How Do You Prove Anything Is Real Anymore?
    12:28 AI Facial Recognition Jailed the Wrong Grandma
    18:40 Save 6,000 Lives but Nobody Gets Blamed
    23:15 Daniel Proposes Human Sacrifice (Official Show Position)
    26:06 Val Kilmer's AI Deepfake Movie (He's Dead, by the Way)
    29:48 Will AI Turn Movies into Slop or Start a Renaissance?

    ⚡ Listen now & get self-aware before your tools do.
    🎧 Listen on Spotify: https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc
    🍎 Subscribe on Apple Podcasts: https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297
    ▶️ Subscribe on YouTube: https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1

    📢 Engage
    Daniel wants this to be the #1 episode to prove the algorithm rewards human sacrifice. Do your part. Subscribe, send this to everyone you know, and comment: should we trust AI deepfake detection, or do we just need a really tall pyramid? 🏛️

    🧠 They Might Be Self-Aware — but are we? #AIDeepfake #PostTruth #TheyMightBeSelfAware

    34 min
  2. 4D AGO

    AGI Is Apparently Here, So Why Am I Still Paying $200 a Month

    AGI is here — Jensen Huang said so. So why is Hunter still paying $200/month for AI subscriptions he forgot he had?

    This week on They Might Be Self-Aware, Hunter and Daniel react to Nvidia CEO Jensen Huang declaring that AGI has been achieved — then watch him immediately walk it back with "it's not conscious, it's not an alien, it's computer software." They put Suno 5.5's AI music generator to the test live on air, humming a melody and getting back a track good enough that Hunter threatens to DMCA-strike his own podcast. Suno-generated music is already charting on iTunes, and producers are now using it to create copyright-free samples — a shift that could reshape how music gets made.

    The conversation turns to what AGI actually means versus ASI, and whether models like Claude Opus and Qwen 3.5 have crossed that line for most everyday computer tasks. Spoiler: AI still needs a manager, which means middle management lives to fight another day. Hunter confesses to a subscription spending spiral triggered by the $200/month Claude Max plan and his quest to cancel the services he forgot existed.

    They debate whether AI will widen inequality or whether open-weight models running locally — plus MCP servers and tools making Claude ridiculously capable — will keep the playing field level. An Axios report comparing AI pricing to Uber's subsidize-then-squeeze model leads to an unexpectedly great car analogy involving Hunter's 1996 Land Rover parked next to his Cybertruck. The episode wraps with a sharp breakdown of why consumer AI and enterprise AI are fundamentally different markets — and why enterprise is where the real money is headed.

    ⏱️ CHAPTERS
    0:00 Gary the Producer Has Feelings About AGI
    2:36 Suno 5.5 Made a Banger on Air
    6:37 What AGI Actually Means — The X% Y% Z% Test
    8:40 AI Still Needs a Manager (Middle Management Rejoices)
    11:43 Hunter's $200/Month Subscription Intervention
    15:52 AI's Uber Pricing Problem
    19:48 Why Enterprise AI and Consumer AI Are Different Games

    ⚡ Listen now & get self-aware before your tools do.
    🎧 Listen on Spotify: https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc
    🍎 Subscribe on Apple Podcasts: https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297
    ▶️ Subscribe on YouTube: https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1

    📢 Engage
    Daniel's afraid to like his own YouTube videos because the algorithm might punish him. Are we right to fear the algorithm, or has he lost it? Vote in the comments. New here? Subscribe for twice-weekly AI chaos.

    🧠 They Might Be Self-Aware — but are we? #AI #AGI #ArtificialIntelligence

    25 min
  3. APR 3

    Why The Sora Shutdown Proves OpenAI is Losing to Anthropic

    The Sora shutdown is official — OpenAI killed its video AI even after Disney put $1B on the table. Anthropic is winning without video, images, or any of it. So why was OpenAI doing it at all?

    OpenAI just raised $110 billion and still couldn't keep Sora alive. Hunter and Daniel rip into the Sora shutdown, the three competing theories about why OpenAI pulled the plug, and what it means for the Anthropic vs OpenAI battle that's reshaping the entire AI industry. Anthropic subscriptions reportedly climbed 5% in February while OpenAI posted its biggest subscriber decline ever tracked — and Anthropic doesn't even do video. The "everything app" strategy is looking more like a liability than an advantage.

    On the video side: generating top-tier AI video still costs $8–10 per minute, but open-weight models like LTX 2.3 are closing the gap fast. Hunter actually got one running locally on his MacBook by turning Codex loose in full YOLO mode — left the room, came back, had a rendered video and a slightly broken computer.

    Now that Sora is gone, who takes the AI video crown? Google's Veo 3 is the obvious frontrunner (and they're already plugging it into their ad network). But Grok is the dark horse nobody's watching — cheap, fast, and getting better multiple times a month. Daniel drops an official 2025 prediction: Disney will partner with Google for AI video by year's end. The logic? If people are already generating Mickey Mouse Ring camera videos with open-weight models, Disney might as well get paid for it. This leads to a genuinely unresolved argument about whether user-generated AI content with brand imagery counts as advertising. Hunter says yes. Daniel says absolutely not. Things mean things, Hunter.

    ⏱️ CHAPTERS
    0:00 Gary vs. the Payphone (Cold Open)
    1:36 Your $200/Month AI Plan Is Subsidized Cope
    3:14 AI Video Costs $10/Min — We Have the Receipts
    5:54 "I'm Going All In on Sora" (About That...)
    7:49 Why OpenAI Actually Killed Sora
    10:37 Anthropic Is Winning Without Video or Images
    12:33 The Video AI Power Vacuum: Veo 3, Grok, Runway
    15:23 Disney's $1B Partner Just Died — Now What?
    20:04 Is AI-Generated Mickey Mouse an Ad?

    ⚡ Listen now & get self-aware before your tools do.
    🎧 Listen on Spotify: https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc
    🍎 Subscribe on Apple Podcasts: https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297
    ▶️ Subscribe on YouTube: https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1

    📢 Engage
    What's the most reckless thing you've let AI do? New here? Subscribe for twice-weekly AI chaos.

    🧠 They Might Be Self-Aware — but are we? #AI #OpenAI #Anthropic

    25 min
  4. APR 1

    Why Meta Killed the Metaverse (And is Failing at AI)

    Meta's Metaverse is dead — billions burned, Horizon Worlds shut down, and Zuckerberg still can't ship a competitive AI model. Hunter and Daniel break down Meta's 20% layoffs, their failed VR-to-AI pivot, and why the company that renamed itself for the future keeps getting lapped by Google, OpenAI, and Anthropic.

    First up: Hunter confesses to running Claude Max in "dangerously skip permissions" mode — and his computer might be infected. Right on cue, LiteLLM (an open-source package half the AI industry depends on) got hit with a supply chain attack that tried to steal every API key and password it could find.

    Then it's Meta's autopsy. Horizon Worlds is dead, and Hunter and Daniel revisit their own failed attempt to podcast inside the Metaverse. Why did VRChat crush Meta's billion-dollar platform? Hunter's theory: you can't build a product for seven-year-olds and forty-seven-year-olds at the same time.

    With 20% of the company getting laid off, Meta is betting everything on AI — but their models keep underperforming. Llama 4 disappointed. The rumored "Avocado" model supposedly barely matches what competitors shipped a year ago. Meanwhile, open-weight models from China are eating Meta's lunch. Kimi 2.5 and Qwen 3.5 are running locally on consumer hardware and rivaling the best closed models. Hunter's running Qwen 3.5 on his MacBook Pro and says the chat experience is indistinguishable from Claude or ChatGPT — at least for non-coding tasks. Could the average person ditch their AI subscriptions and go fully local? Almost, but your mom probably isn't installing LM Studio anytime soon.

    The episode wraps with the "Pirate and Architect" theory — a vision where vibe-coding pirates ship features at the speed of thought and senior architects clean up behind them. Is this Meta's future? Is it everyone's future? And should Zuckerberg just give up on foundational models and use Gemini like Apple?

    ⏱️ CHAPTERS
    0:00 Intro
    2:14 Unprotected Claude Sessions
    4:16 LiteLLM Got Hacked
    6:12 Meta Killed the Metaverse
    8:45 Meta's 20% Layoffs
    10:55 Why Meta Can't Build Good AI
    13:35 Open-Weight Models Are Here
    18:47 Could Your Mom Run Local AI?
    24:37 Pirates & Architects
    29:26 The Twitter/X Playbook
    34:14 That's the Whole Conclusion

    ⚡ Listen now & get self-aware before your tools do.
    🎧 Listen on Spotify: https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc
    🍎 Subscribe on Apple Podcasts: https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297
    ▶️ Subscribe on YouTube: https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1

    📢 Engage
    Confess in the comments: are you running your AI tools in YOLO mode right now? No judgment. (Okay, a little judgment.) New here? Subscribe for twice-weekly AI chaos.

    🧠 They Might Be Self-Aware — but are we? #AI #Meta #Metaverse

    35 min
  5. MAR 27

    A Man Used AI to Make a Cancer Vaccine for His Dying Dog

    An AI cancer vaccine actually worked. A man used ChatGPT, Grok, and AlphaFold to build a personalized mRNA vaccine for his dying dog's cancer, and the tumor shrank by half. This week on They Might Be Self-Aware, Hunter and Daniel tear apart the story, debate Claude as your post-op physician, and propose a formal intelligence rating system for kitchen appliances.

    Daniel had face surgery and immediately pasted his medical notes into Claude like it owes him a consultation. Turns out AI medical advice is surprisingly useful for the 80% of questions that aren't life-or-death, if you prompt it right (Daniel did not prompt it right). Hunter explains the sycophancy problem and how to get honest answers from an LLM instead of digital hand-holding.

    Then the main event: an Australian man's dog was dying of cancer. He took ChatGPT and Grok down a rabbit hole that led to AlphaFold, a university professor, and a custom mRNA cancer vaccine, the kind of personalized medicine that could eventually win a Nobel Prize. The tumor halved. The dog came back to life. Daniel says AI drug discovery is going to change everything. Hunter says just ask the AI.

    Andrej Karpathy released "auto research": AI agents that autonomously optimize machine learning models. He gave each agent one hour to beat his hand-tuned results on a small GPT. They beat him by 11%. The hosts get into hyperparameter tuning vs. real architecture changes, and whether spawning an AI researcher with a one-hour lifespan is an ethics problem or just Tuesday.

    Finally: Philips put a "conversational virtual assistant" in a coffee maker. It's a questionnaire. Daniel is furious. This somehow leads to the invention of standardized AI intelligence levels: Level 0 (the Philips coffee maker) through Level 7 (you'll have to subscribe to find out).

    ⏱️ CHAPTERS
    0:00 Gary Calls from a Raccoon Payphone
    1:21 Daniel Fed His Surgery Notes to Claude
    7:24 AI Cancer Vaccine Saved a Dying Dog
    13:56 Karpathy's Auto Research Beat His Own Brain
    22:38 The Fake AI Coffee Maker

    ⚡ Listen now & get self-aware before your tools do.
    🎧 Listen on Spotify: https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc
    🍎 Subscribe on Apple Podcasts: https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297
    ▶️ Subscribe on YouTube: https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1

    📢 Engage
    Dog has cancer? Just ask the AI. What's the wildest thing you've actually asked an AI for help with? Drop it in the comments. New here? Subscribe for twice-weekly AI chaos.

    🧠 They Might Be Self-Aware — but are we? #AI #AICancerVaccine #TMBSA

    28 min
  6. MAR 23

    Human Brain Cells Learned to Play Doom. Now What?

    Human brain cells in a petri dish learned to play Doom — welcome to wetware AI. Plus: fruit fly brain emulation, digital brain uploads, and forever torture prison for $1,000/month.

    Researchers wired up living human brain cells on microelectrode arrays and got them running Doom. Not metaphorically. The cells are doing the processing. Hunter calls it inevitable. Daniel calls it horrifying. They're both right.

    Then it gets weirder. A company called Aeon Systems took a fruit fly brain — the whole brain — mapped every neuron, and dropped a digital copy into a virtual body. The digital fly started doing fly stuff. Their next goal: mouse brains. After that? Yours.

    That's when Hunter offers Daniel $1,000 a month to rent a copy of his brain and put it in digital hell. Permanently. The conversation spirals from there into whether a digital you is really you, how Coca-Cola would use your brain clone to A/B test Corn Flakes commercials 300,000 times, and exactly how many human organs you'd have to grow in a vat before you've accidentally committed a felony. (Skeleton in a vat? That's a crime. Brain cells on a chip? Apparently that's just science.)

    They also tackle the AI sin eater problem, why Hunter won't grant personhood to anything in the cloud for at least 500 years, and why Daniel has never once been mean to a hidden Markov model.

    ⏱️ CHAPTERS
    0:00 Meet Gary, Our New Producer
    1:39 Hunter & Daniel Are Back — HeyGen, AI-Maxing & More
    6:33 Brain Organoid Plays Doom — Wetware Computing Explained
    10:04 Fruit Fly Brain Emulation — Aeon Systems' Digital Mind
    12:29 Would You Upload Your Brain? Digital Consciousness Debate
    16:55 Why You Should Be Nice to Your AI
    19:18 Can a Digital Copy of You Have Rights? AI Personhood
    20:53 Brain in a Vat — When Does Wetware AI Become Human?
    23:24 Are We Living in a Simulation? Wrap-Up

    ⚡ Listen now & get self-aware before your tools do.
    🎧 Listen on Spotify: https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc
    🍎 Subscribe on Apple Podcasts: https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297
    ▶️ Subscribe on YouTube: https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1

    📢 Engage
    Hunter offered $1,000/month to rent Daniel's brain and send it to digital hell. What's YOUR price? Drop it in the comments — or tell us no amount is enough. New here? Subscribe for twice-weekly AI chaos.

    🧠 They Might Be Self-Aware — but are we? #WetwareAI #BrainOrganoid #DoomAI

    25 min
  7. MAR 16

    Is AI Killing Your Job? (Meta, Burger King, Jack Dorsey)

    AI job loss is here. Jack Dorsey just laid off 40% of Block despite rising profits — blaming AI. Are smaller AI-powered teams about to replace everyone?

    Burger King is testing AI that listens to workers in the drive-thru. Meta’s Ray-Ban smart glasses may send private footage to human reviewers. And the DMV just proved AI can’t tell the difference between Spanish… and a Spanish accent. Welcome to the weird early days of AI replacing jobs.

    This week on They Might Be Self-Aware, Hunter Powers and Daniel Bishop break down the biggest signals coming out of the AI economy — from AI job cuts to the rise of AI-managed workers. We cover:
    • The Jack Dorsey layoffs and why Block stock surged
    • The hidden world of human annotators reviewing AI data
    • The Meta Ray-Ban privacy leak and AI training data
    • Burger King’s AI assistant coaching fast-food employees
    • And the uncomfortable question nobody wants to answer: If AI makes workers 10× more productive… why wouldn’t companies hire 90% fewer people?

    The AI revolution may not explode overnight. It might just quietly delete jobs one team at a time.

    ⏱️ CHAPTERS
    00:00 AI Job Loss Panic Begins – Gary’s intro and the growing fear that AI is replacing human jobs
    01:13 Is AI Replacing Jobs? – Hunter and Daniel break down the AI job loss debate
    02:02 DMV AI Translation Fail – Washington DMV AI mistakes Spanish for a Spanish accent
    11:45 Burger King AI Monitoring Workers – Drive-thru AI coaching employees on friendliness
    17:18 Jack Dorsey Block Layoffs Explained – Why Block cut 40% of staff despite rising profits

    ⚡ Listen now & get self-aware before your tools do.
    🎧 Listen on Spotify: https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc
    🍎 Subscribe on Apple Podcasts: https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297
    ▶️ Subscribe on YouTube: https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1

    📢 Engage
    Comment to prove you’re not an AI agent. New here? Subscribe for twice-weekly AI chaos.

    🧠 They Might Be Self-Aware — but are we? #AI #AIJobLoss #ArtificialIntelligence

    37 min
  8. MAR 12

    The Claude AI Military Ban: Why 1.5M Users Left ChatGPT

    Claude AI military drama just exploded. Anthropic refused the Pentagon — and OpenAI stepped in. Now 1.5M users may be leaving ChatGPT.

    What actually happened when Claude refused Pentagon requests tied to surveillance and autonomous weapons? In this episode of They Might Be Self-Aware, Hunter Powers and Daniel Bishop unpack the rapidly escalating clash between Anthropic, OpenAI, and the U.S. government — and why it may be the first true geopolitical battle of the AI era. The story gets wild:
    • Anthropic’s Claude AI military restrictions trigger a Pentagon standoff
    • The government reportedly moves to blacklist Anthropic across supply chains
    • OpenAI steps in almost immediately to take the military AI contract
    • A backlash erupts as ChatGPT users begin canceling subscriptions

    And that’s just the beginning. Hunter and Daniel also break down the shocking reports of AWS data centers bombed during an Iran drone attack, the rise of AI-assisted military strategy, and the growing reality of autonomous weapons AI influencing real-world warfare. Plus:
    • the political implications of David Sacks’ AI policy role
    • why the Claude vs ChatGPT rivalry just went geopolitical
    • how Qwen models are suddenly matching Claude benchmarks
    • why local AI models could destroy the current AI business model

    If you want to understand where AI, geopolitics, and defense technology are heading next, this episode is essential.

    ⏱️ CHAPTERS
    00:00 Gary’s Dramatic Intro – Claude refuses the Pentagon, OpenAI grabs the contract, and the AI war begins
    01:35 Digital Daniel Appears – Testing an AI-generated co-host and the strange future of virtual podcast hosts
    02:24 AWS Data Centers Bombed – Iran drone attacks, cloud infrastructure as a wartime target, and AI in military strategy
    06:54 Claude vs the Pentagon – Anthropic refuses surveillance and autonomous weapons requests, triggering a government clash
    18:52 The Anthropic Blacklist – Supply-chain bans, Palantir involvement, and the OpenAI vs Anthropic power struggle
    29:14 The AI Arms Race – $110B OpenAI funding, users leaving ChatGPT, Qwen benchmarks, and the future of local AI

    ⚡ Listen now & get self-aware before your tools do.
    🎧 Listen on Spotify: https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc
    🍎 Subscribe on Apple Podcasts: https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297
    ▶️ Subscribe on YouTube: https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1

    📢 Engage
    Should AI companies refuse military contracts or is that dangerously naive? New here? Subscribe for twice-weekly AI chaos.

    🧠 They Might Be Self-Aware — but are we? #AI #ClaudeAI #ChatGPT #ArtificialIntelligence

    35 min

Ratings & Reviews

5 out of 5 (6 Ratings)

