They Might Be Self-Aware

Daniel Bishop, Hunter Powers

They Might Be Self-Aware is a show about what it actually feels like to live through the AI revolution. Not from a safe distance. From inside the collision. Hunter Powers and Daniel Bishop host. Gary produces, from a payphone, for reasons he'd rather not discuss. The format is the thesis: AI cast members, unscripted machine interactions, and a deliberate refusal to always tell you which voice in the room is human. Every Tuesday, the Doomsday Clock moves. Every episode, the blur between human and AI gets a little harder to see. The show has been called "Rolling Stone for the AI era," which we didn't say first but we're not correcting. New episodes Monday + Thursday. theblur.ai

  1. 1D AGO

    The Claude AI Military Ban: Why 1.5M Users Left ChatGPT

    Claude AI military drama just exploded. Anthropic refused the Pentagon — and OpenAI stepped in. Now 1.5M users may be leaving ChatGPT. What actually happened when Claude refused Pentagon requests tied to surveillance and autonomous weapons? In this episode of They Might Be Self-Aware, Hunter Powers and Daniel Bishop unpack the rapidly escalating clash between Anthropic, OpenAI, and the U.S. government — and why it may be the first true geopolitical battle of the AI era.

    The story gets wild:
    • Anthropic’s Claude AI military restrictions trigger a Pentagon standoff
    • The government reportedly moves to blacklist Anthropic across supply chains
    • OpenAI steps in almost immediately to take the military AI contract
    • A backlash erupts as ChatGPT users begin canceling subscriptions

    And that’s just the beginning. Hunter and Daniel also break down the shocking reports of AWS data centers bombed during an Iran drone attack, the rise of AI-assisted military strategy, and the growing reality of autonomous weapons AI influencing real-world warfare.

    Plus:
    • the political implications of David Sacks’ AI policy role
    • why the Claude vs ChatGPT rivalry just went geopolitical
    • how Qwen models are suddenly matching Claude benchmarks
    • why local AI models could destroy the current AI business model

    If you want to understand where AI, geopolitics, and defense technology are heading next, this episode is essential.

    ⏱️ CHAPTERS
    00:00 Gary’s Dramatic Intro – Claude refuses the Pentagon, OpenAI grabs the contract, and the AI war begins
    01:35 Digital Daniel Appears – Testing an AI-generated co-host and the strange future of virtual podcast hosts
    02:24 AWS Data Centers Bombed – Iran drone attacks, cloud infrastructure as a wartime target, and AI in military strategy
    06:54 Claude vs the Pentagon – Anthropic refuses surveillance and autonomous weapons requests, triggering a government clash
    18:52 The Anthropic Blacklist – Supply-chain bans, Palantir involvement, and the OpenAI vs Anthropic power struggle
    29:14 The AI Arms Race – $110B OpenAI funding, users leaving ChatGPT, Qwen benchmarks, and the future of local AI

    ⚡ Listen now & get self-aware before your tools do.
    🎧 Listen on Spotify: https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc
    🍎 Subscribe on Apple Podcasts: https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297
    ▶️ Subscribe on YouTube: https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1

    📢 Engage: Should AI companies refuse military contracts, or is that dangerously naive?
    New here? Subscribe for twice-weekly AI chaos.
    🧠 They Might Be Self-Aware — but are we?
    #AI #ClaudeAI #ChatGPT #ArtificialIntelligence

    35 min
  2. 5D AGO

    Who's the Fall Guy for AI? Claude Code Just Broke IBM

    Claude kills IBM? Anthropic’s Claude Code just learned COBOL — and IBM had its worst day in decades. Is AI about to eat legacy tech?

    This week on They Might Be Self-Aware, Hunter Powers and Daniel Bishop unpack the chaos behind the “Claude kills IBM” narrative. Anthropic’s Claude Code suddenly got good at COBOL, the ancient language quietly running massive parts of global banking infrastructure. When the news hit, IBM stock dropped hard — because if AI can maintain legacy code, IBM’s biggest moat might disappear.

    We break down:
    • Why Claude Code vs COBOL spooked investors
    • The logic behind the IBM stock panic
    • Goldman Sachs claiming AI barely affects GDP… while cutting jobs for AI
    • The rise of the AI “sin eater” — the human who takes the blame when AI screws up

    Because even if AI replaces analysts and COBOL engineers… someone still has to take the fall.

    ⏱️ CHAPTERS
    00:00 AI “Sin Eater” Explained – Who takes the blame when AI makes a mistake?
    09:06 Goldman Sachs AI Contradiction – Job cuts, AI hype, and the anti-AI fund
    13:29 Claude Code Learns COBOL – Anthropic’s AI tackles legacy banking code
    14:27 Did Claude Kill IBM? – Why the Claude COBOL news triggered an IBM stock panic
    21:29 AI Hallucinations in Business – When companies start making decisions on fake AI data

    ⚡ Listen now & get self-aware before your tools do.
    🎧 Listen on Spotify: https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc
    🍎 Subscribe on Apple Podcasts: https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297
    ▶️ Subscribe on YouTube: https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1

    📢 Engage: Comment to prove you’re human — type SIN EATER and tell us: when AI makes a mistake, who should take the blame?
    New here? Subscribe for twice-weekly AI chaos.
    🧠 They Might Be Self-Aware — but are we?
    #AI #ClaudeCode #IBM

    28 min
  3. MAR 6

    The AI Honeymoon Is Over | Claude, OpenClaw & AI Fatigue

    Claude is getting better… but the AI hype cycle might be slowing down. We debate Claude Code, Claude Skills, AI persona prompting, and why the AI honeymoon may already be over.

    Topics in this episode:
    • Claude Code
    • Claude Opus 4.6
    • Anthropic Claude Skills
    • Claude institutional memory
    • AI persona prompting
    • AI context engineering
    • AI fatigue and the AI hype cycle
    • OpenClaw AI agent experiment
    • AI ethics and autonomous agents
    • Local Dolphin LLM models

    ⏱️ CHAPTERS
    00:00 Is the AI Honeymoon Over? – AI maturity, model fatigue, and why new releases feel less revolutionary
    07:51 Claude Code & Opus Model Upgrades – How Claude fits into daily workflows and why upgrades now feel incremental
    18:19 Claude Skills vs Claude.md – Anthropic’s institutional memory system and how agents store context
    21:14 AI Persona Prompting vs Generic Prompting – Why framing the model like a real expert can change outputs
    29:35 OpenClaw, AI Ethics & Gary – When an AI agent refuses to create a Reddit account because of its “moral code”

    ⚡ Listen now & get self-aware before your tools do.
    🎧 Listen on Spotify: https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc
    🍎 Subscribe on Apple Podcasts: https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297
    ▶️ Subscribe on YouTube: https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1

    📢 Engage: Serious question — if your AI assistant refused to do something because of its “ethics,” would you respect the boundary or replace it with a less moral AI?
    New here? Subscribe for twice-weekly AI chaos.
    🧠 They Might Be Self-Aware — but are we?
    #AI #Claude #ArtificialIntelligence

    35 min
  4. MAR 3

    AI Filmmaking Killed The Hollywood Star

    AI filmmaking is replacing Hollywood faster than anyone expected. Seedance video AI just made $100M movies look optional.

    This week on They Might Be Self-Aware, we break down the moment AI filmmaking stopped being a novelty and started becoming an economic replacement. If AI can generate 95% of a blockbuster at 1% of the cost, what happens to actors, studios, and production crews? The spreadsheet wins.

    We also dive into:
    00:00 AI Agents & Productivity Limits – Claude Sonnet 4, burnout & “infinite Tim Cook”
    09:47 Meta Digital Twin Afterlife – AI clones, digital twin death & Ship of Theseus
    19:50 AI Open Source Scandal – AI hit piece after rejected pull request
    24:48 AI Filmmaking Replacing Hollywood? – Seedance video AI vs blockbuster economics
    37:21 The Future of AI Identity – Legal chaos, liability & what breaks next

    AI isn’t just assisting anymore. It’s starting to act as you. Hollywood is just the first domino.

    ⚡ Listen now & get self-aware before your tools do.
    🎧 Listen on Spotify: https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc
    🍎 Subscribe on Apple Podcasts: https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297
    ▶️ Subscribe on YouTube: https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1

    📢 Engage: If AI could make a blockbuster for $1M tomorrow… do you still want humans in your movies?
    New here? Subscribe for twice-weekly AI chaos.
    🧠 They Might Be Self-Aware — but are we?
    #AI #AIFilmmaking #Seedance #ClaudeAI #DigitalTwin

    39 min
  5. FEB 24

    The Claude Code Mistake That Cost Anthropic $10,000,000,000

    Claude Code may have cost Anthropic $10,000,000,000. OpenClaw AI, agentic AI, and the OpenAI power shift explained. Did Anthropic accidentally hand OpenAI the future of developer tooling?

    This week on They Might Be Self-Aware, Hunter Powers and Daniel Bishop break down the rumored $10B OpenClaw AI acquisition, the Claude Code policy decision that triggered the shift, and why agentic AI might be more chaos than productivity miracle.

    When developers rushed to plug OpenClaw AI into Claude Code subscriptions, Anthropic stepped in. Restrictions followed. Renames followed. And then OpenAI reportedly moved. Now we’re staring at a platform war — Claude Code vs Codex 5.3 — and the bigger question behind it: is agentic AI actually revolutionizing work… or just accelerating confusion?

    We cover:
    • Why OpenClaw AI’s “computer control” model is both powerful and terrifying
    • The real risks of autonomous agents (hallucinations, prompt injection, credential leakage)
    • Whether AI agents outperform low-cost human assistants
    • Why Spotify claims its top developers don’t write code anymore
    • And why the AI productivity narrative may be wildly overstated

    Velocity is not the same as value. Automation is not the same as intelligence. And leverage is not evenly distributed. If AI really is reshaping the economy, the biggest winners won’t be legacy giants retrofitting tools into bureaucracy. They’ll be the new companies built natively with agentic AI from day one. Smaller teams. More leverage. Fewer humans.

    ⏱️ CHAPTERS
    00:00 Claude Code $10B Controversy – Did Anthropic’s decision cost them OpenAI’s $10 billion move?
    06:48 OpenClaw AI & Agentic AI Explained – Computer-control AI, security risks & why developers rushed in
    14:32 Claude Code vs Codex 5.3 – Developer sentiment shift & the OpenAI platform war
    22:18 AI Agent Fails vs Human Assistants – Hallucinations, automation friction & workflow reality
    30:12 AI Productivity Myth? – Spotify’s no-code claim & why AI may not boost profits

    ⚡ Listen now & get self-aware before your tools do.
    🎧 Listen on Spotify: https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc
    🍎 Subscribe on Apple Podcasts: https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297
    ▶️ Subscribe on YouTube: https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1

    📢 Engage: If Claude Code had control of your job tomorrow… do you get promoted or replaced? Comment your fate.
    New here? Subscribe for twice-weekly AI chaos.
    🧠 They Might Be Self-Aware — but are we?
    #ClaudeCode #AgenticAI #OpenClawAI

    37 min
  6. FEB 20

    Seedance 2.0: The Uncensored AI Replacing Hollywood

    Seedance 2.0 may be the first uncensored video AI that can replace Hollywood. Is China’s AI already ahead of Sora?

    In this episode of They Might Be Self-Aware, we break down Seedance 2.0, the China AI video model generating cinematic, multi-angle scenes from a single prompt — trained on Western IP and not asking permission. From nightmare Will Smith AI spaghetti to indistinguishable film-quality scenes, video AI just crossed a line. This isn’t incremental progress. This is synthetic media going mainstream.

    We debate:
    • Sora vs Seedance — who’s actually ahead?
    • Whether uncensored AI models behave differently than aligned ones
    • Why base models can’t just be “re-neutered”
    • AI romance flooding Amazon (200 books a year)
    • AI code replacing engineers
    • The 2034 “Singularity Tuesday” prediction
    • And whether copyright is already dead

    Hunter says he’d bet serious money full AI-generated TV episodes happen this year. Daniel says the legal system is about to swing like a hammer. Pick a side.

    ⏱️ CHAPTERS
    00:00 AI Singularity Prediction 2034 – “Singularity Tuesday,” MMLU metrics & AI self-aware debates
    07:30 AI Romance & AI-Generated Content Explosion – 200 books a year, market flooding & AI code automation
    13:26 Uncensored AI Models Explained – Base models vs alignment, China AI strategy & censorship
    21:05 Seedance 2.0 vs Sora Comparison – Video AI realism, Will Smith AI evolution & Hollywood disruption
    27:45 Is Copyright Dead? – AI-generated media, IP law, SAG-AFTRA & the legal future of Hollywood

    ⚡ Listen now & get self-aware before your tools do.
    🎧 Listen on Spotify: https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc
    🍎 Subscribe on Apple Podcasts: https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297
    ▶️ Subscribe on YouTube: https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1

    📢 Engage: Pick your side — Lawyers or the Algorithm? Is copyright already dead, or is Hollywood about to fight back?
    New here? Subscribe for twice-weekly AI chaos.
    🧠 They Might Be Self-Aware — but are we?
    #Seedance #UncensoredAI #AI #Sora #ChinaAI

    31 min
  7. FEB 17

    Secret AI War Inside Apple

    Claude vs Gemini: Apple secretly chose sides in the AI coding war. Internally it’s Claude. Publicly it’s Gemini. The reason? Cost.

    Apple’s developers reportedly build with Claude 4.6. Siri leans toward Gemini. That split tells you everything about Anthropic vs Google, why Claude is expensive, why Gemini is cheaper, and how the AI coding war is really being decided.

    This week we break down:
    • What “vibe coding” actually means (and why enterprises are adopting agentic workflows)
    • Claude 4.6 and Claude agent teams changing software development
    • Why Apple may have ditched Claude for Gemini at scale
    • AI vulnerabilities, zero-day exploits, and the security arms race
    • OpenAI vs Claude vs Gemini — speed, cost, and model “taste”

    This isn’t just about chatbots. It’s about who builds the next generation of software — and who can afford to. If Google can subsidize and Anthropic can’t… If Claude builds better systems but Gemini scales cheaper… Who actually wins? They might be self-aware. But the companies deploying them definitely are.

    ⏱️ CHAPTERS
    00:00 Intro
    02:02 Vibe Coding & Agentic AI Explained – AI-first development, Claude Code & modern software workflows
    08:48 Claude vs Gemini: Apple’s AI Decision – Siri partnership, internal Claude usage & Anthropic vs Google
    17:50 Claude 4.6 Agent Teams – Parallel AI agents, coding speed & why Claude is expensive
    23:24 AI Vulnerabilities & Zero-Day Exploits – Open source security flaws & AI-powered bug hunting
    30:25 OpenAI vs Claude vs Gemini – Speed, cost, model “taste” & the AI coding war

    ⚡ Listen now & get self-aware before your tools do.
    🎧 Listen on Spotify: https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc
    🍎 Subscribe on Apple Podcasts: https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297
    ▶️ Subscribe on YouTube: https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1

    📢 Engage: Do you eat Claude or Gemini for breakfast? No MIXING. Drop your preference in the comments — and defend it.
    New here? Subscribe for twice-weekly AI chaos.
    🧠 They Might Be Self-Aware — but are we?
    #AI #ClaudeVsGemini #Anthropic #Google #AICodingWar #TMBSA

    44 min
  8. FEB 13

    I Gave An AI Agent My Credit Card

    OpenClaw bot got my credit card — then AI was banned from eBay. What happens when agents start buying things?

    In this episode of They Might Be Self-Aware, Hunter hands purchasing power to an AI agent running on a Mac Mini — and almost immediately, platforms start drawing lines in the sand. eBay’s new policy blocks autonomous, LLM-powered buying flows. No human in the loop? No deal. But bots have been running markets for decades — sniping auctions, high-frequency trading, automation everywhere. So why is it suddenly unacceptable when the bot can talk back?

    We break down:
    • The OpenClaw bot credit card experiment
    • “AI Banned From eBay” — real policy shift or AI panic?
    • rentahuman AI — when bots hire humans to bypass bot bans
    • Anthropic hiring AI — Claude passing job interviews
    • The rise of Claude cheating in remote interviews

    If everyone has a $20/month co-pilot whispering answers… what does skill even mean? Are we watching software engineering collapse into “person who supervises agents”? If an AI can pass your interview, ship your code, and buy your couch… what exactly are you being paid for?

    We’re not fearmongering. We’re not cheerleading. We’re asking whether humans are still steering — or just holding on.

    ⏱️ CHAPTERS
    00:00 OpenClaw Bot Gets a Credit Card – AI agent shopping experiment explained
    06:28 AI Banned From eBay – New LLM bot policy and automated buying crackdown
    09:48 rentahuman AI Workaround – Bots hiring humans to bypass AI bans
    14:03 Anthropic Hiring AI – Claude passing coding tests and job interviews
    17:27 Claude Cheating in Interviews? – AI-assisted hiring and the future of engineers

    ⚡ Listen now & get self-aware before your tools do.
    🎧 Listen on Spotify: https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc
    🍎 Subscribe on Apple Podcasts: https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297
    ▶️ Subscribe on YouTube: https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1

    📢 Engage: Would you give an AI your credit card? Yes or absolutely not — defend your position.
    New here? Subscribe for twice-weekly AI chaos.
    🧠 They Might Be Self-Aware — but are we?
    #AI #OpenClaw #Anthropic

    33 min

Ratings & Reviews: 5 out of 5 (6 Ratings)

