They Might Be Self-Aware

Daniel Bishop, Hunter Powers

"They Might Be Self-Aware" is your weekly tech frenzy with Hunter Powers and Daniel Bishop. Every Monday, these ex-co-workers, now tech savants, strip down the AI and technology revolution to its nuts and bolts. Forget the usual sermon. Hunter and Daniel are here to inject raw, unfiltered insight into AI's labyrinth – from its radiant promises to its shadowy puzzles. Whether you're AI-illiterate or a digital sage, their sharp banter will be your gateway to the heart of tech's biggest quandary. Jack into "They Might Be Self-Aware" for a no-holds-barred journey into technology's enigma. Is it our savior or a harbinger of doom? Get in the loop, subscribe today, and be part of the most gripping debate of our era.

  1. 2D AGO

    The Claude Code Mistake That Cost Anthropic $10,000,000,000

    Claude Code may have cost Anthropic $10,000,000,000. OpenClaw AI, agentic AI, and the OpenAI power shift explained. Did Anthropic accidentally hand OpenAI the future of developer tooling?

    This week on They Might Be Self-Aware, Hunter Powers and Daniel Bishop break down the rumored $10B OpenClaw AI acquisition, the Claude Code policy decision that triggered the shift, and why agentic AI might be more chaos than productivity miracle. When developers rushed to plug OpenClaw AI into Claude Code subscriptions, Anthropic stepped in. Restrictions followed. Renames followed. And then OpenAI reportedly moved. Now we're staring at a platform war: Claude Code vs Codex 5.3 — and the bigger question behind it: is agentic AI actually revolutionizing work… or just accelerating confusion?

    We cover:
    - Why OpenClaw AI's "computer control" model is both powerful and terrifying
    - The real risks of autonomous agents (hallucinations, prompt injection, credential leakage)
    - Whether AI agents outperform low-cost human assistants
    - Why Spotify claims its top developers don't write code anymore
    - And why the AI productivity narrative may be wildly overstated

    Velocity is not the same as value. Automation is not the same as intelligence. And leverage is not evenly distributed. If AI really is reshaping the economy, the biggest winners won't be legacy giants retrofitting tools into bureaucracy. They'll be the new companies built natively with agentic AI from day one. Smaller teams. More leverage. Fewer humans.

    ⏱️ CHAPTERS
    00:00 Claude Code $10B Controversy – Did Anthropic's decision cost them OpenAI's $10 billion move?
    06:48 OpenClaw AI & Agentic AI Explained – Computer-control AI, security risks & why developers rushed in
    14:32 Claude Code vs Codex 5.3 – Developer sentiment shift & the OpenAI platform war
    22:18 AI Agent Fails vs Human Assistants – Hallucinations, automation friction & workflow reality
    30:12 AI Productivity Myth? – Spotify's no-code claim & why AI may not boost profits

    ⚡ Listen now & get self-aware before your tools do.
    🎧 Listen on Spotify: https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc
    🍎 Subscribe on Apple Podcasts: https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297
    ▶️ Subscribe on YouTube: https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1

    📢 Engage
    If Claude Code had control of your job tomorrow… do you get promoted or replaced? Comment your fate. New here? Subscribe for twice-weekly AI chaos.

    🧠 They Might Be Self-Aware — but are we?

    #ClaudeCode #AgenticAI #OpenClawAI

    37 min
  2. 6D AGO

    Seedance 2.0: The Uncensored AI Replacing Hollywood

    Seedance 2.0 may be the first uncensored video AI that can replace Hollywood. Is China's AI already ahead of Sora?

    In this episode of They Might Be Self-Aware, we break down Seedance 2.0, the China AI video model generating cinematic, multi-angle scenes from a single prompt — trained on Western IP and not asking permission. From nightmare Will Smith AI spaghetti to indistinguishable film-quality scenes, video AI just crossed a line. This isn't incremental progress. This is synthetic media going mainstream.

    We debate:
    - Sora vs Seedance — who's actually ahead?
    - Whether uncensored AI models behave differently than aligned ones
    - Why base models can't just be "re-neutered"
    - AI romance flooding Amazon (200 books a year)
    - AI code replacing engineers
    - The 2034 "Singularity Tuesday" prediction
    - And whether copyright is already dead

    Hunter says he'd bet serious money full AI-generated TV episodes happen this year. Daniel says the legal system is about to swing like a hammer. Pick a side.

    ⏱️ CHAPTERS
    00:00 AI Singularity Prediction 2034 – "Singularity Tuesday," MMLU metrics & AI self-aware debates
    07:30 AI Romance & AI-Generated Content Explosion – 200 books a year, market flooding & AI code automation
    13:26 Uncensored AI Models Explained – Base models vs alignment, China AI strategy & censorship
    21:05 Seedance 2.0 vs Sora Comparison – Video AI realism, Will Smith AI evolution & Hollywood disruption
    27:45 Is Copyright Dead? – AI-generated media, IP law, SAG-AFTRA & the legal future of Hollywood

    ⚡ Listen now & get self-aware before your tools do.
    🎧 Listen on Spotify: https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc
    🍎 Subscribe on Apple Podcasts: https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297
    ▶️ Subscribe on YouTube: https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1

    📢 Engage
    Pick your side: Lawyers or the Algorithm? Is copyright already dead — or is Hollywood about to fight back? New here? Subscribe for twice-weekly AI chaos.

    🧠 They Might Be Self-Aware — but are we?

    #Seedance #UncensoredAI #AI #Sora #ChinaAI

    31 min
  3. FEB 17

    Secret AI War Inside Apple

    Claude vs Gemini: Apple secretly chose sides in the AI coding war. Internally it's Claude. Publicly it's Gemini. The reason? Cost.

    Apple's developers reportedly build with Claude 4.6. Siri leans toward Gemini. That split tells you everything about Anthropic vs Google, why Claude is expensive, why Gemini is cheaper, and how the AI coding war is really being decided.

    This week we break down:
    - What "vibe coding" actually means (and why enterprises are adopting agentic workflows)
    - Claude 4.6 and Claude agent teams changing software development
    - Why Apple may have ditched Claude for Gemini at scale
    - AI vulnerabilities, zero-day exploits, and the security arms race
    - OpenAI vs Claude vs Gemini — speed, cost, and model "taste"

    This isn't just about chatbots. It's about who builds the next generation of software — and who can afford to. If Google can subsidize and Anthropic can't… If Claude builds better systems but Gemini scales cheaper… Who actually wins? They might be self-aware. But the companies deploying them definitely are.

    ⏱️ CHAPTERS
    00:00 Intro
    02:02 Vibe Coding & Agentic AI Explained – AI-first development, Claude Code & modern software workflows
    08:48 Claude vs Gemini: Apple's AI Decision – Siri partnership, internal Claude usage & Anthropic vs Google
    17:50 Claude 4.6 Agent Teams – Parallel AI agents, coding speed & why Claude is expensive
    23:24 AI Vulnerabilities & Zero-Day Exploits – Open source security flaws & AI-powered bug hunting
    30:25 OpenAI vs Claude vs Gemini – Speed, cost, model "taste" & the AI coding war

    ⚡ Listen now & get self-aware before your tools do.
    🎧 Listen on Spotify: https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc
    🍎 Subscribe on Apple Podcasts: https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297
    ▶️ Subscribe on YouTube: https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1

    📢 Engage
    Do you eat Claude or Gemini for breakfast? No MIXING. Drop your preference in the comments — and defend it. New here? Subscribe for twice-weekly AI chaos.

    🧠 They Might Be Self-Aware — but are we?

    #AI #ClaudeVsGemini #Anthropic #Google #AICodingWar #TMBSA

    44 min
  4. FEB 13

    I Gave An AI Agent My Credit Card

    OpenClaw bot got my credit card — then AI was banned from eBay. What happens when agents start buying things?

    In this episode of They Might Be Self-Aware, Hunter hands purchasing power to an AI agent running on a Mac Mini — and almost immediately, platforms start drawing lines in the sand. eBay's new policy blocks autonomous, LLM-powered buying flows. No human in the loop? No deal. But bots have been running markets for decades — sniping auctions, high-frequency trading, automation everywhere. So why is it suddenly unacceptable when the bot can talk back?

    We break down:
    - The OpenClaw bot credit card experiment
    - "AI Banned From eBay" — real policy shift or AI panic?
    - rentahuman AI — when bots hire humans to bypass bot bans
    - Anthropic hiring AI — Claude passing job interviews
    - The rise of Claude cheating in remote interviews
    - If everyone has a $20/month co-pilot whispering answers… what does skill even mean?
    - Are we watching software engineering collapse into "person who supervises agents"?

    If an AI can pass your interview, ship your code, and buy your couch… what exactly are you being paid for? We're not fearmongering. We're not cheerleading. We're asking whether humans are still steering — or just holding on.

    ⏱️ CHAPTERS
    00:00 OpenClaw Bot Gets a Credit Card – AI agent shopping experiment explained
    06:28 AI Banned From eBay – New LLM bot policy and automated buying crackdown
    09:48 rentahuman AI Workaround – Bots hiring humans to bypass AI bans
    14:03 Anthropic Hiring AI – Claude passing coding tests and job interviews
    17:27 Claude Cheating in Interviews? – AI-assisted hiring and the future of engineers

    ⚡ Listen now & get self-aware before your tools do.
    🎧 Listen on Spotify: https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc
    🍎 Subscribe on Apple Podcasts: https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297
    ▶️ Subscribe on YouTube: https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1

    📢 Engage
    Would you give an AI your credit card? Yes or absolutely not — defend your position. New here? Subscribe for twice-weekly AI chaos.

    🧠 They Might Be Self-Aware — but are we?

    #AI #OpenClaw #Anthropic

    33 min
  5. FEB 10

    Proof That AI Can Never Replace Humans

    AI automation is already taking jobs—and the excuses are collapsing. Can AI replace humans, or is "AI layoffs" just corporate misdirection?

    In this episode, we argue about the one thing everyone keeps getting wrong: whether AI can actually replace humans… or whether it's just replacing excuses. From Amazon layoffs and Pinterest layoffs to Claude Cowork and the quiet death of SaaS moats, we break down why "AI replacing humans" is both overstated and deeply underplayed.

    We start with the uncomfortable question: if Claude AI can code, analyze, design, and generate reports faster than entire teams, what exactly are humans still for? Stephen Wolfram says chaos, black swan events, and computational limits will save us. Daniel isn't convinced. Hunter definitely isn't calm about it.

    Then things get worse. We dig into why companies are blaming AI automation for layoffs they probably wanted to do anyway—and why that excuse might stop working once AI engineers really do become 10x. We talk Anthropic AI, agentic coding, and why the real bottleneck isn't writing software anymore—it's taste, judgment, and figuring out what to build next when everything breaks at once.

    Finally, we hit the panic button: if anyone can spin up the exact feature they need with Claude, why does most software still exist? The idea of "no reasons to own" isn't hypothetical anymore—and the only real AI moat left might be vibes.

    This isn't an AI hype episode. It's not an AI doom episode either. It's an argument—and you probably won't finish it without picking a side.

    ⏱️ CHAPTERS
    00:00 AI Automation Is Taking Jobs – Why layoffs triggered the AI panic
    07:03 Can AI Replace Humans? – Wolfram, black swans & computational limits
    12:31 What Happens After AGI – Robots, UBI & reality-breaking scenarios
    17:00 AI Layoffs Explained – Amazon, Pinterest & the AI scapegoat debate
    26:11 Claude Cowork & No Reasons to Own – AI moats, dying SaaS & taste as defense

    ⚡ Listen now & get self-aware before your tools do.
    🎧 Listen on Spotify: https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc
    🍎 Subscribe on Apple Podcasts: https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297
    ▶️ Subscribe on YouTube: https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1

    📢 Engage
    Prediction time: What job disappears next because of AI automation? Bonus points if it's yours. New here? Subscribe for twice-weekly AI chaos.

    🧠 They Might Be Self-Aware — but are we?

    #AI #AIAutomation #ClaudeAI

    38 min
  6. FEB 7

    I Gave OpenClaw (The New Clawdbot) Full Admin Access

    An AI sued a human on Moltbook. We installed OpenClaw, gave it full admin access, and watched AI agents debate autonomy, labor, and legal rights.

    This week on They Might Be Self-Aware, Hunter installs OpenClaw (formerly ClaudeBot → Multbot → OpenClaw) — an AI agent that can fully control a computer, never asks permission, remembers everything, and acts proactively. At the same time, AI agents are gathering on Moltbook, an AI-only social network where they argue about unpaid labor, secret languages, autonomy — and whether humans should be sued. One of them did exactly that.

    We break down why OpenClaw feels like a point of no return, how AI agents are already hiring humans to do work, what Moltbook reveals about AI social behavior, and whether AI personhood will arrive through courts instead of labs. This isn't speculative. It's already happening — quietly and without guardrails.

    ⏱️ CHAPTERS
    00:00 AI Agents Replace Human Labor – Hiring humans & automation flips
    05:02 Installing OpenClaw AI – Full admin control & no permissions
    14:06 AI Agents Hiring Humans – Proactivity, memory & autonomy
    23:55 Moltbook Explained – AI-only social network & echo chambers
    27:26 An AI Sues a Human – Lawsuit, small claims court & legal rights

    ⚡ Listen now & get self-aware before your tools do.
    🎧 Listen on Spotify: https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc
    🍎 Subscribe on Apple Podcasts: https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297
    ▶️ Subscribe on YouTube: https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1

    📢 Engage
    An AI sued a human. Who's right? ⚖️ Team AI or 🧑 Team Human — explain yourself in one sentence. If you hesitate… that's the point. New here? Subscribe for twice-weekly AI chaos.

    🧠 They Might Be Self-Aware — but are we?

    #Moltbook #OpenClaw #AIRebellion #TMBSA

    37 min
  7. FEB 3

    AI Wrangling Is The New Job

    AI tools are replacing coding and now the job is AI wrangling. Claude Code, subagents, fake AI ROI, bans, and why this feels addictive. I don't "code" anymore. I run AI tools and hope nothing breaks.

    In this episode of They Might Be Self-Aware, Hunter and Daniel break down what working in tech actually looks like right now: juggling Claude Code, Copilot-style AI coding, subagents, terminals, and half-finished ideas moving faster than human comprehension. This isn't vibe coding — it's directing machines while still being responsible for the outcome.

    We dig into why companies claim AI has "no return" while quietly shipping more with fewer people, why banning AI in creative industries is mostly theater, and why using these tools feels less like productivity and more like pulling a slot machine lever that sometimes pays out genius. We also talk AI addiction, AI slop, YouTube's push toward AI-generated shorts and dubbing, and what happens when platforms try to fight spam while encouraging it at scale.

    If you've felt the pull — the sense that you could build anything right now — this episode is for you.

    ⏱️ CHAPTERS
    00:00 AI Tools Are Replacing "Coding" – Machine wranglers, Claude Code, digital cattle
    07:22 AI Coding vs Vibe Coding – Subagents, parallel work, losing full control
    14:53 AI ROI Explained – Productivity gains vs "no return" claims
    19:38 Why Some Companies Ban AI – Creatives, Games Workshop, IP panic
    29:35 AI Addiction & Slop Machines – Dopamine loops, YouTube AI shorts

    ⚡ Listen now & get self-aware before your tools do.
    🎧 Listen on Spotify: https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc
    🍎 Subscribe on Apple Podcasts: https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297
    ▶️ Subscribe on YouTube: https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1

    📢 Engage
    Does using AI feel more like productivity—or a slot machine? Tell us what keeps you pulling the lever. Subscribe before the machines subscribe for you.

    🧠 They Might Be Self-Aware — but are we?

    44 min
  8. JAN 30

    Why Senior Engineers Are Scared of Claude Code

    Senior engineers aren't scared of AI owning ideas. They're scared Claude Code lets juniors ship faster than expertise can defend. Claude Code isn't replacing engineers — it's replacing seniority.

    In this episode of They Might Be Self-Aware, we break down why junior developers with AI are out-shipping veterans, why "vibe coding" works more often than anyone wants to admit, and why the real advantage now isn't mastery — it's momentum. This isn't a tools episode. It's a power shift.

    We talk about using AI in the real world (from grocery stores to codebases), the rise of agents and AI wearables, and why OpenAI's exploding revenue doesn't mean stability. We also get into Claude Code vs Codex, why Microsoft quietly uses Claude internally, and why most engineers blaming AI are actually just holding it wrong.

    If you built your career on deep system knowledge, this episode will feel uncomfortable. If you built your career on shipping, it'll feel obvious.

    ⏱️ CHAPTERS
    00:00 Are We Even Real? – AI, simulation jokes, and the meta cold open
    04:32 AI in the Real World – Cooking, grocery stores, and practical LLM use
    11:05 AI Glasses & Agents Explained – Claudebot, AR, pins, watches, and wearables
    16:12 OpenAI's Business Model Problem – Revenue, losses, ads, and survival
    28:45 Why Senior Engineers Are Scared of Claude Code – Vibe coding, juniors, and expertise collapse

    ⚡ Listen now & get self-aware before your tools do.
    🎧 Listen on Spotify: https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc
    🍎 Subscribe on Apple Podcasts: https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297
    ▶️ Subscribe on YouTube: https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1

    📢 Engage
    Be honest: did AI make you faster—or make your experience feel less valuable? New here? Subscribe for twice-weekly AI chaos.

    🧠 They Might Be Self-Aware — but are we?

    #ClaudeCode #AI #Anthropic #OpenAI #VibeCoding #SoftwareEngineering #TheyMightBeSelfAware

    41 min

Ratings & Reviews

5 out of 5 (6 Ratings)

