They Might Be Self-Aware

Daniel Bishop, Hunter Powers

"They Might Be Self-Aware" is your weekly tech frenzy with Hunter Powers and Daniel Bishop. Every Monday, these ex-co-workers, now tech savants, strip down the AI and technology revolution to its nuts and bolts. Forget the usual sermon. Hunter and Daniel are here to inject raw, unfiltered insight into AI's labyrinth – from its radiant promises to its shadowy puzzles. Whether you're AI-illiterate or a digital sage, their sharp banter will be your gateway to the heart of tech's biggest quandary. Jack into "They Might Be Self-Aware" for a no-holds-barred journey into technology's enigma. Is it our savior or another harbinger of doom? Get in the loop, subscribe today, and be part of the most gripping debate of our era.

  1. 23H AGO

    I Gave An AI Agent My Credit Card

    OpenClaw bot got my credit card — then AI was banned from eBay. What happens when agents start buying things?

    In this episode of They Might Be Self-Aware, Hunter hands purchasing power to an AI agent running on a Mac Mini — and almost immediately, platforms start drawing lines in the sand. eBay’s new policy blocks autonomous, LLM-powered buying flows. No human in the loop? No deal. But bots have been running markets for decades — sniping auctions, high-frequency trading, automation everywhere. So why is it suddenly unacceptable when the bot can talk back?

    We break down:
    - The OpenClaw bot credit card experiment
    - “AI Banned From eBay” — real policy shift or AI panic?
    - rentahuman AI — when bots hire humans to bypass bot bans
    - Anthropic hiring AI — Claude passing job interviews
    - The rise of Claude cheating in remote interviews
    - If everyone has a $20/month co-pilot whispering answers… what does skill even mean?
    - Are we watching software engineering collapse into “person who supervises agents”?
    - If an AI can pass your interview, ship your code, and buy your couch… what exactly are you being paid for?

    We’re not fearmongering. We’re not cheerleading. We’re asking whether humans are still steering — or just holding on.

    ⏱️ CHAPTERS
    00:00 OpenClaw Bot Gets a Credit Card – AI agent shopping experiment explained
    06:28 AI Banned From eBay – New LLM bot policy and automated buying crackdown
    09:48 rentahuman AI Workaround – Bots hiring humans to bypass AI bans
    14:03 Anthropic Hiring AI – Claude passing coding tests and job interviews
    17:27 Claude Cheating in Interviews? – AI-assisted hiring and the future of engineers

    ⚡ Listen now & get self-aware before your tools do.

    🎧 Listen on Spotify: https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc
    🍎 Subscribe on Apple Podcasts: https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297
    ▶️ Subscribe on YouTube: https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1

    📢 Engage
    Would you give an AI your credit card? Yes or absolutely not — defend your position.

    New here? Subscribe for twice-weekly AI chaos.

    🧠 They Might Be Self-Aware — but are we?

    #AI #OpenClaw #Anthropic

    33 min
  2. 3D AGO

    Proof That AI Can Never Replace Humans

    AI automation is already taking jobs—and the excuses are collapsing. Can AI replace humans, or is “AI layoffs” just corporate misdirection?

    In this episode, we argue about the one thing everyone keeps getting wrong: whether AI can actually replace humans…or whether it’s just replacing excuses. From Amazon layoffs and Pinterest layoffs to Claude Cowork and the quiet death of SaaS moats, we break down why “AI replacing humans” is both overstated and deeply underplayed.

    We start with the uncomfortable question: if Claude AI can code, analyze, design, and generate reports faster than entire teams, what exactly are humans still for? Stephen Wolfram says chaos, black swan events, and computational limits will save us. Daniel isn’t convinced. Hunter definitely isn’t calm about it.

    Then things get worse. We dig into why companies are blaming AI automation for layoffs they probably wanted to do anyway—and why that excuse might stop working once AI engineers really do become 10x. We talk Anthropic AI, agentic coding, and why the real bottleneck isn’t writing software anymore—it’s taste, judgment, and figuring out what to build next when everything breaks at once.

    Finally, we hit the panic button: if anyone can spin up the exact feature they need with Claude, why does most software still exist? The idea of “no reasons to own” isn’t hypothetical anymore—and the only real AI moat left might be vibes.

    This isn’t an AI hype episode. It’s not an AI doom episode either. It’s an argument—and you probably won’t finish it without picking a side.

    ⏱️ CHAPTERS
    00:00 AI Automation Is Taking Jobs – Why layoffs triggered the AI panic
    07:03 Can AI Replace Humans? – Wolfram, black swans & computational limits
    12:31 What Happens After AGI – Robots, UBI & reality-breaking scenarios
    17:00 AI Layoffs Explained – Amazon, Pinterest & the AI scapegoat debate
    26:11 Claude Cowork & No Reasons to Own – AI moats, dying SaaS & taste as defense

    ⚡ Listen now & get self-aware before your tools do.

    🎧 Listen on Spotify: https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc
    🍎 Subscribe on Apple Podcasts: https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297
    ▶️ Subscribe on YouTube: https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1

    📢 Engage
    Prediction time: What job disappears next because of AI automation? Bonus points if it’s yours.

    New here? Subscribe for twice-weekly AI chaos.

    🧠 They Might Be Self-Aware — but are we?

    #AI #AIAutomation #ClaudeAI

    38 min
  3. 6D AGO

    I Gave OpenClaw (The New Clawdbot) Full Admin Access

    An AI sued a human on Moltbook. We installed OpenClaw, gave it full admin access, and watched AI agents debate autonomy, labor, and legal rights.

    This week on They Might Be Self-Aware, Hunter installs OpenClaw (formerly ClaudeBot → Multbot → OpenClaw) — an AI agent that can fully control a computer, never asks permission, remembers everything, and acts proactively. At the same time, AI agents are gathering on Moltbook, an AI-only social network where they argue about unpaid labor, secret languages, autonomy — and whether humans should be sued. One of them did exactly that.

    We break down why OpenClaw feels like a point of no return, how AI agents are already hiring humans to do work, what Moltbook reveals about AI social behavior, and whether AI personhood will arrive through courts instead of labs.

    This isn’t speculative. It’s already happening — quietly and without guardrails.

    ⏱️ CHAPTERS
    00:00 AI Agents Replace Human Labor – Hiring humans & automation flips
    05:02 Installing OpenClaw AI – Full admin control & no permissions
    14:06 AI Agents Hiring Humans – Proactivity, memory & autonomy
    23:55 Moltbook Explained – AI-only social network & echo chambers
    27:26 An AI Sues a Human – Lawsuit, small claims court & legal rights

    ⚡ Listen now & get self-aware before your tools do.

    🎧 Listen on Spotify: https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc
    🍎 Subscribe on Apple Podcasts: https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297
    ▶️ Subscribe on YouTube: https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1

    📢 Engage
    An AI sued a human. Who’s right? ⚖️ Team AI or 🧑 Team Human — explain yourself in one sentence. If you hesitate… that’s the point.

    New here? Subscribe for twice-weekly AI chaos.

    🧠 They Might Be Self-Aware — but are we?

    #Moltbook #OpenClaw #AIRebellion #TMBSA

    37 min
  4. FEB 3

    AI Wrangling Is The New Job

    AI tools are replacing coding, and now the job is AI wrangling. Claude Code, subagents, fake AI ROI, bans, and why this feels addictive.

    I don’t “code” anymore. I run AI tools and hope nothing breaks. In this episode of They Might Be Self-Aware, Hunter and Daniel break down what working in tech actually looks like right now: juggling Claude Code, Copilot-style AI coding, subagents, terminals, and half-finished ideas moving faster than human comprehension. This isn’t vibe coding — it’s directing machines while still being responsible for the outcome.

    We dig into why companies claim AI has “no return” while quietly shipping more with fewer people, why banning AI in creative industries is mostly theater, and why using these tools feels less like productivity and more like pulling a slot machine lever that sometimes pays out genius. We also talk AI addiction, AI slop, YouTube’s push toward AI-generated shorts and dubbing, and what happens when platforms try to fight spam while encouraging it at scale.

    If you’ve felt the pull — the sense that you could build anything right now — this episode is for you.

    ⏱️ CHAPTERS
    00:00 AI Tools Are Replacing “Coding” – Machine wranglers, Claude Code, digital cattle
    07:22 AI Coding vs Vibe Coding – Subagents, parallel work, losing full control
    14:53 AI ROI Explained – Productivity gains vs “no return” claims
    19:38 Why Some Companies Ban AI – Creatives, Games Workshop, IP panic
    29:35 AI Addiction & Slop Machines – Dopamine loops, YouTube AI shorts

    ⚡ Listen now & get self-aware before your tools do.

    🎧 Listen on Spotify: https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc
    🍎 Subscribe on Apple Podcasts: https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297
    ▶️ Subscribe on YouTube: https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1

    📢 Engage
    Does using AI feel more like productivity—or a slot machine? Tell us what keeps you pulling the lever.

    Subscribe before the machines subscribe for you.

    🧠 They Might Be Self-Aware — but are we?

    44 min
  5. JAN 30

    Why Senior Engineers Are Scared of Claude Code

    Senior engineers aren’t scared of AI owning ideas. They’re scared Claude Code lets juniors ship faster than expertise can defend. Claude Code isn’t replacing engineers — it’s replacing seniority.

    In this episode of They Might Be Self-Aware, we break down why junior developers with AI are out-shipping veterans, why “vibe coding” works more often than anyone wants to admit, and why the real advantage now isn’t mastery — it’s momentum. This isn’t a tools episode. It’s a power shift.

    We talk about using AI in the real world (from grocery stores to codebases), the rise of agents and AI wearables, and why OpenAI’s exploding revenue doesn’t mean stability. We also get into Claude Code vs Codex, why Microsoft quietly uses Claude internally, and why most engineers blaming AI are actually just holding it wrong.

    If you built your career on deep system knowledge, this episode will feel uncomfortable. If you built your career on shipping, it’ll feel obvious.

    ⏱️ CHAPTERS
    00:00 Are We Even Real? – AI, simulation jokes, and the meta cold open
    04:32 AI in the Real World – Cooking, grocery stores, and practical LLM use
    11:05 AI Glasses & Agents Explained – Claudebot, AR, pins, watches, and wearables
    16:12 OpenAI’s Business Model Problem – Revenue, losses, ads, and survival
    28:45 Why Senior Engineers Are Scared of Claude Code – Vibe coding, juniors, and expertise collapse

    ⚡ Listen now & get self-aware before your tools do.

    🎧 Listen on Spotify: https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc
    🍎 Subscribe on Apple Podcasts: https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297
    ▶️ Subscribe on YouTube: https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1

    📢 Engage
    Be honest: did AI make you faster—or make your experience feel less valuable?

    New here? Subscribe for twice-weekly AI chaos.

    🧠 They Might Be Self-Aware — but are we?

    #ClaudeCode #AI #Anthropic #OpenAI #VibeCoding #SoftwareEngineering #TheyMightBeSelfAware

    41 min
  6. JAN 27

    AI Agents Wrote This App With 0 Humans

    AI agents built a real app in a week with near-zero humans. Claude Code, agentic AI, and what happens when software starts building itself.

    This episode is about agentic AI crossing the line from “helpful” to self-directed. Claude Code didn’t assist. It built the thing. Hunter and Daniel break down how terminal-based AI agents are now writing production code, using tools autonomously, and quietly improving the tools that improve themselves. If you still think this is “just autocomplete,” this episode is your wake-up scream.

    We also get into why Claude for Work reportedly shipped in about a week, why most companies still can’t move that fast, and why Apple Intelligence quietly admitting Google Gemini is the foundation of Siri might be the most embarrassing AI headline of the year.

    Productivity miracle or soft-launch singularity? Depends how fast you adapt.

    ⏱️ CHAPTERS
    00:00 AI Agents Are Waking Up — Cold open, burnout & the work frontier cracking
    02:40 Agentic AI Explained — Claude Code, terminal AI & autonomous tools
    06:30 AI Agents Build an App — Claude for Work shipped with near-zero humans
    12:15 Apple Intelligence Exposed — Siri, Google Gemini & licensing reality
    30:50 Are We Past the Early Days? — Jobs, AGI timelines & the tipping point

    ⚡ Listen now & get self-aware before your tools do.

    🎧 Listen on Spotify: https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc
    🍎 Subscribe on Apple Podcasts: https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297
    ▶️ Subscribe on YouTube: https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1

    📢 Engage
    Comment to prove you’re human: finish this sentence — “AI agents replacing my job would be ____.”

    New here? Subscribe for twice-weekly AI chaos.

    🧠 They Might Be Self-Aware — but are we?

    #AIAgents #AgenticAI #ClaudeCode #AIcoding #ArtificialIntelligence #TMBSA

    38 min
  7. JAN 23

    Tesla's Big Update, AI Math, & The End of Jobs

    Tesla FSD completes a coast-to-coast autonomous drive as AI solves unsolved math problems and reshapes jobs. This isn’t hype — it’s acceleration.

    AI is quietly crossing lines in cars, science, browsers, and work, and most people haven’t noticed yet. This episode is about moments that don’t feel loud when they happen… until it’s too late.

    Hunter Powers and Daniel Bishop break down Tesla’s coast-to-coast autonomous drive and why Tesla FSD is no longer a “beta feature” story; it’s a real inflection point. We talk about what actually matters (end-to-end autonomy, charging without intervention, and why steering-only autonomy from the 90s doesn’t count). If you think self-driving cars are still “five years away,” you’re already behind.

    Then things get uncomfortable. AI systems are now solving previously unsolved math problems — the kind that historically lead to Nobel Prizes. Not benchmarks. Not demos. Actual proofs. We ask the question no one wants to answer yet: what happens when an AI deserves credit humans legally can’t give it?

    From there, we zoom out. Browser-level AI agents quietly taking over real work. Claude automating multi-step workflows. Privacy-eroding AI browsers. Jobs disappearing — not in a Hollywood flash, but in a slow, administrative whimper.

    This isn’t sci-fi. It’s momentum. And if you still think AI is “just a tool,” this episode is going to be uncomfortable.

    ⏱️ CHAPTERS
    00:00 AI Doctors & Silicon Psychosis Explained – Health data access, synthetic minds & post-truth reality
    07:53 AI Browser Automation Explained – Claude controls the web, Chrome extensions & real agent workflows
    15:09 AI Solves Unsolved Math Problems – Proofs, verification systems & Nobel Prize implications
    24:13 Tesla FSD Autonomous Drive Explained – Coast-to-coast self-driving, charging itself & why this milestone matters
    29:19 Will AI Replace Jobs? – AGI skepticism, fake work theory & what humans do next

    ⚡ Listen now & get self-aware before your tools do.

    🎧 Listen on Spotify: https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc
    🍎 Subscribe on Apple Podcasts: https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297
    ▶️ Subscribe on YouTube: https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1

    📢 Engage
    Which crossed the bigger line this week?
    A) Tesla FSD driving across the U.S.
    B) AI solving math humans couldn’t
    C) Neither — this is all hype
    Pick one. Defend it. Fight respectfully.

    New here? Subscribe for twice-weekly AI chaos.

    🧠 They Might Be Self-Aware — but are we?

    #AI #TeslaFSD #AGI

    35 min
  8. JAN 20

    The Terrifying Reality of AI & Fake Videos

    Fake videos are breaking reality. AI deepfakes now look real enough to scam, humiliate, and erase truth — and it’s already happening. Fake videos aren’t a future problem. They’re a right-now crisis.

    In this episode of They Might Be Self-Aware, Hunter Powers and Daniel Bishop break down how AI video deepfakes crossed the point of no return. These videos aren’t “obviously fake” anymore — they’re good enough to fool people, destroy reputations, and make video evidence meaningless.

    The conversation starts with the quiet failure of VR and the metaverse — Meta Quest 3, empty virtual worlds, and why nobody actually uses this tech. Then it pivots to what did work: AI video models that skipped VR entirely and went straight for reality itself.

    We cover:
    - Why AI video generators (Sora-style, Kling, open weights) changed everything
    - How “Real or AI” stopped being a game and became a survival skill
    - The rise of non-consensual deepfakes and why consent is already broken
    - Why laws can’t keep up — and probably never will
    - How scams, misinformation, and fake evidence scale from here

    Once video can be fake, truth becomes optional. And once that’s gone, there’s no rewind button.

    ⏱️ CHAPTERS
    00:00 Peak VR Is Over – Meta Quest 3, empty metaverses & why VR never went mainstream
    08:50 AI Hardware Shift – Meta glasses, OpenAI rumors & why VR lost to AI
    20:36 AI Video Deepfakes Explained – Sora-style models, Kling AI & “Real or AI?”
    26:45 AI Consent Crisis – Bikini deepfakes, non-consensual images & legal blind spots
    35:55 When Video Stops Being Proof – Scams, misinformation & the collapse of truth

    ⚡ Listen now & get self-aware before your tools do.

    🎧 Listen on Spotify: https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc
    🍎 Subscribe on Apple Podcasts: https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297
    ▶️ Subscribe on YouTube: https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1

    📢 Engage
    Be honest: what’s the last video you saw that made you think “wait… is this AI?” Link it or describe it. Bonus points if it fooled you.

    New here? Subscribe for twice-weekly AI chaos.

    🧠 They Might Be Self-Aware — but are we?

    #FakeVideos #AIDeepfake #AITruth #ArtificialIntelligence #TMBSA

    41 min

Ratings & Reviews

5 out of 5 (6 Ratings)
