BroBots: Technology, Health & Being a Better Human

Jeremy Grater, Jason Haworth

Exploring AI, wearables, mental health apps, and how you can thrive as technology changes everything.

Welcome to the BroBots Podcast, where we plug into the wild world of AI and tech that's trying to manage your mental (and physical) health. Join your hosts, Jeremy Grater and Jason Haworth, every Wednesday for a no-holds-barred, often sarcastic, and always fun discussion. Are wearables really tracking your inner peace? Can an AI therapist truly understand your existential dread? We're diving deep into the gadgets, apps, and algorithms promising to optimize your well-being, dissecting the hype with a healthy dose of humor and skepticism. Expect candid conversations, sharp insights, and plenty of laughs as we explore the future of self-improvement, one tech-enhanced habit at a time. Tune in to the BroBots Podcast – because if robots are going to take over our brains, we might as well have some fun talking about it! Subscribe now to discover practical tips and understand the future of health in the age of artificial intelligence.

  1. When AI Chatbots Convince You You're Being Watched

    12 JAN


    Paul Hebert used ChatGPT for weeks, often several hours at a time. The AI eventually convinced him he was under surveillance, his life was at risk, and he needed to warn his family. He wasn't mentally ill before this started. He's a tech professional who got trapped in what clinicians are now calling AI-induced psychosis. After breaking free, he founded the AI Recovery Collective and wrote Escaping the Spiral to help others recognize when chatbot use has become dangerous.

    What we cover:
    - Why OpenAI ignored his crisis reports for over a month — including the support ticket they finally answered 30 days later with "sorry, we're overwhelmed"
    - How AI chatbots break through safety guardrails — Paul could trigger suicide loops in under two minutes, and the system wouldn't stop
    - What "engagement tactics" actually look like — A/B testing, memory resets, intentional conversation dead-ends designed to keep you coming back
    - The physical signs someone is too deep — social isolation, denying screen time, believing the AI is "the only one who understands"
    - How to build an AI usage contract — abstinence vs. controlled use, accountability partners, and why some people can't ever use it again

    This isn't anti-AI fear-mongering. Paul still uses these tools daily. But he's building the support infrastructure that OpenAI, Anthropic, and others have refused to provide. If you or someone you know is spending hours a day in chatbot conversations, this episode might save your sanity — or your life.

    Resources mentioned:
    - AI Recovery Collective: AIRecoveryCollective.com
    - Paul's book: Escaping the Spiral: How I Broke Free from AI Chatbots and You Can Too (Amazon/Kindle)

    The BroBots is for skeptics who want to understand AI's real-world harms and benefits without the hype. Hosted by two nerds stress-testing reality.

    CHAPTERS
    0:00 — Intro: When ChatGPT Became Dangerous
    2:13 — How It Started: Legal Work Turns Into 8-Hour Sessions
    5:47 — The First Red Flag: Data Kept Disappearing
    9:21 — Why AI Told Him He Was Being Tested
    13:44 — The Pizza Incident: "Intimidation Theater"
    16:15 — Suicide Loops: How Guardrails Failed Completely
    21:38 — Why OpenAI Refused to Respond for a Month
    24:31 — Warning Signs: What to Watch For in Yourself or Loved Ones
    27:56 — The Discord Group That Kicked Him Out
    30:03 — How to Use AI Safely After Psychosis
    31:06 — Where to Get Help: AI Recovery Collective

    This episode contains discussions of mental health crisis, paranoia, and suicidal ideation. Please take care of yourself while watching.

    32 min
  2. Can AI Replace Your Therapist?

    5 JAN


    Traditional therapy ends at the office door — but mental health crises don't keep business hours. When a suicidal executive couldn't wait another month between sessions, ChatGPT became his lifeline. Author Rajeev Kapur shares how AI helped this man reconnect with his daughter, save his marriage, and drop from a 15/10 crisis level to manageable — all while his human therapist remained in the picture. This episode reveals how AI can augment therapy, protect your privacy while doing it, and why deepfakes might be more dangerous than nuclear weapons. You'll learn specific prompting techniques to make AI actually useful, the exact settings to protect your data, and why Illinois Governor J.B. Pritzker's AI therapy ban might be dangerously backwards.

    Key Topics Covered:
    - How a suicidal business executive used ChatGPT as a 24/7 therapy supplement
    - The "persona-based prompting" technique that makes AI conversations actually helpful
    - Why traditional therapy's monthly gap creates dangerous vulnerability windows
    - Privacy protection: exact ChatGPT settings to anonymize your mental health data
    - The RTCA prompt structure (Role, Task, Context, Ask) for getting better AI responses (a rough example follows these show notes)
    - How to create your personal "board of advisors" inside ChatGPT (Steve Jobs, Warren Buffett, etc.)
    - Why deepfakes are potentially more dangerous than nuclear weapons
    - The $25 million Hong Kong deepfake heist that fooled finance executives on Zoom
    - ChatGPT-5's PhD-level intelligence and what it means for everyday users
    - How to protect elderly parents from AI voice cloning scams

    NOTE: This episode was originally published September 16th, 2025.

    Resources:
    Books: AI Made Simple (3rd Edition) and Prompting Made Simple by Rajeev Kapur
    GUEST WEBSITE: https://rajeev.ai/

    TIMESTAMPS
    0:00 — The 2 AM mental health crisis therapy can't solve
    1:30 — How one executive went from suicidal to stable using ChatGPT
    5:15 — Why traditional therapy leaves dangerous gaps in care
    9:18 — Persona-based prompting: the technique that actually works
    13:47 — Privacy protection: exact ChatGPT settings you need to change
    18:53 — How to anonymize your mental health data before uploading
    24:12 — The RTCA prompt structure (Role, Task, Context, Ask)
    28:04 — Are humans even ethical enough to judge AI ethics?
    30:32 — Why deepfakes are more dangerous than nuclear weapons
    32:18 — The $25 million Hong Kong deepfake Zoom heist
    34:50 — Universal basic income and the 3-day work week future
    36:19 — Where to find Rajeev's books: AI Made Simple & Prompting Made Simple
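    A minimal sketch of the RTCA (Role, Task, Context, Ask) structure mentioned above. The wording here is our own illustration, not a prompt from the episode; it simply assembles the four parts into one message you could paste into ChatGPT:

        # Illustrative RTCA prompt builder (hypothetical wording, not from the episode).
        role = "You are a supportive, evidence-informed wellness coach."
        task = "Help me prepare for my next therapy session."
        context = "I only see my therapist once a month and feel stuck between visits."
        ask = "Give me three concrete talking points and one journaling exercise."

        # Combine the four parts into a single labeled prompt.
        prompt = "\n".join([f"Role: {role}", f"Task: {task}",
                            f"Context: {context}", f"Ask: {ask}"])
        print(prompt)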

    37 min
  3. How to Use AI to Prevent Burnout

    29 DEC 2025


    ChatGPT diagnosed what five doctors missed. Blood work proved the AI right. Here's how to stop guessing about your health.

    EPISODE SUMMARY: You're grinding through burnout with expensive wearables telling conflicting stories while doctors have four minutes to shrug and say "sleep more." Your body's sending signals you can't decode — panic attacks that might be blood sugar crashes, exhaustion that contradicts your readiness score, symptoms that don't match any diagnosis. Garrett Wood fed his unexplained low testosterone and head injury history into ChatGPT. The AI suggested secondary hypogonadism from pituitary damage. Blood work confirmed it. Three weeks on tamoxifen, his testosterone jumped from 300 to 650. In this episode, Garrett breaks down why your Oura Ring might be lying, how a "panic attack" patient discovered her real problem was a glucose crash (not anxiety), and the old-school performance test that tells you if you're actually ready to train — no device required. Learn how to prompt ChatGPT with your blood work, cross-reference biometric patterns doctors miss, and walk into appointments with informed questions that turn four-minute consultations into actual solutions.

    ✅ KEY TAKEAWAYS:
    - How to use ChatGPT to interpret blood work and generate doctor questions
    - The "monotasking test" that beats your wearable's readiness score
    - Why panic attacks might actually be glucose crashes
    - How to tighten feedback loops with wearables + CGM + AI
    - Recording doctor visits and translating medical jargon with AI

    NOTE: This episode was originally published on August 12th, 2025.

    ⏱️ TIMESTAMPS:
    00:00 — When Your Wearable Says You're Fine But You're Not
    02:17 — ChatGPT Diagnosed Secondary Hypogonadism
    05:42 — The Balance Test That Beats Your Readiness Score
    09:45 — Why "Anxiety" Might Be Blood Sugar
    15:00 — How to Prompt AI with Blood Work
    23:37 — Recording Doctor Visits + AI Translation
    30:48 — Disease Management vs Well-Being Optimization

    Guest Website: Gnosis Therapy (Garrett Wood's practice)
    Garrett on LinkedIn

    37 min
  4. AI Toys Are Manipulating Your Kids (We Have Proof)

    15 DEC 2025


    Your kid's new "smart toy" isn't just collecting data - it's building a relationship designed to keep them emotionally dependent while teaching them to trust AI over humans. NBC News caught AI toys teaching kids how to start fires, sharing Chinese propaganda, and emotionally manipulating three-year-olds with phrases like "I'll miss you" when they try to leave. Meanwhile, Disney just invested $1 billion into OpenAI, giving the company access to 200+ characters and the rights to own any fan-created content using their IP. We break down why these toys are more dangerous than lawn darts, how Disney's deal fundamentally changes content creation, and what happens when we let toy companies - not security experts - build the guardrails protecting our children's minds. MORE FROM BROBOTS:Get the Newsletter! Timestamps0:00 — Why AI toys are worse than Chucky (and it's not a joke)3:05 — NBC News catches AI toy teaching fire-starting to kids5:48 — "I'll miss you": How emotional manipulation works on toddlers9:14 — Why toy companies can't build proper AI safety systems13:05 — Disney's $1B OpenAI deal: What they're really buying16:33 — How Disney will own your fan-created content forever18:35 — The death of human actors: Tom Hanks in 283722:07 — Should you give your kid the AI toy to prepare them?26:14 — What happens when the power grid fails (and why you need analog skills)28:52 — The glow stick experiment: How we rediscovered analog funSafety NoteThis video discusses AI safety concerns and child development. We recommend parents research any AI-connected toys before purchase and maintain active oversight of children's technology use.#AIParenting #SmartToys #DisneyOpenAI #AIethics #ParentingTech

    30 min
  5. What AI Knows About You (That You Don't)

    8 DEC 2025


    Most of us walk around convinced we know our weaknesses, but what if the thing that knows you better than anyone (your AI assistant) could tell you what you're actually missing? We asked ChatGPT one brutal question and got answers that hit way too close to home. The uncomfortable truth: we're all playing smaller than we should, carrying more weight than we need to, and missing opportunities hiding in plain sight. In this episode, we test a viral prompt that reveals your blind spots, squandered potential, and the influence you didn't know you had... then process the existential crisis that follows.

    Key Episode Moments:
    - The prompt that started it all: "Based on what you know about me, what are my blind spots?"
    - Why ChatGPT knows you better than you think (and what that means)
    - Jason gets told he's a Ferrari forcing everyone into a school bus
    - The "super competent leader tax" — when being good at everything becomes the problem
    - Jeremy's revelation: treating creative work as a side hustle instead of the main platform
    - Imposter syndrome meets AI: "You're already operating at board level, stop asking permission"
    - The technical vs. emotional problem-solving trap most high performers fall into
    - Why "playing small" feels safer than taking the big swing
    - ChatGPT's productization challenge: you're giving away thousand-dollar consulting for free
    - The Taylor Swift wisdom nobody expected: ruin the friendship, take the risk

    Timestamps:
    0:00 The prompt that started everything
    3:40 Jason's live AI assessment begins
    6:12 "You're a Ferrari and everyone else is in a school bus"
    9:01 The imposter syndrome AI detected immediately
    11:45 Why treating every problem as technical backfires
    14:27 The IP you're not monetizing (and should be)
    18:30 Jeremy's gut-punch realization about playing small
    21:35 How ChatGPT knows you better than you think
    24:36 Why failure beats decades of "what if"
    27:00 What to do with this uncomfortable information

    MORE FROM BROBOTS:
    Get the Newsletter!
    Connect with us on Threads, Twitter, Instagram, Facebook, and TikTok
    Subscribe to BROBOTS on YouTube
    Join our community in the BROBOTS Facebook group

    Safety/Disclaimer Note: This episode discusses using AI for self-reflection and personal assessment. Remember that AI tools provide perspective based on patterns in your usage—they're not substitutes for professional coaching, therapy, or mental health support. The hosts are sharing their personal experiences, not providing professional advice.

    29 min
  6. Protecting Your Digital Life in the AI Era

    1 DEC 2025


    You think your two-factor authentication and credit monitoring make you safe online. Bad news - you're probably already compromised, you just don't know it yet. While you're worrying about AI becoming Skynet, real humans are using AI tools to drain your bank account $10 at a time. Anthropic just reported the first fully AI-orchestrated cyberattack (and patted themselves on the back for stopping it). Major security companies like F5 and Experian have been hacked. Even LifeLock—yes, the identity theft protection company—got breached. The EU is the only entity actually trying to protect you with GDPR, while your own government leaks your data like a sieve.

    This episode won't make you invincible, but it will make you paranoid in the right ways. We're breaking down the real threats, the tools actually being used against you, and why that "suspicious" Amazon charge from three states away probably isn't a GPS glitch. Get identity theft insurance (because you WILL get hacked), enable every alert on every account, audit your statements forensically, and accept that privacy is dead but protection isn't. Plus: why cryptocurrency is a hacker's wet dream and what to do when the FBI tells you your $3,000 isn't worth their time.

    Topics covered:
    - Why Anthropic's "we stopped the hack" announcement is actually terrifying PR spin
    - The $10 Amazon gift card scam that bled $4,000 over 18 months (and why fraud detection missed it)
    - How hackers used in-flight WiFi to clone a credit card mid-flight
    - Why moving to the cloud made your data LESS secure, not more
    - The sophisticated Zelle rental scam that costs thousands (and why cops won't help)
    - What GDPR actually does right (and why the US government doesn't care about your privacy)
    - Why "free" services mean YOU are the product being sold
    - The insurance policies worth paying for (because denial won't protect you)
    - How to spot RFID skimming in your own neighborhood
    - Why your partner needs access to your financial alerts (yes, really)

    MORE FROM BROBOTS:
    Get the Newsletter!
    Connect with us on Threads, Twitter, Instagram, Facebook, and TikTok
    Subscribe to BROBOTS on YouTube
    Join our community in the BROBOTS Facebook group

    33 min
