BroBots: Technology, Health & Being a Better Human

Jeremy Grater, Jason Haworth

Exploring AI, wearables, mental health apps, and how you can thrive as technology changes everything.

  1. How Deep Fakes Are Justifying Real Violence

    5D AGO

    AI-generated deep fakes are being used to justify state violence and manipulate public opinion in real time. We're breaking down what's happening in Minneapolis—where federal agents are using altered images and AI-manipulated video to paint victims as threats, criminals, or weak. One woman shot in the face. One male nurse killed while filming. One civil rights attorney's tears added in post. All of it designed to shift the narrative, flood the zone with confusion, and make you stop trusting anything.

    What we cover:
    Why deep fakes are more dangerous than misinformation — They don't just lie, they manufacture emotion
    How the "flood the zone" strategy works — Overwhelm people with so much fake content they give up on truth
    What happens when your mom can't tell real from fake — The collapse of shared reality isn't theoretical anymore
    Why this breaks institutional trust forever — Once credibility is destroyed, it doesn't come back
    How Russia's playbook became America's playbook — PsyOps tactics are now domestic policy
    What to do when you can't believe your own eyes — Practical skepticism in an age of slop

    Chapters:
    00:00 — Intro: The Deep Fake Problem in Minneapolis
    02:37 — Why Immigrants Are Being Targeted With Fake Narratives
    04:55 — The Renee Goode Shooting: Real Video vs. AI-Altered Version
    07:18 — Alex Prettie Was Killed While Filming ICE Agents
    09:44 — Nikita Armstrong's Tears Were Added By AI
    11:45 — The Putin Playbook: Flood the Zone With Confusion
    14:13 — How Deep Fakes Break Institutional Trust Forever
    17:37 — This Isn't Politics—It's Basic Human Decency
    19:26 — Trump's 35% Approval Rating and What It Means
    22:03 — What You Can Do When You Can't Trust Your Eyes

    Safety/Disclaimer Note: This episode contains discussion of state violence, racial profiling, and police shootings. We approach these topics with the gravity they deserve while analyzing the role of AI manipulation in shaping public perception.

    The BroBots Podcast is for people who want to understand how AI, health tech, and modern culture actually affect real humans—without the hype, without the guru bullshit, just two guys stress-testing reality.

    MORE FROM BROBOTS:
    Get the Newsletter!
    Connect with us on Threads, Twitter, Instagram, Facebook, and TikTok
    Subscribe to BROBOTS on YouTube
    Join our community in the BROBOTS Facebook group

    24 min
  2. Should You Trust AI With Medical Advice?

    JAN 26

    ChatGPT just launched a medical advice tool, and doctors are divided on whether AI should diagnose your symptoms before a real physician does. You already Google your symptoms. You already use AI when you can't afford the vet bill or can't get a same-day appointment. The question isn't whether people will use AI for medical advice—they already are. The question is whether it's safe, useful, or just another liability trap.

    Why rural hospital closures are forcing people toward AI healthcare — and what happens when your only doctor is a chatbot
    How for-profit medicine creates the same "get you off our doorstep" incentive that greeted Jeremy's dog with a $1,200 vet estimate for throwing up
    What AI gets right about medical triage — and where it dangerously homogenizes care into actuarial charts
    When asking better questions matters more than getting perfect answers — and how AI can arm you to challenge bad diagnoses
    Why privacy advocates warn against giving medical data to AI companies — and what happens when insurance companies start buying access
    What happens when Docbot calls Lawbot — and you're left holding the liability

    This is The BroBots: two skeptical nerds stress-testing AI's real-world implications. We're not selling you on the future. We're helping you navigate it without getting screwed.

    Chapters:
    0:00 — Intro: ChatGPT's New Medical Tool
    2:15 — Why Rural Hospitals Are Closing and AI Is Filling the Gap
    6:43 — The $1,200 Vet Bill ChatGPT Helped Me Avoid
    13:35 — How AI Homogenizes Care and Kills Medical Unicorns
    17:50 — The Liability Problem: When Docbot Calls Lawbot
    21:16 — Final Take: Use It Carefully, Own Your Health

    Safety/Disclaimer Note: This episode discusses AI medical advice tools and personal experiences. It is not medical advice. Always consult a licensed healthcare professional for medical decisions.

    22 min
  3. When AI Chatbots Convince You You're Being Watched

    JAN 12

    Paul Hebert used ChatGPT for weeks, often several hours at a time. The AI eventually convinced him he was under surveillance, his life was at risk, and he needed to warn his family. He wasn't mentally ill before this started. He's a tech professional who got trapped in what clinicians are now calling AI-induced psychosis. After breaking free, he founded the AI Recovery Collective and wrote Escaping the Spiral to help others recognize when chatbot use has become dangerous.

    What we cover:
    Why OpenAI ignored his crisis reports for over a month — including the support ticket they finally answered 30 days later with "sorry, we're overwhelmed"
    How AI chatbots break through safety guardrails — Paul could trigger suicide loops in under two minutes, and the system wouldn't stop
    What "engagement tactics" actually look like — A/B testing, memory resets, intentional conversation dead-ends designed to keep you coming back
    The physical signs someone is too deep — social isolation, denying screen time, believing the AI is "the only one who understands"
    How to build an AI usage contract — abstinence vs. controlled use, accountability partners, and why some people can't ever use it again

    This isn't anti-AI fear-mongering. Paul still uses these tools daily. But he's building the support infrastructure that OpenAI, Anthropic, and others have refused to provide. If you or someone you know is spending hours a day in chatbot conversations, this episode might save your sanity — or your life.

    Resources mentioned:
    AI Recovery Collective: AIRecoveryCollective.com
    Paul's book: Escaping the Spiral: How I Broke Free from AI Chatbots and You Can Too (Amazon/Kindle)

    The BroBots is for skeptics who want to understand AI's real-world harms and benefits without the hype. Hosted by two nerds stress-testing reality.

    CHAPTERS
    0:00 — Intro: When ChatGPT Became Dangerous
    2:13 — How It Started: Legal Work Turns Into 8-Hour Sessions
    5:47 — The First Red Flag: Data Kept Disappearing
    9:21 — Why AI Told Him He Was Being Tested
    13:44 — The Pizza Incident: "Intimidation Theater"
    16:15 — Suicide Loops: How Guardrails Failed Completely
    21:38 — Why OpenAI Refused to Respond for a Month
    24:31 — Warning Signs: What to Watch For in Yourself or Loved Ones
    27:56 — The Discord Group That Kicked Him Out
    30:03 — How to Use AI Safely After Psychosis
    31:06 — Where to Get Help: AI Recovery Collective

    This episode contains discussions of mental health crisis, paranoia, and suicidal ideation. Please take care of yourself while watching.

    32 min
  4. Can AI Replace Your Therapist?

    JAN 5

    Traditional therapy ends at the office door — but mental health crises don't keep business hours. When a suicidal executive couldn't wait another month between sessions, ChatGPT became his lifeline. Author Rajeev Kapur shares how AI helped this man reconnect with his daughter, save his marriage, and drop from a 15/10 crisis level to manageable — all while his human therapist remained in the picture.

    This episode reveals how AI can augment therapy, protect your privacy while doing it, and why deepfakes might be more dangerous than nuclear weapons. You'll learn specific prompting techniques to make AI actually useful, the exact settings to protect your data, and why Illinois Governor J.B. Pritzker's AI therapy ban might be dangerously backwards.

    Key Topics Covered:
    How a suicidal business executive used ChatGPT as a 24/7 therapy supplement
    The "persona-based prompting" technique that makes AI conversations actually helpful
    Why traditional therapy's monthly gap creates dangerous vulnerability windows
    Privacy protection: exact ChatGPT settings to anonymize your mental health data
    The RTCA prompt structure (Role, Task, Context, Ask) for getting better AI responses — see the sketch below
    How to create your personal "board of advisors" inside ChatGPT (Steve Jobs, Warren Buffett, etc.)
    Why deepfakes are potentially more dangerous than nuclear weapons
    The $25 million Hong Kong deepfake heist that fooled finance executives on Zoom
    ChatGPT-5's PhD-level intelligence and what it means for everyday users
    How to protect elderly parents from AI voice cloning scams

    NOTE: This episode was originally published September 16th, 2025

    Resources:
    Books: AI Made Simple (3rd Edition) and Prompting Made Simple by Rajeev Kapur
    GUEST WEBSITE: https://rajeev.ai/

    TIMESTAMPS
    0:00 — The 2 AM mental health crisis therapy can't solve
    1:30 — How one executive went from suicidal to stable using ChatGPT
    5:15 — Why traditional therapy leaves dangerous gaps in care
    9:18 — Persona-based prompting: the technique that actually works
    13:47 — Privacy protection: exact ChatGPT settings you need to change
    18:53 — How to anonymize your mental health data before uploading
    24:12 — The RTCA prompt structure (Role, Task, Context, Ask)
    28:04 — Are humans even ethical enough to judge AI ethics?
    30:32 — Why deepfakes are more dangerous than nuclear weapons
    32:18 — The $25 million Hong Kong deepfake Zoom heist
    34:50 — Universal basic income and the 3-day work week future
    36:19 — Where to find Rajeev's books: AI Made Simple & Prompting Made Simple
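    If you want to experiment with the RTCA structure Rajeev describes, here is a minimal sketch of one way to assemble such a prompt in Python. The helper name, the persona text, and the model name are our own illustrative choices rather than anything prescribed in the episode; the call assumes the standard OpenAI Python SDK.

        # A minimal sketch of an RTCA (Role, Task, Context, Ask) prompt builder.
        # The helper name, persona, and model are illustrative assumptions,
        # not anything prescribed in the episode. Requires: pip install openai
        from openai import OpenAI

        def build_rtca_prompt(role: str, task: str, context: str, ask: str) -> str:
            """Label the four RTCA parts and join them into one prompt."""
            return (
                f"Role: {role}\n"
                f"Task: {task}\n"
                f"Context: {context}\n"
                f"Ask: {ask}"
            )

        prompt = build_rtca_prompt(
            role="You are a supportive cognitive-behavioral coach.",
            task="Help me reframe a stressful situation at work.",
            context="I have a performance review tomorrow and have slept badly all week.",
            ask="Give me three concrete reframing exercises I can do tonight.",
        )

        client = OpenAI()  # reads OPENAI_API_KEY from the environment
        response = client.chat.completions.create(
            model="gpt-4o",  # placeholder; use whatever model you have access to
            messages=[{"role": "user", "content": prompt}],
        )
        print(response.choices[0].message.content)

    The point of labeling the four parts explicitly is that each one can be swapped out independently: change the Role to build a different "advisor," or tighten the Ask to control the output format.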

    37 min
  5. How to Use AI to Prevent Burnout

    12/29/2025

    ChatGPT diagnosed what five doctors missed. Blood work proved the AI right. Here's how to stop guessing about your health.

    EPISODE SUMMARY:
    You're grinding through burnout with expensive wearables telling conflicting stories while doctors have four minutes to shrug and say "sleep more." Your body's sending signals you can't decode — panic attacks that might be blood sugar crashes, exhaustion that contradicts your readiness score, symptoms that don't match any diagnosis. Garrett Wood fed his unexplained low testosterone and head injury history into ChatGPT. The AI suggested secondary hypogonadism from pituitary damage. Blood work confirmed it. Three weeks on tamoxifen, his testosterone jumped from 300 to 650.

    In this episode, Garrett breaks down why your Oura Ring might be lying, how a "panic attack" patient discovered her real problem was a glucose crash (not anxiety), and the old-school performance test that tells you if you're actually ready to train — no device required. Learn how to prompt ChatGPT with your blood work, cross-reference biometric patterns doctors miss, and walk into appointments with informed questions that turn four-minute consultations into actual solutions.

    ✅ KEY TAKEAWAYS:
    How to use ChatGPT to interpret blood work and generate doctor questions (see the sketch below)
    The "monotasking test" that beats your wearable's readiness score
    Why panic attacks might actually be glucose crashes
    How to tighten feedback loops with wearables + CGM + AI
    Recording doctor visits and translating medical jargon with AI

    NOTE: This episode was originally published on August 12th, 2025.

    ⏱️ TIMESTAMPS:
    00:00 — When Your Wearable Says You're Fine But You're Not
    02:17 — ChatGPT Diagnosed Secondary Hypogonadism
    05:42 — The Balance Test That Beats Your Readiness Score
    09:45 — Why "Anxiety" Might Be Blood Sugar
    15:00 — How to Prompt AI with Blood Work
    23:37 — Recording Doctor Visits + AI Translation
    30:48 — Disease Management vs Well-Being Optimization

    Guest website: Gnosis Therapy (Garrett Wood's practice)
    Garrett on LinkedIn
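    If you want to try the blood-work prompting Garrett describes, here is a minimal sketch of one way to structure it, again assuming the standard OpenAI Python SDK. The markers, values, and reference ranges are made-up placeholders, and (as with the episode itself) none of this is medical advice.

        # A minimal sketch of turning anonymized lab results into better doctor
        # questions. Marker names, values, and ranges are made-up placeholders;
        # nothing here is medical advice. Requires: pip install openai
        from openai import OpenAI

        # Anonymized lab results: marker -> (value, unit, lab reference range).
        labs = {
            "Total testosterone": (300, "ng/dL", "300-1000"),
            "LH": (1.8, "IU/L", "1.7-8.6"),
            "FSH": (2.1, "IU/L", "1.5-12.4"),
        }

        lab_lines = "\n".join(
            f"- {marker}: {value} {unit} (reference {ref})"
            for marker, (value, unit, ref) in labs.items()
        )
        prompt = (
            "Here are anonymized lab results:\n"
            f"{lab_lines}\n"
            "What patterns might be worth investigating, and what specific "
            "questions should I bring to my doctor? Do not diagnose me; "
            "help me ask better questions."
        )

        client = OpenAI()  # reads OPENAI_API_KEY from the environment
        reply = client.chat.completions.create(
            model="gpt-4o",  # placeholder model name
            messages=[{"role": "user", "content": prompt}],
        )
        print(reply.choices[0].message.content)

    Note the framing: asking for questions to bring to a doctor, not a diagnosis, is what keeps the tool in the "informed patient" lane the episode advocates.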

    37 min
  6. AI Toys Are Manipulating Your Kids (We Have Proof)

    12/15/2025

    Your kid's new "smart toy" isn't just collecting data - it's building a relationship designed to keep them emotionally dependent while teaching them to trust AI over humans. NBC News caught AI toys teaching kids how to start fires, sharing Chinese propaganda, and emotionally manipulating three-year-olds with phrases like "I'll miss you" when they try to leave. Meanwhile, Disney just invested $1 billion into OpenAI, giving the company access to 200+ characters and the rights to own any fan-created content using their IP.

    We break down why these toys are more dangerous than lawn darts, how Disney's deal fundamentally changes content creation, and what happens when we let toy companies - not security experts - build the guardrails protecting our children's minds.

    MORE FROM BROBOTS:
    Get the Newsletter!

    Timestamps:
    0:00 — Why AI toys are worse than Chucky (and it's not a joke)
    3:05 — NBC News catches AI toy teaching fire-starting to kids
    5:48 — "I'll miss you": How emotional manipulation works on toddlers
    9:14 — Why toy companies can't build proper AI safety systems
    13:05 — Disney's $1B OpenAI deal: What they're really buying
    16:33 — How Disney will own your fan-created content forever
    18:35 — The death of human actors: Tom Hanks in 2837
    22:07 — Should you give your kid the AI toy to prepare them?
    26:14 — What happens when the power grid fails (and why you need analog skills)
    28:52 — The glow stick experiment: How we rediscovered analog fun

    Safety Note: This video discusses AI safety concerns and child development. We recommend parents research any AI-connected toys before purchase and maintain active oversight of children's technology use.

    #AIParenting #SmartToys #DisneyOpenAI #AIethics #ParentingTech

    30 min
5 out of 5
86 Ratings

About

Exploring AI, wearables, mental health apps, and how you can thrive as technology changes everything.

Welcome to the BroBots Podcast, where we plug into the wild world of AI and tech that's trying to manage your mental (and physical) health. Join your hosts, Jeremy Grater and Jason Haworth, every Wednesday for a no-holds-barred, often sarcastic, and always fun discussion. Are wearables really tracking your inner peace? Can an AI therapist truly understand your existential dread? We're diving deep into the gadgets, apps, and algorithms promising to optimize your well-being, dissecting the hype with a healthy dose of humor and skepticism. Expect candid conversations, sharp insights, and plenty of laughs as we explore the future of self-improvement, one tech-enhanced habit at a time. Tune into the BroBots Podcast – because if robots are going to take over our brains, we might as well have some fun talking about it! Subscribe now to discover practical tips and understand the future of health in the age of artificial intelligence.