BroBots: Technology, Health & Being a Better Human

Jeremy Grater, Jason Haworth

Exploring AI, wearables, mental health apps, and how you can thrive as technology changes everything. Welcome to the Brobots Podcast, where we plug into the wild world of AI and tech that's trying to manage your mental (and physical) health. Join your hosts, Jeremy Grater and Jason Haworth, every Wednesday for a no-holds-barred, often sarcastic, and always fun discussion. Are wearables really tracking your inner peace? Can an AI therapist truly understand your existential dread? We're diving deep into the gadgets, apps, and algorithms promising to optimize your well-being, dissecting the hype with a healthy dose of humor and skepticism. Expect candid conversations, sharp insights, and plenty of laughs as we explore the future of self-improvement, one tech-enhanced habit at a time. Tune into the Brobots Podcast – because if robots are going to take over our brains, we might as well have some fun talking about it! Subscribe now to discover practical tips and understand the future of health in the age of artificial intelligence.

  1. Can an AI Tool Actually Break Your Doom Scrolling Habit?

    1D AGO

    Can an AI Tool Actually Break Your Doom Scrolling Habit?

    If you’re reading this on your phone while avoiding something else, congratulations — you are the product. This episode started as a conversation about No Scroll, an AI tool that promises to filter your social media feed so you only see the good stuff. It turned into something more honest: a reckoning with why these platforms exist, why every fix we try doesn’t work, and whether AI tools — including No Scroll, including ChatGPT, including everything we’re told will save us — are running the same playbook Facebook ran in 2009. Jason and Jeremy don’t have a clean answer. But they have a really good metaphor involving methadone and nicotine patches.

    Key Moments
    00:00 — No Scroll reviewed: AI that doom scrolls so you don’t have to
    01:28 — Twitter as a cesspool with gold nuggets: Jason’s defense of the tool
    03:10 — Jeremy’s alcoholic analogy: why paying a robot to drink your booze isn’t sobriety
    04:07 — The nicotine patch theory: harm reduction vs. actual behavior change
    07:01 — Enshittification: the cycle that turns every useful platform into a garbage pile
    08:32 — Jason’s internet history lesson: from ARPANET to walled gardens to AI
    11:20 — How AI companies are repeating the Facebook model: hook, rely, monetize
    14:50 — ‘You are the product’ — and you’re also a sucker for believing it’s changed
    17:16 — Jeremy’s prediction: AI is going to make the internet boring and we’ll still watch it
    25:54 — AI productivity paradox: Jeremy is more efficient than ever, companies are flat
    27:00 — What people actually do with saved time (spoiler: not more work)
    27:58 — The 10/90 rule: 10% of people do 90% of the work, AI or not

    29 min
  2. Why AI Propaganda Works—and How to Resist It

    APR 20

    Why AI Propaganda Works—and How to Resist It

    Iran has a 10-person animation team making Lego-style propaganda videos with hip hop beats that are going viral — and Jeremy, who considers himself reasonably good at detecting BS online, almost shared one before he caught himself. In this episode, Jeremy and Jason dissect how AI-powered slopaganda works: why it's engineered to exploit emotional familiarity, why YouTube is selectively banning it while leaving comparably political domestic content untouched, and what it means when even skeptical, media-literate adults are one tap away from becoming unwilling distribution nodes. If you've ever watched something that felt like news but moved like entertainment and had a nagging feeling you were being played — this conversation names what happened.

    Key Moments
    00:00 — Jeremy discovers Iranian Lego propaganda videos and almost shares them before catching himself
    01:30 — Jason confirms he's seen them: why YouTube's ban is inconsistent and what it actually signals
    02:42 — The 'slopaganda wars': how the format compresses political narrative into an irresistible two-minute package
    05:12 — The Daily Show comparison: why source legitimacy changes how propaganda lands, not just the content
    07:16 — How Lego nostalgia and great music are doing the persuasion work before the message even registers
    08:12 — YouTube's stated reason vs. the actual reason: 'spam and scams' as a cover for political compliance
    13:33 — Jason on Netanyahu, Epstein files, and why the videos' specific claims are getting suppressed
    15:25 — YouTube as a business making political bets, not a neutral content moderator
    16:21 — Jeremy on Canada's social media bans for minors — and why this episode made him understand the urgency
    19:01 — The media literacy takeaway: what to ask yourself before you hit share

    17 min
  3. AI Just Built a Cyberweapon. Is Anyone Ready?

    APR 13

    AI Just Built a Cyberweapon. Is Anyone Ready?

    Anthropic's new Mythos model didn't just get better at writing code — it got better at breaking it. In an hour, an AI mapped decades of hidden vulnerabilities across live systems. In four hours, a supply chain attack silently exfiltrated 500,000 credentials and compromised 20,000 repositories. The question isn't whether this is alarming. It's whether the companies and governments responsible for protecting critical infrastructure — water, power, gas — are anywhere close to ready. On this episode of The BroBots, Jeremy and Jason work through what Anthropic's internal memo actually said, what a cyberweapon-grade AI changes about the attack surface, and why Jason thinks the survivalists have been right all along.

    Key Moments
    00:00 — Anthropic's Mythos: what the internal memo actually said and why it's different
    01:38 — The LiteLLM supply chain attack: how 500,000 credentials were stolen in 4 hours
    04:27 — Zero-day attacks explained: why signature-based detection can't stop what it hasn't seen
    06:44 — Mythos vs. prior models: from 60s to 77–78% effectiveness — what that jump means
    09:34 — Jeremy tries to find the optimism: Glasswing, the $100M security head start
    11:06 — The real threat: why utilities and infrastructure are the soft targets
    13:21 — Regulation vs. arms race: should billionaire AI companies have a leash?
    15:13 — The nuclear analogy: what a global AI treaty would actually require
    17:09 — 'Easy mode': the counterargument that Mythos's test conditions were unrealistic
    20:22 — Jason's actual survival advice: fire, water, neighbors, and a CRT in the attic

    Follow Brobots: www.brobots.me/follow

    26 min
  4. Why AI Won't Just Take Your Job — It'll Take Your Boss Too

    APR 6

    Why AI Won't Just Take Your Job — It'll Take Your Boss Too

    Fifteen percent of workers say they'd be fine with an AI boss. Meanwhile, thirty percent of March's sixty thousand US layoffs are being blamed directly on AI — and most of those jobs were in tech, the sector that built the tools doing the replacing. Jeremy and Jason sit with the uncomfortable logic of where this all leads: a capitalism that's optimizing so hard for efficiency that it's burning the workforce it depends on. No guests, no protocol. Just two guys who've been around long enough to remember when this job was supposed to be a career, and who aren't sure 'adapt' is the answer anymore.

    Get the Newsletter!

    Key Moments
    00:00 — The AI boss survey: 15% say they'd accept a robot manager — and why that number reveals more about human managers than AI
    02:41 — Why 'the boss function' doesn't feel fully human to most employees anyway
    04:01 — Jason's case that employers are trying to replace everyone, not just management
    05:31 — The outsourcing pattern: from Asia to AI — it's the same playbook, accelerated
    09:39 — The 60,000 March layoffs: 18,000 attributed to AI, mostly in tech — the people who built the tools
    11:01 — Silent quitting, AI monitoring, and how the three-month detection window just collapsed
    12:28 — The signal-to-noise problem: collective apathy and why people can't find the action step
    13:37 — Jason's reframe: the system isn't against you. It just doesn't see you as a threat anymore.
    16:52 — The generational split: why kids who grew up through 9/11, COVID, and two financial crises don't flinch at gig economy chaos
    18:47 — Anthropic's weapons refusal and the autonomous killing machine pipeline: from digital infrastructure to meat space
    21:17 — Jeremy's optimism thread — and why Jason thinks we keep handing wiffle ball bats to toddlers

    28 min
  5. How AI Can See Heart Disease Coming Before It Kills You

    MAR 16

    How AI Can See Heart Disease Coming Before It Kills You

    Heart disease kills one person every 40 seconds. That number hasn’t changed in 30 years. Dr. John Osborne, a preventive cardiologist with two doctorates and 29 years in practice, has spent his career on a single question: why do we screen for cancers that kill a few percent of us and do nothing for the disease that kills 40%? In this episode, Jeremy and Jason sit down with Dr. Osborne to get the real story on cardiac CT with AI — the imaging technology that can detect, quantify, and track arterial plaque at sub-millimeter resolution, years before symptoms appear. If you track your bloodwork, wear a fitness device, or consider yourself health-forward — this is the conversation that fills the gap nobody warned you about.

    Guest Link: https://clearcardio.com/

    Key Moments:
    00:00 — Dr. Osborne’s case for preventive cardiology: why heart disease is the most under-screened killer
    02:43 — How cardiac CT evolved from "iPhone 0.5" to the 2026-era AI-powered tool he uses today
    05:35 — Why he gave up stress tests and heart caths in 2005 and never looked back
    08:16 — What AI actually adds: seeing and quantifying plaque invisible to the human eye, down to 0.1 cubic millimeters
    10:13 — When insurance pays for cardiac CT — and when it doesn’t (the preventive gray zone)
    14:50 — The “cardiac colonoscopy” concept: the case for screening before symptoms, not after
    18:11 — Coronary artery calcium score: the accessible $100 starting point, and what it can and can’t tell you
    31:54 — Lifestyle essentials: the 50% of risk that’s modifiable regardless of genetics
    35:00 — Family history decoded: why your sibling’s heart history matters more than your parents’
    36:12 — Nicotine myth-busting: Dr. Osborne on the "health guru" nicotine fad and why he thinks it’s dangerous
    38:05 — Supplements under scrutiny: nattokinase, fish oil, red yeast rice — what the actual RCT data says

    47 min
  6. The Real Risk of Trusting AI With Your Health Decisions

    MAR 9

    The Real Risk of Trusting AI With Your Health Decisions

    The internet taught everyone to self-diagnose. AI made it faster, more persuasive, and significantly more dangerous. Dr. Ajit Barron-Dhillon — ER physician, military veteran, and someone who has watched patients demand MRIs for minor complaints because 'the internet said so' — joins Jason to talk about what AI-assisted health research actually does to people who think they're being smart about it. The conversation covers confirmation bias in clinical settings, supplement stacks optimized by ChatGPT, the cheerleader problem in medical AI, and why being above-average intelligent with these tools may make you more vulnerable, not less. If you use AI or Google to research your health, this conversation is specifically for you.

    Topics Discussed
    Why AI self-diagnosis is dangerous specifically for informed, health-conscious people
    What ER physicians are actually seeing when patients arrive with internet-sourced diagnoses
    How confirmation bias turns AI research into an expensive form of being wrong
    When AI-assisted supplement optimization is useful — and when it's not
    Why peer-reviewed research and AI training data are not the same thing
    What a responsible approach to AI health research actually looks like

    CHAPTERS
    0:00 — Jeremy's Intro: Sick and Googling While Hosting an AI Health Episode
    1:17 — Kids Unplugging: Why In-Person Dating Is the New Counterculture
    2:40 — The No-Wi-Fi Coffee Shop and What the Internet Can't Tell You
    9:47 — I Let ChatGPT Optimize My Supplement Stack. Here's What Happened.
    11:59 — The Telemedicine Loophole: AI + Social Engineering for Prescriptions
    14:25 — Why Your Doctor Doesn't Know What You're Supplementing
    20:16 — NIH PubMed Is Being Scrubbed — and Why That Matters
    28:40 — She's Not Fighting Logic. She's Fighting Belief.
    32:58 — Star Trek, Dr. McCoy, and the Tricorder We're Almost Building
    37:11 — What a PubMed-Only AI Would Actually Look Like
    44:58 — The Tool Gets You 80% There. The Human Closes the Gap.

    49 min
  7. When AI Becomes a Weapon: The Government Deal Anthropic Refused

    MAR 3

    When AI Becomes a Weapon: The Government Deal Anthropic Refused

    The US government asked Anthropic — the company behind Claude, one of the most capable AI coding systems on the market — to help build autonomous weapons and a mass surveillance infrastructure. Anthropic said no. That refusal, which happened the same week the US launched strikes on Iran, is either the most principled corporate decision in recent AI history or the beginning of a very ugly fight over who controls the most powerful tools ever built. Jeremy and Jason break down what the government actually asked for, why Anthropic refused, what OpenAI and Elon Musk did instead, and what it means for all of us when the people writing the guardrails are the same people being pressured to remove them.

    Topics Discussed:
    Why autonomous AI weapons systems default to nuclear launch in virtually every war game simulation
    What Anthropic's Claude can actually do — and why the US government wants it so badly
    How AI turns existing NSA surveillance infrastructure into something exponentially more dangerous
    Why OpenAI and Elon Musk said yes to the same deal Anthropic refused
    Why the people most confident they're using AI as a tool might be the ones AI ends up using

    Chapters
    0:00 — When AI Meets War: What We're Actually Talking About
    1:15 — What Claude Can Really Do (And Why the Government Wants It)
    4:18 — The Autonomous Cyber Weapon Problem
    5:28 — Why Anthropic Said No to the Money
    6:26 — Mass Surveillance, AI, and What's Already Running
    9:45 — When War Games Go Nuclear: The 95% Problem
    13:01 — AGI Is Already Here. We Just Didn't Call It That.
    17:33 — Why Anthropic's Refusal Might Be Their Smartest Business Move
    22:06 — Who's Actually Using Whom

    MORE FROM BROBOTS: Get the Newsletter!

    26 min
  8. Using AI to Work Through Anxiety: Does It Actually Help?

    MAR 2

    Using AI to Work Through Anxiety: Does It Actually Help?

    Most people using AI for anxiety aren't following a protocol — they stumbled into it. Emma Klint, a writer and Substack creator, accidentally discovered she was doing exposure therapy by typing 'I don't know' over and over into an AI chat window. In this episode, Jeremy and Jason sit down with Emma to stress-test what AI-assisted self-reflection actually looks like: the real benefits, the obvious limits, and the uncomfortable question of whether outsourcing your feelings is the same thing as actually feeling them. If you've wondered whether talking to a robot about your problems is legitimate or just avoidance with extra steps — this conversation will give you a clearer answer.

    Guest website: (Over)thinking Out Loud - Emma Klint

    Topics discussed:
    Why using AI for anxiety isn't the same thing as outsourcing your feelings
    How one writer accidentally discovered she was doing exposure therapy in her chat window
    What makes AI different from journaling — and why that difference matters for anxious brains
    When AI mental health use helps, and when it's just avoidance with extra steps
    Why neurodivergent people may be getting the most out of these conversations
    How to tell the difference between AI that's helping you think and AI that's just telling you what you want to hear

    Chapters:
    0:00 — The 2AM Chatbot Question: Is This Therapy or Avoidance?
    0:42 — Using AI for Anxiety: What We're Actually Testing
    3:04 — The Judgment-Free Space: Why 'I Don't Know' Changes Things
    5:01 — AI as a Journal That Writes Back
    9:23 — Is the Advice Good, or Is Naming the Feeling Enough?
    11:00 — When AI Tries to Be Blunt (And Still Fails)
    13:00 — Why Prompt Engineering Is Already Outdated for This
    15:50 — ADHD, Neurodivergence, and Why AI Might Be the Real Unlock
    18:18 — Outsourcing vs. Externalizing: The Line That Matters

    MORE FROM BROBOTS: Get the Newsletter!

    21 min
5 out of 5, 86 Ratings

