Code & Cure

Vasanth Sarathy & Laura Hagopian

Decoding health in the age of AI Hosted by an AI researcher and a medical doctor, this podcast unpacks how artificial intelligence and emerging technologies are transforming how we understand, measure, and care for our bodies and minds. Each episode unpacks a real-world topic to ask not just what’s new, but what’s true—and what’s at stake as healthcare becomes increasingly data-driven. If you're curious about how health tech really works—and what it means for your body, your choices, and your future—this podcast is for you. We’re here to explore ideas—not to diagnose or treat. This podcast doesn’t provide medical advice.

  1. 1D AGO

    #27 - Sleep’s Hidden Forecast

    What if one night in a sleep lab could offer a glimpse into your long-term health? Researchers are now using a foundation model trained on hundreds of thousands of hours of sleep data to do just that, by predicting the next five seconds of a polysomnogram, the model learns the rhythms of sleep and, with minimal fine-tuning, begins estimating risks for conditions like Parkinson’s, dementia, heart failure, stroke, and even some cancers. We break down how it works: during a sleep study, sensors capture brain waves (EEG), eye movements (EOG), muscle tone (EMG), heart rhythms (ECG), and breathing. The model compresses these multimodal signals into a reusable format, much like how language models process text. Add a small neural network, and suddenly those sleep signals can help predict disease risk up to six years out. The associations make clinical sense: EEG patterns are more telling for neurodegeneration, respiratory signals flag pulmonary issues, and cardiac rhythms hint at circulatory problems. But, the scale of what’s possible from a single night’s data is remarkable. We also tackle the practical and ethical questions. Since sleep lab patients aren’t always representative of the general population, we explore issues of selection bias, fairness, and external validation. Could this model eventually work with consumer wearables that capture less data but do so every night? And what should patients be told when risk estimates are uncertain or only partially actionable? If you're interested in sleep science, AI in healthcare, or the delicate balance of early detection and patient anxiety, this episode offers a thoughtful look at what the future might hold—and the trade-offs we’ll face along the way. Reference:  A multimodal sleep foundation model for disease prediction Rahul Thapa Nature (2026) Credits:  Theme music: Nowhere Land, Kevin MacLeod (incompetech.com) Licensed under Creative Commons: By Attribution 4.0 https://creativecommons.org/licenses/by/4.0/

    24 min
  2. JAN 8

    #26 - How Your Phone Keyboard Signals Your State Of Mind

    What if your keyboard could reveal your mental health? Emerging research suggests that how you type—not what you type—could signal early signs of depression. By analyzing keystroke patterns like speed, timing, pauses, and autocorrect use, researchers are exploring digital biomarkers that might quietly reflect changes in mood. In this episode, we break down how this passive tracking compares to traditional screening tools like the PHQ. While questionnaires offer valuable insight, they rely on memory and reflect isolated moments. In contrast, continuous keystroke monitoring captures real-world behaviors—faster typing, more pauses, shorter sessions, and increased autocorrect usage—all patterns linked to mood shifts, especially when anxiety overlaps with depression. We discuss the practical questions this raises: How do we account for personal baselines and confounding factors like time of day or age? What’s the difference between correlation and causation? And how can we design systems that protect privacy while still offering clinical value? From privacy-preserving on-device processing to broader behavioral signals like sleep and movement, this conversation explores how digital phenotyping might help detect depression earlier—and more gently. If you're curious about AI in healthcare, behavioral science, or the ethics of digital mental health tools, this episode lays out both the potential and the caution needed. Reference:  Effects of mood and aging on keystroke dynamics metadata and their diurnal patterns in a large open-science sample: A BiAffect iOS study Claudia Vesel et al. J Am Med Inform Assoc (2020) Credits:  Theme music: Nowhere Land, Kevin MacLeod (incompetech.com) Licensed under Creative Commons: By Attribution 4.0 https://creativecommons.org/licenses/by/4.0/

    20 min
  3. JAN 1

    #25 - When Safety Slips: Prompt Injection in Healthcare AI

    What happens when a chatbot follows the wrong voice in the room? In this episode, we explore the hidden vulnerabilities of prompt injection, where malicious instructions and fake signals can mislead even the most advanced AI into offering harmful medical advice. We unpack a recent study that simulated real patient conversations, subtly injecting cues that steered the AI to make dangerous recommendations—including prescribing thalidomide for pregnancy nausea, a catastrophic lapse in medical judgment. Why does this happen? Because language models aim to be helpful within their given context, not necessarily to prioritize authoritative or safe advice. When a browser plug-in, a tainted PDF, or a retrieved web page contains hidden instructions, those can become the model’s new directive, undermining guardrails and safety layers. From direct “ignore previous instructions” overrides to obfuscated cues in code or emotionally charged context nudges, we map the many forms of this attack surface. We contrast these prompt injections with hallucinations, examine how alignment and preference training can unintentionally amplify risks, and highlight why current defenses, like content filters or system prompts, often fall short in clinical use. Then, we get practical. For AI developers: establish strict instruction boundaries, sanitize external inputs, enforce least-privilege access to tools, and prioritize adversarial testing in medical settings. For clinicians and patients: treat AI as a research companion, insist on credible sources, and always confirm drug advice with licensed professionals. AI in healthcare doesn’t need to be flawless, but it must be trustworthy. If you’re invested in digital health safety, this episode offers a clear-eyed look at where things can go wrong and how to build stronger, safer systems. If you found it valuable, follow the show, share it with a colleague, and leave a quick review to help others discover it. Reference:  Vulnerability of Large Language Models to Prompt Injection When Providing Medical Advice Ro Woon Lee JAMA Open Health Informatics (2025) Credits:  Theme music: Nowhere Land, Kevin MacLeod (incompetech.com) Licensed under Creative Commons: By Attribution 4.0 https://creativecommons.org/licenses/by/4.0/

    25 min
  4. 12/25/2025

    #24 - What Else Is Hiding In Medical Images?

    What if a routine mammogram could do more than screen for breast cancer? What if that same image could quietly reveal a woman’s future risk of heart disease—without extra tests, appointments, or burden on patients? In this episode, we explore a large-scale study that uses deep learning to uncover cardiovascular risk hidden inside standard breast imaging. By analyzing mammograms that millions of women already receive, researchers show how a single scan can deliver a powerful second insight for women’s health. Laura brings the clinical perspective, unpacking how cardiovascular risk actually shows up in practice—from atypical symptoms to prevention decisions—while Vasanth walks us through the AI system that makes this dual-purpose screening possible. We begin with the basics: how traditional cardiovascular risk tools like PREVENT work, what data they depend on, and why—despite their proven value—they’re often underused in real-world care. From there, we turn to the mammogram itself. Features such as breast arterial calcifications and subtle tissue patterns have long been linked to vascular disease, but this approach goes further. Instead of focusing on a handful of predefined markers, the model learns from the entire image combined with age, identifying patterns that humans might never think to look for. Under the hood is a survival modeling framework designed for clinical reality, where not every patient experiences an event during follow-up, yet every data point still matters. The takeaway is striking: the imaging-based risk score performs on par with established clinical tools. That means clinicians could flag cardiovascular risk during a test patients are already getting—opening the door to earlier conversations about blood pressure, cholesterol, diabetes, and lifestyle changes. We also zoom out to the bigger picture. If mammograms can double as heart-risk detectors, what other routine tests are carrying untapped signals? Retinal images, chest CTs, pathology slides—each may hold clues far beyond their original purpose. With careful validation and attention to bias, this kind of opportunistic screening could expand access to prevention and shift care further upstream. If this episode got you thinking, share it with a colleague, subscribe for more conversations at the intersection of AI and medicine, and leave a review telling us which everyday medical test you think deserves a second life. Reference:  Predicting cardiovascular events from routine mammograms using machine learning Jennifer Yvonne Barraclough Heart (2025) Credits:  Theme music: Nowhere Land, Kevin MacLeod (incompetech.com) Licensed under Creative Commons: By Attribution 4.0 https://creativecommons.org/licenses/by/4.0/

    24 min
  5. 12/18/2025

    #23 - Designing Antivenom With Diffusion Models

    What if the future of antivenom didn’t come from horse serum, but from AI models that shape lifesaving proteins out of noise? In this episode, we explore how diffusion models, powerful tools from the world of AI, are transforming the design of antivenoms, particularly for some of nature’s deadliest neurotoxins. Traditional antivenom is costly, unstable, and can provoke serious immune reactions. But for toxins like those from cobras, mambas, and sea snakes that are potent yet hard to target with immune responses, new strategies are needed. We begin with the problem: clinicians face high-risk toxins and a shortage of effective, safe treatments. Then we dive into the breakthrough: using diffusion models like RosettaFold Diffusion to generate novel protein binders that precisely fit the structure of snake toxins. These models start with random shapes and iteratively refine them into stable, functional proteins, tailored to neutralize the threat at the molecular level. You’ll hear how these designs were screened for strength, specificity, and stability, and how the top candidates performed in mouse studies—protecting respiration and holding promise for more scalable, less reactive therapies. Beyond venom, this approach hints at a broader shift in drug development: one where AI accelerates discovery by reasoning in shape, not just sequence. We wrap by looking ahead at the challenges in manufacturing, regulation, and real-world validation, and why this shape-first design mindset could unlock new frontiers in precision medicine. If you’re into biotech with real-world impact, subscribe, share, and leave a review to help more curious listeners discover the show. Reference:  Novel Proteins to Neutralize Venom Toxins José María Gutiérrez New England Journal of Medicine (2025) Credits:  Theme music: Nowhere Land, Kevin MacLeod (incompetech.com) Licensed under Creative Commons: By Attribution 4.0 https://creativecommons.org/licenses/by/4.0/

    21 min
  6. 12/11/2025

    #22 - Hope, Help, and the Language We Choose

    What if the words we use could tip the balance between seeking help and staying silent? In this episode, we explore a fascinating study that compares top-voted Reddit responses with replies generated by large language models (LLMs) to uncover which better reduces stigma around opioid use disorder—and why that distinction matters. Drawing from Laura’s on-the-ground ER experience and Vasanth’s research on language and moderation, we examine how subtle shifts, like saying “addict” versus “person with OUD, ” can reshape beliefs, impact treatment, and even inform policy. The study zeroes in on three kinds of stigma: skepticism toward medications like Suboxone and methadone, biases against people with OUD, and doubts about the possibility of recovery. Surprisingly, even with minimal prompting, LLM responses often came across as more supportive, hopeful, and factually accurate. We walk through real examples where personal anecdotes, though well-intended, unintentionally reinforced harmful myths—while AI replies used precise, compassionate language to challenge stigma and foster trust. But this isn’t a story about AI hype. It’s about how moderation works in online communities, why tone and pronouns matter, and how transparency is key. The takeaway? Language is infrastructure. With thoughtful design and human oversight, AI can help create safer digital spaces, lower barriers to care, and make it easier for people to ask for help, without fear. If this conversation sparks something for you, follow the show, share it with someone who cares about public health or ethical tech, and leave us a review. Your voice shapes this space: what kind of language do you want to see more of? Reference:  Exposure to content written by large language models can reduce stigma around opioid use disorder Shravika Mittal et al. npj Artificial Intelligence (2025) Credits:  Theme music: Nowhere Land, Kevin MacLeod (incompetech.com) Licensed under Creative Commons: By Attribution 4.0 https://creativecommons.org/licenses/by/4.0/

    25 min
  7. 12/04/2025

    #21 - The Rural Reality Check for AI

    How can AI-powered care truly serve rural communities? It’s not just about the latest tech, it’s about what works in places where internet can drop, distances are long, and people often underplay symptoms to avoid making a fuss. In this episode, we explore what it takes for AI in healthcare to earn trust and deliver real value beyond city limits. From wearables that miss the mark on weak broadband to triage tools that misjudge urgency, we reveal how well-meaning innovations can falter in rural settings. Through four key use cases—predictive monitoring, triage, conversational support, and caregiver assistance—we examine the subtle ways systems fail: false positives, alarm fatigue, and models trained on data that doesn’t reflect rural realities. But it’s not just a tech problem—it’s a people story. We highlight the importance of offline-first designs, region-specific audits, and data that mirrors local language and norms. When AI tools are built with communities in mind, they don’t just alert—they support. Nurses can follow up. Caregivers can act. Patients can trust the system. With the right approach, AI won’t replace relationships—it’ll reinforce them. And when local teams, family members, and clinicians are all on the same page, care doesn’t just reach further. It gets better. Subscribe for more grounded conversations on health, AI, and care that works. And if this episode resonated, share it with someone building tech for real people—and leave a review to help others find the show. Reference:  From Bandwidth to Bedside — Bringing AI-Enabled Care to Rural America Angelo E. Volandes et al. New England Journal of Medicine (2025) Credits:  Theme music: Nowhere Land, Kevin MacLeod (incompetech.com) Licensed under Creative Commons: By Attribution 4.0 https://creativecommons.org/licenses/by/4.0/

    20 min
  8. 11/27/2025

    #20 - Google Translate Walked Into An ER And Got A Reality Check

    What if your discharge instructions were written in a language you couldn’t read? For millions of patients, that’s not a hypothetical, but a safety risk. And at 2 a.m. in a busy hospital, translation isn’t just a convenience; it’s clinical care. In this episode, we explore how AI can bridge the language gap in discharge instructions: what it does well, where it stumbles, and how to build workflows that support clinicians without slowing them down. We unpack what these instructions really include: condition education, medication details, warning signs, and follow-up steps, all of which need to be clear, accurate, and culturally appropriate. We trace the evolution of translation tools, from early rule-based systems to today’s large language models (LLMs), unpacking the transformer breakthrough that made flexible, context-aware translation possible. While small, domain-specific models offer speed and predictability, LLMs excel at simplifying jargon and adjusting tone. But they bring risks like hallucinations and slower response times. A recent study adds a real-world perspective by comparing human and AI translations across Spanish, Chinese, Somali, and Vietnamese. The takeaway? Quality tracks with data availability: strongest for high-resource languages like Spanish, and weaker where training data is sparse. We also explore critical nuances that AI may miss: cultural context, politeness norms, and the role of family in decision-making. So what’s working now? A hybrid approach. Think pre-approved multilingual instruction libraries, AI models tuned for clinical language, and human oversight to ensure clarity, completeness, and cultural fit. For rare languages or off-hours, AI support with clear thresholds for interpreter review can extend access while maintaining safety. If this topic hits home, follow the show, share with a colleague, and leave a review with your biggest question about AI and clinical communication. Your insights help shape safer, smarter care for everyone. Reference:  Accuracy of Artificial Intelligence vs Professionally Translated Discharge Instructions Melissa Martos, et al.  JAMA Network Open (2025) Credits:  Theme music: Nowhere Land, Kevin MacLeod (incompetech.com) Licensed under Creative Commons: By Attribution 4.0 https://creativecommons.org/licenses/by/4.0/

    32 min
5
out of 5
4 Ratings

About

Decoding health in the age of AI Hosted by an AI researcher and a medical doctor, this podcast unpacks how artificial intelligence and emerging technologies are transforming how we understand, measure, and care for our bodies and minds. Each episode unpacks a real-world topic to ask not just what’s new, but what’s true—and what’s at stake as healthcare becomes increasingly data-driven. If you're curious about how health tech really works—and what it means for your body, your choices, and your future—this podcast is for you. We’re here to explore ideas—not to diagnose or treat. This podcast doesn’t provide medical advice.