The Signal Room | AI Strategy, Ethical AI & Regulation

Chris Hutchins | Healthcare AI Strategy, Readiness & Governance

Healthcare AI leadership, ethics, and LLM strategy—hosted by Chris Hutchins. The Signal Room explores how healthcare leaders, data executives, and innovators navigate AI readiness, governance, and real-world implementation. Through authentic conversations, the show surfaces the signals that matter at the intersection of healthcare ethics, large language models (LLMs), and executive decision-making. 

  1. 5D AGO

    Caregivers as the Connective Tissue of Healthcare Innovation | Amanda Roser

    Healthcare innovation and leadership require understanding who actually coordinates care across complex systems. Behind every treatment plan is a caregiver doing the work no one else sees. Parents navigating rare disease care often become the organizers, translators, and connectors holding the healthcare system together. They track symptoms, manage appointments, translate medical language, and bridge communication between specialists who may never speak directly to one another. In this episode of The Signal Room, Chris Hutchins sits down with Amanda Roser, Vice President of Marketing at Social Strategy1, Head of Marketing for Ketotic Hypoglycemia International, and a parent advocate navigating rare disease care with her son.

    They discuss:
    • The hidden operational role caregivers play in healthcare
    • The language families must learn in order to advocate effectively
    • Why caregivers often have to retell the patient story at every appointment
    • The coordination work happening outside the medical record
    • How new tools are helping families prepare for clinical conversations
    • What healthcare systems could look like if caregivers were recognized as part of the care team

    This conversation explores the realities of caregiving inside complex healthcare systems and what leaders designing care models might learn from the families navigating them every day. If you care about patient advocacy, healthcare leadership, and the realities behind rare disease care, this episode surfaces a perspective too often missing from conversations about healthcare innovation and system design.

    50 min
  2. MAR 4

    The Enterprise AI Journey: From Data Foundations to Generative and Agentic AI | Gary Cao

    AI strategy and AI governance at the enterprise level are not tool decisions. They are operating model decisions that determine whether organizations scale responsibly or stall. In this episode of The Signal Room, Chris Hutchins sits down with Gary Cao, Chief Data & Analytics / AI Officer, to explore the enterprise AI journey from an executive perspective. This conversation moves beyond hype and definitions. Instead, it focuses on what actually changes inside an organization when AI becomes strategic:
    • Moving from AI experimentation to enterprise maturity
    • Integrating generative AI into structured data environments
    • Deterministic systems vs. probabilistic reasoning
    • The role of semantic layers and data management bottlenecks
    • Automation vs. agentic AI systems
    • Measuring enterprise ROI in an era of high abandonment rates

    Gary shares practical insight into AI maturity models, governance design, risk tolerance tiers, and the evolving role of the CDAO in coordinating strategy, technology, and accountability. If you are a board member, C-suite executive, data leader, or healthcare leadership team navigating AI strategy at scale, this episode provides a grounded view of what it takes to move from ambition to responsible AI execution.

    Connect with Gary Cao on LinkedIn: https://www.linkedin.com/in/garycao/

    Subscribe to The Signal Room for conversations at the intersection of leadership, governance, and AI innovation.

    42 min
  3. FEB 18

    Why AI Governance and Verification, Not Speed, Is the Real Bottleneck in Pharmaceutical Innovation

    AI is transforming drug discovery—but faster models alone do not get drugs approved. In this episode of The Signal Room, host Chris Hutchins speaks with David Finkelshteyn, CEO of Pivotal AI, about why verification—not speed or model accuracy—is the real bottleneck in pharmaceutical AI. David explains why generating AI-designed molecules without rigorous validation creates more risk than value, especially in regulated environments like pharma and healthcare. The conversation breaks down where AI outputs most often fail between discovery and regulatory acceptance, why black-box models struggle under scrutiny, and what it actually means to verify an AI insight in drug development. They also explore practical challenges around data integrity, auditability, missing context, hallucinations, and the growing use of consumer AI tools in health decisions. Rather than chasing hype, this episode focuses on how AI can responsibly accelerate drug development by failing faster, tightening verification loops, and building systems that can be defended to regulators, auditors, and clinicians. This episode is essential listening for leaders working in pharmaceutical R&D, healthcare AI, data science, AI governance, and regulated technology environments.

    Guest: David Finkelshteyn, CEO, Pivotal AI
    LinkedIn: https://www.linkedin.com/in/david-finkelshteyn-03191a130/

    38 min
  4. FEB 11

    No Alerts, Still Breached: Understanding Cybersecurity Risks and Ethical Leadership in Healthcare AI

    This episode explores ethical leadership and AI governance challenges in healthcare cybersecurity, emphasizing the risks of undetected breaches. In this episode of The Signal Room, Chris Hutchins speaks with Guman Chauhan, a cybersecurity and risk leader, about one of the most dangerous conditions in modern organizations: being breached and not knowing it. While dashboards stay green and alerts stay quiet, attackers increasingly operate using valid credentials, normal behavior patterns, and long dwell times—remaining invisible for weeks or months. Guman explains why “no alerts” is often mistaken for “no breach,” and why silence is one of the most misleading signals in cybersecurity. The conversation unpacks how attackers deliberately avoid detection, why security tools alone do not equal security outcomes, and where organizations create blind spots through untested assumptions, alert fatigue, and fragmented processes. They explore why undetected breaches are more damaging than known ones, how time compounds risk once attackers are inside, and what separates organizations that mature after incidents from those that repeat the same failures. Guman emphasizes that proven security is not built on policies, certifications, or dashboards—but on continuous testing, validated detection, and teams that know how to act under pressure. This episode is a practical guide for executives, security leaders, healthcare organizations, and regulated enterprises that need to move from assumed security to proven breach readiness.

    Guest: Guman Chauhan
    LinkedIn: https://www.linkedin.com/in/guman-chauhan-m-s-cissp-cism-600824103/

    Topics Covered
    • Why undetected breaches are more dangerous than known breaches
    • How attackers use valid credentials to avoid detection
    • Why “no alerts” does not mean “no breach”
    • Alert fatigue and the signal-to-noise problem
    • Security tools vs security outcomes
    • Visibility gaps, unknown assets, and logging failures
    • External penetration testing and real-world validation
    • Cultural and leadership factors in breach response
    • Assumed security vs proven security

    Key Takeaways
    • Silence is not security; it often means you are not seeing the right signals.
    • Most breaches go undetected because attackers behave like legitimate users.
    • Security tools do not fail—untested assumptions do.
    • Alert fatigue hides real risk by normalizing noise.
    • Proven security requires testing detection and response end to end.
    • Mature organizations treat breaches as learning moments, not events to hide.
    • Confidence without validation creates the most dangerous blind spots.

    Chapters / Timestamps
    00:00 – Why undetected breaches are the real risk
    02:30 – Being breached vs being breached and not knowing
    06:00 – How attackers stay invisible using valid credentials
    08:30 – Why dashboards and alerts create false confidence
    10:00 – Common reasons breaches go undetected for months
    13:30 – Security tools vs security outcomes
    16:00 – Technology, process, and people failures
    19:30 – Alert fatigue and finding real signals
    22:30 – Why external penetration testing still matters
    26:30 – What mature organizations do after a breach
    31:00 – One action to improve breach readiness this year
    32:45 – The uncomfortable question every leader should ask
    34:30 – Assumed security vs proven security
    36:30 – How to connect with Guman & closing

    34 min
  5. FEB 4

    Scaling Care with Responsible AI: Healthcare Leadership, Human Judgment, and Clinical Trust

    What does it truly mean to scale care with AI inside a real hospital environment? In this episode of The Signal Room, host Chris Hutchins talks with Mark Gendreau, emergency physician and Chief Medical Officer, about the intersection of healthcare AI, ethical leadership, and AI strategy. Together, they discuss how AI is transforming clinical workflows by amplifying human judgment rather than replacing it. They explore real-world applications in healthcare AI such as radiology co-pilots, ambient clinical documentation, and workflow intelligence designed to relieve clinician burnout. Dr. Gendreau highlights the need for responsible AI and human oversight in high-reliability healthcare settings. The conversation also covers critical topics like AI governance, clinical trust, alert fatigue, and leadership accountability. Listeners will gain insights into why successful AI adoption in healthcare depends on culture and ethical leadership, not just technology. This episode is essential for healthcare leaders, clinicians, informaticists, and policymakers seeking practical guidance on AI readiness, ethical AI practices, and driving AI strategies that improve patient care while maintaining human judgment at the core.

    Key Takeaways
    • AI delivers the most value when it amplifies clinicians, not when it attempts to replace them
    • Human judgment is essential in high-risk clinical decisions, even with advanced AI support
    • Ambient documentation can dramatically reduce after-hours EHR work (“pajama time”)
    • Alert fatigue is a governance problem, not just a technical one
    • Trust in AI is built through reliability, transparency, and clear ethical intent
    • Successful AI adoption depends more on leadership and culture than IT execution
    • Interoperability and governance are the biggest barriers to scaling AI across health systems
    • Emotional intelligence, empathy, and shared decision-making remain human responsibilities

    Guest Info
    Mark Gendreau, MD, MS, CPE
    Emergency Medicine Physician | Chief Medical Officer
    Dr. Gendreau is an experienced emergency physician and healthcare executive with deep expertise in clinical operations, patient safety, and responsible AI adoption. He focuses on using technology to improve access, quality, and clinician experience while preserving the human core of medicine.
    🔗 LinkedIn: https://www.linkedin.com/in/markgendreaumd/

    Chapters (YouTube & Spotify)
    00:00 – Introduction and framing the AI scaling challenge
    01:18 – Workforce scarcity and why AI must amplify clinicians
    02:10 – AI in radiology: co-pilots, fatigue reduction, and safety
    05:26 – Ambient documentation and eliminating “pajama time”
    07:17 – Using AI to improve clinician communication and empathy
    09:33 – Where AI falls short and why humans must stay in the loop
    12:44 – Guardrails, trust, and human-AI partnership
    13:44 – Trust in AI vs trust in human relationships
    16:07 – Adoption curves and clinician buy-in
    18:05 – Why AI fails when treated as an IT project
    20:41 – Leadership’s role in shaping AI culture
    22:07 – Interoperability, governance, and scaling challenges
    26:04 – Signals that an organization is truly AI-ready
    29:26 – Emotional intelligence and where AI should never lead
    33:59 – Alert fatigue and governance accountability
    37:27 – Measuring success: outcomes, equity, and pajama time
    38:36 – How to connect with Dr. Gendreau
    39:31 – Episode close

    34 min
  6. JAN 28

    From AI Hype to Real Value: Crafting Healthcare AI Strategy for Business Impact

    Healthcare AI strategy separates organizations that deliver real business value from those stuck in AI hype. The gap is not a technology problem. In this episode of The Signal Room, host Chris Hutchins sits down with Parth Gargish, a SaaS and AI product leader, to examine what separates AI initiatives that deliver measurable outcomes from those that stall after the proof of concept. Drawing from extensive experience in AI-driven product development, Parth shares practical insights on building AI-first approaches that prioritize ethical leadership, responsible adoption, and workforce readiness alongside technical execution. The conversation moves beyond theoretical frameworks to address how organizations can identify the right use cases, build cross-functional alignment, and measure AI impact in terms that business stakeholders actually care about. Key themes include why AI strategy must be anchored in business outcomes rather than model capabilities, how to build trust in AI systems through transparency and human oversight, and what workforce enablement looks like when AI reshapes roles and responsibilities. The discussion also addresses the risks of deploying AI without clear governance and the organizational readiness required to sustain AI-driven value over time. This episode is essential for healthcare leaders, product managers, and AI strategists navigating the transition from AI experimentation to operational impact.

    Guest: Parth Gargish
    LinkedIn: linkedin.com/in/parth-gargish

    26 min
