The Signal Room | AI in Healthcare & Ethical AI

Chris Hutchins | AI Governance & Ethical AI Leadership Expert

Welcome to The Signal Room, your go-to podcast for expert insights on ethical AI, AI strategy, and AI governance in healthcare and beyond. Hosted by Chris Hutchins, this show explores leadership strategies, responsible AI development, and real-world implementation challenges faced by healthcare AI leaders. Each episode features deep conversations covering healthcare AI innovation, executive decision-making, regulatory compliance, and how to build trustworthy AI systems that transform clinical and operational realities. Whether you are an AI strategist, healthcare executive, or AI enthusiast committed to ethical leadership, The Signal Room equips you with the knowledge and tools to lead AI transformation effectively and responsibly. Join us to learn from industry experts and healthcare leaders navigating the evolving landscape of AI governance, leadership ethics, and AI readiness. Follow The Signal Room and stay updated on the latest trends shaping the future of ethical AI and healthcare innovation.

  1. 5D AGO

    The AI Shutdown is Here: Why Most Projects Will Fail in 2026 | AI Governance & Ethical AI Strategy with Andre Samokish

    In this timely episode of The Signal Room, host Chris Hutchins speaks with AI governance and privacy expert Andre Samokish about the looming AI shutdown in 2026 and why most AI projects will fail without strong governance, ethical frameworks, and strategic leadership.

    You'll learn:
    - The critical difference between privacy governance, AI governance, and cybersecurity, and why conflating them creates dangerous blind spots
    - Why governance isn't a project blocker but the essential pathway to moving fast, safely
    - The three pillars of true AI literacy for both technical and non-technical teams
    - How to embed privacy by design and ethical controls into AI product workflows before launch
    - The most common failure modes in data collection, model deployment, and organizational culture that threaten AI success
    - Practical tools, certifications, and communities to build your AI governance knowledge today

    This episode is a must-listen for healthcare AI leaders, strategists, and executives aiming to navigate AI transformation with responsible, ethical leadership and avoid project shutdowns. Connect with Andre Samokish and learn how to future-proof your AI initiatives with robust AI governance and strategy.

    About The Signal Room: The Signal Room is a podcast and communications platform exploring leadership, ethics, and innovation in healthcare and artificial intelligence. Hosted by Christopher Hutchins, Founder and CEO of Hutchins Data Strategy Consultants. Leadership, ethics, and innovation, amplified.
    Website: https://www.hutchinsdatastrategy.com
    LinkedIn: https://www.linkedin.com/in/chutchins-healthcare/
    YouTube: https://www.youtube.com/@ChrisHutchinsAi
    Book Chris to speak: https://www.chrisjhutchins.com

    43 min
  2. APR 1

    Good People Are Quietly Quitting: Ethical Leadership, AI Strategy & Why Culture Determines AI Success | Carly Caminiti

    Ethical leadership failures are quietly driving your best people out the door. Carly Caminiti joins Chris Hutchins to explore the intersection of AI leadership strategies, organizational culture, and why ethical leadership is the foundation that determines whether AI transformation succeeds or collapses from within. This episode examines how leadership ethics shape AI adoption, why AI coaching for leaders must address culture before technology, and what happens when organizations pursue AI strategy without investing in the people who execute it.

    Topics covered: ethical leadership in AI-driven organizations, AI leadership strategies for healthcare and enterprise, leadership and AI cultural alignment, AI coaching for leaders navigating transformation, leadership ethics in the age of automation, responsible AI development through workforce trust, and why quiet quitting is a governance signal leaders cannot afford to ignore.

    If you lead teams through AI transformation, advise on leadership development, or care about building organizations where ethical AI and ethical leadership reinforce each other, this conversation will challenge your assumptions. Listen to The Signal Room for expert insights on AI governance, AI strategy, and ethical AI in healthcare and beyond.

    About the guest: Carly Caminiti is a leadership coach and burnout prevention specialist who works with executives and teams across healthcare and corporate environments. She is the creator of the 5C Leadership Performance System, a 12-week coaching…

    49 min
  3. MAR 25

    Healthcare Experts on Ethical AI in Operational Reality: AI Transformation Strategies and Healthcare Innovation | Markeisha Snaith

    Healthcare experts are redefining what AI transformation strategies look like inside real health systems. MarKeisha Snaith joins Chris Hutchins to share how healthcare innovation and ethical AI converge in the daily operational reality of clinical leadership. This episode explores what healthcare experts actually encounter when implementing AI transformation strategies, how AI governance decisions play out at the bedside, and why healthcare innovation leadership requires both technical fluency and deep operational empathy.

    Topics covered: healthcare experts navigating AI adoption, AI transformation strategies for health systems, AI healthcare innovations in clinical operations, healthcare innovation leadership, ethical AI in healthcare operations, AI governance in practice, healthcare leadership and responsible AI, and the gap between AI strategy and clinical execution.

    For healthcare leaders, AI strategists, and anyone advising on AI transformation in healthcare, this conversation surfaces the operational truths that conference keynotes rarely address. Listen to The Signal Room for expert insights on AI governance, AI strategy, and ethical AI in healthcare and beyond.
    Timestamps:
    00:00 Introduction: What healthcare experts really face with AI transformation
    03:30 MarKeisha Snaith on AI governance in clinical reality
    10:00 AI transformation strategies that survive contact with operations
    17:00 AI healthcare innovations: what is working and what is not
    24:00 Healthcare innovation leadership at the intersection of tech and care
    31:00 Ethical AI when patient outcomes depend on the model
    37:00 Building healthcare leadership capacity for AI readiness
    43:00 The future of AI transformation strategies in health systems

    Resources mentioned:
    - The Signal Room: https://www.signalroompodcast.com
    - Hutchins Data Strategy Consultants: https://www.hutchinsdatastrategy.com
    - Subscribe on Apple Podcasts, Spotify, and YouTube Music

    53 min
  4. MAR 18

    Healthcare AI and Rare Disease Caregiving: Why Patient Advocates Deserve a Seat at the Table | Amanda Roser

    What happens when the healthcare system was not designed for the complexity your family lives with every day? Amanda Roser knows. As the mother of a child diagnosed with glycogen storage disease type zero, a rare genetic metabolic disorder, Amanda has spent five years navigating fragmented care coordination across endocrinology, genetics, metabolic medicine, and gastroenterology. She has become her son's primary record keeper, medical translator, and patient advocate, roles the healthcare system quietly requires but rarely recognizes.

    In this episode of The Signal Room, host Christopher Hutchins, Founder and CEO of Hutchins Data Strategy Consultants, sits down with Amanda to explore how rare disease caregivers operate inside a healthcare infrastructure that was built for siloed specialties rather than complex, multi-system patients. Amanda shares how she trained an AI tool on her son's daily health patterns, lab history, and symptom tracking to communicate more effectively with physicians and surface treatment options that might otherwise be missed. During one hospital stay, she showed AI-generated lab analysis to her son's physician, and it changed the clinical conversation in real time.

    This episode challenges healthcare leaders, system designers, and technology teams to rethink how care coordination, interoperability, and patient advocacy intersect. Amanda and Chris discuss the operational burden caregivers carry between appointments, the communication breakdowns that put patients at risk, and why families deserve to be treated as essential members of the care team rather than passive recipients of clinical decisions. If you work in healthcare innovation, digital health, AI governance, or clinical system design, this conversation will reframe how you think about the end user of every system you build.
    Key topics covered in this episode:
    (00:00) Amanda's story: an ER dismissal that became a turning point for caregiver advocacy
    (02:12) What caregivers expect vs. what the healthcare system actually delivers
    (05:18) Becoming the coordinator: when parents realize the system depends on them
    (10:12) The invisible operational burden families carry between appointments
    (13:30) Gaps in patient tracking, documentation, and clinical communication
    (16:21) Learning medical terminology as a non-clinical caregiver
    (21:10) Interoperability failures and the "Groundhog Day" problem of retelling your story
    (26:55) The emotional and physical toll of caregiving in a fragmented system
    (30:04) Two real scenarios where care coordination broke down
    (35:22) How Amanda uses AI to translate, analyze labs, and prepare for appointments
    (40:37) What healthcare systems should change first to recognize caregivers
    (43:29) The signal healthcare leaders are missing from patient advocates
    (48:42) Why patient advocac…

    Disclaimer: This post contains affiliate links. If you make a purchase, I may receive a commission at no extra cost to you.

    50 min
  5. MAR 13

    AI Regulation in ER and Clinical Judgment: Why AI Tools Must Be Designed for 3 AM, Not 3 PM | Dr. Natasha Dole

    Understand AI regulation and its impact on emergency healthcare, alongside the enduring value of clinical judgment. Dr. Natasha Dole examines ethical leadership in healthcare AI. Emergency departments expose every weakness in AI systems because they demand speed, accuracy, and adaptive decision-making simultaneously. This conversation delivers a candid assessment of AI implementation in one of healthcare's most challenging environments.

    Trust gaps between emergency physicians and AI tools are not abstract concerns; they have direct consequences for patient outcomes. Emergency medicine environments reveal where AI systems lack contextual awareness and clinical nuance, making implementation failures visible immediately. Clinical expertise developed through years of emergency practice cannot be replicated by algorithms that lack the situational awareness experienced physicians develop.

    Topics covered: AI in emergency medicine implementation, clinical judgment vs. algorithmic recommendations, trust gaps in healthcare AI, emergency department workflows, digital health leadership in clinical settings, and the boundary between AI support and clinical authority.

    44 min
  6. MAR 4

    Enterprise AI Journey: Agentic AI, Generative AI and Data Foundations in Healthcare | Gary Cao

    What does it actually mean when an organization says it is on an AI journey? In most cases, according to Gary Cao, it means a vague intention and a handful of disconnected projects, without a holistic framework or a roadmap for the next three to five years. As a chief data, analytics, and AI officer with 30 years of experience across eight companies spanning healthcare, financial services, and multiple industries, Gary has built and led enterprise AI capabilities from the ground up. He brings a perspective shaped not by theory but by accountability.

    In this episode of The Signal Room, host Christopher Hutchins, Founder and CEO of Hutchins Data Strategy Consultants, sits down with Gary to unpack the four pillars that determine whether an AI initiative delivers value or stalls: business strategy, analytics and innovation, data management, and technology infrastructure. Gary explains why technology gets the most visibility and budget, data management stays below the surface, analytics culture remains underappreciated, and business strategy is the hardest conversation to have.

    He also draws a critical distinction between generative AI and traditional data analytics, arguing that without the right data foundation, generative AI produces helpful but ultimately superficial outputs that will not fundamentally change decision processes or workflows. The conversation moves into probabilistic versus deterministic thinking and why executives must become comfortable making decisions in ranges rather than exact answers. Gary introduces a three-part ROI scorecard that balances direct revenue impact, cost avoidance, and qualitative benefits that are hard to quantify but strategically essential. He also addresses the philosophical tension around workforce upskilling: if AI reduces headcount, why would employees welcome its adoption?
    If you lead enterprise data and analytics functions, advise boards on AI investment, or manage the translation between business strategy and technical execution, this episode maps the journey from crawl to walk to run.

    Key topics covered in this episode:
    (00:00) Gary on the CFO regret: we should have invested in data analytics years ago
    (00:24) Introduction: Gary Cao's career across healthcare, financial services, and enterprise AI
    (01:46) What organizations really mean when they say they're on an AI journey
    (02:39) Four pillars of AI maturity: business strategy, analytics, data management, technology
    (05:01) Enterprise AI framework from 30 years across eight companies
    (07:13) The hidden cost beneath technology contracts: getting data fit for use
    (09:33) Three layers of AI: traditional analytics, NLP and image processing, generative AI
    (12:41) The tension between enterprise systems and probabilistic AI models
    (13:13) Healthcare versus financial services: different tolerance for accuracy
    (18:07) Does generative AI need different governance than traditional analytics?
    (20:39) How executives should think about risk tolerance in probabilistic decision making
    (24:57) Historical bias in data and why governance must create space for judgment
    (26:25) Workforce upskilling and the phi…

    42 min
  7. FEB 25

    From AI Strategy to Execution: Ethical Leadership, Trust and the Operational Reality of Healthcare AI | Brian Sutherland

    Why do so many healthcare AI pilots stall before reaching enterprise scale? Brian Sutherland has seen the pattern from inside some of the largest healthcare organizations in the country. As an elite AI product manager and advisor focused on customer-facing AI and high-consequence healthcare environments, Brian built Humana's first member-facing intelligent virtual assistant, a platform now generating more than $7 million in annual savings while improving patient experience, including a 31% lift in task completion and measurable gains in satisfaction.

    In this episode of The Signal Room, host Christopher Hutchins, Founder and CEO of Hutchins Data Strategy Consultants, sits down with Brian to examine why AI initiatives commonly fail across leadership alignment, workflow design, and operating models simultaneously. Brian makes the case that technology is moving faster than people are willing to change, and that friction point is the most underestimated variable in any deployment plan.

    The conversation turns to a paradox shaping healthcare AI adoption: trust in human relationships has eroded significantly, yet organizations remain too quick to trust technology without adequate governance. Brian outlines a simple model for AI oversight: treat every system like a junior employee. Use structured onboarding (30/60/90 days), assign support, retrain as policies change, and expect mistakes. Design for failure, especially PHI breaches, with clear reporting and containment already in place. He also emphasizes governance as an enabler, the role of diverse perspectives in exposing blind spots, and the need for a human in the loop to maintain trust in care.
    Key topics covered in this episode:
    (00:00) Brian Sutherland: AI in high-stakes, customer-facing healthcare
    (01:48) Why AI fails: leadership, workflow, and operating model gaps
    (02:42) AI is still early: the learning curve organizations underestimate
    (03:08) Adoption gap: technology outpaces human change
    (03:51) The trust paradox: declining human trust, rising AI reliance
    (05:22) Why pilots don't scale: the 80/20 trust problem
    (06:25) Missing roles in AI execution and delivery
    (07:13) Learn before building: operating the workflow first
    (09:13) Workflow reality: designing on outdated processes
    (09:56) The gap between executive approval and frontline use
    (11:54) Governance done right: enablement vs. bureaucracy
    (14:30) Diverse perspectives: reducing AI blind spots
    (16:06) Resistance when blind spots force course correction
    (17:44) Governance as multi-perspective decision making
    (19:12) What organizational trust in AI actually looks like
    (21:35) Clinician trust: AI support without damaging relationships
    (22:28) Human-in-the-loop in personal healthcare decisions
    (24:29) Limits of AI empathy in patient interactions
    (26:48) Designing pause points for human judgment
    (27:46) Hallucinations and limits of generative AI
    (29:00) From early internet to AI: pace of change
    (31:16) Treating AI like a junior employee: training and oversight
    (33:35) Expiring approvals: governa…

    41 min
  8. FEB 18

    Why AI Verification Is the Real Bottleneck in Pharmaceutical Drug Discovery | David Finkelshteyn

    AI can now search through a vastly wider grid of possible compounds and molecules than any human team could evaluate in a lifetime. The bottleneck is no longer discovery speed. It is verification. David Finkelshteyn, CEO of Pivotal AI, builds AI systems for pharmaceutical and life sciences use cases that can be verified, defended, and trusted. His work sits at the intersection where machine learning outputs must survive regulators, audits, and real-world consequences involving human health.

    In this episode of The Signal Room, host Christopher Hutchins, Founder and CEO of Hutchins Data Strategy Consultants, sits down with David to examine why generating AI-designed drug candidates without rigorous verification is fundamentally meaningless. David explains that discovery and verification are inseparable. A model can propose thousands of novel molecules, but each must pass through pharmacokinetics screening, in vitro testing, in vivo validation, clinical target selection, and human trials. Every stage is a potential breakpoint, and AI introduces a new category of risk: when models design molecules that look unlike anything seen in nature, there is almost no historical data to predict how the human body will respond.

    The conversation covers the complexity-transparency tradeoff in machine learning, where more complex tasks demand more complex models that become less explainable. David walks through what real verification looks like in drug development, including the critical importance of separating training data from validation data to avoid overfitting and data leakage. He also addresses the emerging consumer health AI landscape and offers a practical rule: give the model more context to reduce hallucination, treat it as an analytics tool rather than an inventor, request real source references, and then go see your doctor.
    The episode closes with David's outlook on how verification will shift from constraint to competitive advantage as automated robotic labs begin closing the design-verification loop, reducing the time between AI-proposed candidates and physical synthesis and testing.

    (00:02) Teaser: Human readiness vs. technical readiness in healthcare AI
    (00:38) AI in drug discovery: expanding compound search space
    (01:00) David Finkelshteyn (Pivotal AI): building defensible AI systems
    (02:00) Discovery vs. verification: why validation is critical
    (03:51) AI due diligence in an emerging field
    (04:26) Drug development stages: synthesis to human trials
    (07:08) Novel AI molecules and the verification gap
    (07:48) Validation standards remain unchanged for AI
    (08:20) Faster R&D: compressing timelines with AI
    (09:22) COVID vaccines: early signal of AI acceleration
    (09:56) Black box problem: limits of model explainability
    (11:58) Complexity vs. transparency tradeoff
    (12:08) Explainability gaps in clinical and regulatory settings
    (13:31) Verifying AI outputs: use case, data quality, leakage risks
    (16:22) Missing context in consumer health AI
    (17:33) Responsible use: verify sources, consult clinicians
    (19:55) Incomplete context as a primary sou…

    38 min
