AI is advancing rapidly in healthcare, but speed without governance can create serious clinical, legal, and enterprise-level risks. In this episode, Peter Bonis, MD, Chief Medical Officer at Wolters Kluwer Health, explores how his team is developing reliable, evidence-based AI tools to support frontline clinicians without compromising trust or safety. He explains why clinicians are adopting AI faster than health systems can govern it, creating risks like the rise of “shadow AI.” Peter highlights the importance of transparency, human oversight, and trusted source material in ensuring safe clinical decision support. He also discusses how even experts can be influenced by faulty AI output and why governance models must evolve with frontline input to keep pace. Tune in and learn how healthcare leaders can approach AI governance more responsibly while still supporting innovation at the point of care.

About Peter Bonis:
Peter A.L. Bonis, MD, is Chief Medical Officer at Wolters Kluwer Health, where he serves on the executive leadership team and oversees content, informatics, and industry partnerships. He was also part of the founding group at UpToDate, which grew into one of the world’s leading evidence-based clinical knowledge resources. Dr. Bonis is a physician entrepreneur and academic leader with more than 25 years of experience in healthcare. At Wolters Kluwer Health, he leads teams spanning physicians, pharmacists, engineers, informaticists, and business development professionals focused on clinical content, drug information, and patient engagement solutions. He is also an adjunct professor of medicine in the Division of Gastroenterology at Tufts University School of Medicine and previously served on the faculty at Yale University School of Medicine.
His background also includes healthcare investing, advisory work with venture capital and private equity firms, more than 70 published research articles, and the co-founding of the UpToDate donations program in partnership with Ariadne Labs to expand access in low- and middle-income countries. He earned his AB from Harvard University and his MD from New York University.

Things You’ll Learn:
- AI in healthcare must be grounded in trusted evidence, transparency, and human oversight to be safe for clinical use.
- Shadow AI is already creating enterprise-level risk as clinicians and staff adopt unvetted tools outside formal governance structures.
- Even experienced specialists may not reliably detect faulty AI-generated clinical information when it appears credible.
- Healthcare leaders need governance models that focus on safety, privacy, compliance, and real clinical value rather than hype.
- Effective AI governance must be iterative and shaped by input from frontline clinicians who understand how these tools affect care delivery.

Resources:
- Connect with and follow Dr. Peter Bonis on LinkedIn!
- Follow Wolters Kluwer Health on LinkedIn and visit their website.