Digital Pathology Podcast

Aleksandra Zuraw, DVM, PhD

Aleksandra Zuraw from Digital Pathology Place discusses digital pathology from the basic concepts to the newest developments, including image analysis and artificial intelligence. She reviews the scientific literature and, together with her guests, discusses current digital pathology trends in industry and research.

  1. 6D AGO

    211: USCAP 2026 - What Real-Life Lab Partnership Looks Like in Digital Pathology with Hamamatsu & Agilent Technologies

    Why do digital pathology projects get harder once the real workflow starts? In this USCAP 2026 conversation, I talk with Robert Moody from Hamamatsu and Jake Eden from Agilent about what the conference theme, MAKING CONNECTIONS, looks like in actual digital pathology implementation. This was not just a conversation about products. It was a conversation about workflow. We talked about why consistent staining matters before scanning, why strong partnerships need a shared vision, and why labs increasingly want a simpler point of contact as they move into digital pathology.

    One point I really liked is that the value of a partnership is no longer just in combining components. It is in reducing complexity for the lab. Robert and Jake explain how vendors increasingly act as guides during digital transformation, helping customers navigate technical decisions, implementation steps, and the many stakeholders involved beyond pathology itself. That includes IT, information security, legal, finance, and lab operations.

    Another key theme is that no two deployments look the same. Some labs are centralized. Some are hub-and-spoke. Some outsource parts of the workflow. That is why future-proofing came up so strongly in this episode. Jake talks about keeping options open with open, agnostic workflows, and Robert makes the practical point that the most expensive thing you can do is the same implementation twice.

    Key highlights
    [00:22] Why this episode moves from high-level partnerships to what they look like in the lab
    [02:33] Why staining consistency matters for successful digital workflows
    [03:14] Shared vision, relationships, and why partnerships start with people
    [05:29] The idea of a single point of contact to reduce complexity for labs
    [08:32] Why vendors have become digital pathology guides
    [10:03] Why every deployment is unique
    [14:22] Future-proofing and choosing open, agnostic workflows
    [15:46] Why doing the same implementation twice is the expensive mistake to avoid

    Support the show
    Get the "Digital Pathology 101" FREE E-book and join us!

    17 min
  2. MAR 27

    210: Why Partnerships Matter in Digital Pathology with Hamamatsu

    Why does digital pathology adoption move faster in some places than others? In this USCAP 2026 conversation, I sat down with Robert Moody and Fumiya Fuji from Hamamatsu to talk about what the conference theme, MAKING CONNECTIONS, really looks like in practice. This was not just a scanner conversation. It was a workflow conversation.

    We talked about why digital pathology has shifted from a scanner-first mindset to a solution-first one, and why that matters for labs trying to build workflows that actually work. Robert explained why partnerships now need to happen earlier, with software, hardware, and execution teams involved from the start. Fumiya added a global perspective, comparing adoption drivers across the US, Japan, Europe, and Canada, and explaining why local support systems, ROI, geography, and government backing can all change the pace of adoption.

    One point I especially liked was this: digital pathology is not one product. It is an ecosystem. And if one component fails, the whole workflow can break down. That is why connected thinking matters so much right now. This episode is really about how companies, labs, and partners are learning to work more like a team.

    Key highlights
    [00:00] Why MAKING CONNECTIONS fits digital pathology so well
    [01:37] Why partnerships matter beyond the scanner
    [04:29] The shift from scanner-first to solution-first
    [04:58] How adoption differs across the US, Japan, Europe, and Canada
    [09:01] Why global collaboration inside Hamamatsu matters
    [10:50] How partnerships move from paper to real-world execution
    [12:55] Why the USCAP show floor reflects a more connected industry
    [14:37] Why the next phase of digital pathology depends on interoperability and connected workflows

    Support the show
    Get the "Digital Pathology 101" FREE E-book and join us!

    16 min
  3. MAR 23

    209: USCAP 2026: Digital Pathology 101 With Hamamatsu

    What makes digital pathology feel so hard to enter, even for smart people already working around it? In this special USCAP conversation, Stephanie Fullerton from Hamamatsu turns the tables and interviews me about Digital Pathology 101, the book I wrote for people who are starting or continuing their digital pathology journey. We talk about why the book is not meant to be an exhaustive manual, but a practical framework. A way to help people see the full picture, ask better questions, and understand how the pieces of digital pathology fit together.

    One of the biggest themes in this conversation is that digital pathology is a team effort. It is not just pathology. It involves scanners, software, image analysis, engineers, vendors, and people who often do not speak the same professional language. That matters because sometimes getting the right answer starts with asking the right question.

    We also talk about the challenge of translating expert knowledge into beginner-friendly language, why vendors often become guides as labs go through digital transformation, and why I think a shared vocabulary can make implementations smoother and more collaborative. Toward the end, we shift into the fun side of USCAP: signed book giveaways, stickers, pins, and ways to make connections at the conference.

    Topics discussed
    [00:03] Why Stephanie interviewed me this time, and the idea behind Digital Pathology 101
    [01:07] What the book is actually for: a framework, not a one-size-fits-all manual
    [04:07] The hardest part of writing for beginners without talking down to them
    [06:26] Why digital pathology implementation feels like a mountain, and how to lower the barrier
    [08:15] Why a shared vocabulary matters in digital pathology teams
    [09:44] Translating between pathologists, engineers, vendors, and marketing
    [11:26] Why vendors and partners often become guides during digital transformation
    [12:33] Who the book is for, including students and early-career professionals
    [13:33] Book signing, giveaways, and where to find me at USCAP
    [19:05] Stickers, pins, and why small things can help start real conversations at conferences

    Resources mentioned
    Digital Pathology 101
    Hamamatsu Booth 312 at #USCAP2026 in San Antonio, Texas
    My histology and microscopy videos on YouTube

    Support the show
    Get the "Digital Pathology 101" FREE E-book and join us!

    14 min
  4. MAR 21

    205: What Makes AI Useful in Pathology Beyond the Demo?

    What happens when AI looks strong in a paper, but the workflow still isn’t ready? In DigiPath Digest #40, I reviewed five recent papers across kidney pathology, oral and maxillofacial pathology, glioma biomarker prediction, digital twins in neuro-oncology, and a major European colorectal cancer cohort. A common theme kept coming back: good performance is not the same thing as real-world readiness.

    We started with kidney biopsies and the challenge of assessing interstitial fibrosis and tubular atrophy, where AI shows promise but still does not fully agree with humans. That led into a bigger point I keep seeing in digital pathology: our “ground truth” is often based on human interpretation, and human interpretation has variability too. From there, I looked at AI in oral and maxillofacial pathology, where the field is still early and one major bottleneck is the lack of strong public datasets. Then I discussed a systematic review on adult-type gliomas showing that multimodal models performed better than unimodal ones, which makes sense when you think about how pathologists actually work: we do not diagnose from one input alone.

    I also covered a systematic review on digital twins in neuro-oncology. The idea is exciting, but the paper makes it clear that reproducibility, public code, multimodal integration, and external validation are still limiting factors. And finally, I talked about a paper I really liked: a large European colorectal cancer cohort built across 26 biobanks in 12 countries. That kind of harmonized, quality-checked dataset matters. A lot. Because better AI starts with better data.

    In this episode, I discuss:
    Why AI vs. human comparisons are harder than they first look
    The “gold standard paradox” in pathology
    Why multimodal AI keeps outperforming unimodal models
    What is holding digital twins back from broader use
    Why curated multicenter datasets are so important for digital pathology research

    Resources mentioned:
    Digital Pathology 101 pdf copy
    Pathology AI Makeover Course
    DigiPath Digest AI-powered paper summaries

    Papers discussed:
    https://pubmed.ncbi.nlm.nih.gov/41830415/
    https://pubmed.ncbi.nlm.nih.gov/41826004/
    https://pubmed.ncbi.nlm.nih.gov/41824546/
    https://pubmed.ncbi.nlm.nih.gov/41823607/
    https://pubmed.ncbi.nlm.nih.gov/41820399/

    Support the show
    Get the "Digital Pathology 101" FREE E-book and join us!
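To make the multimodal point concrete, here is a minimal sketch of "late fusion": one classifier trained on a single data stream, another on concatenated streams. This is my illustration, not code from the glioma review; the feature names, dimensions, and synthetic data are placeholders.

```python
# Minimal late-fusion sketch (illustrative only, not from the reviewed papers).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Pretend these came from separate unimodal encoders (hypothetical shapes):
histology = rng.normal(size=(200, 64))   # e.g., a whole-slide image embedding
clinical = rng.normal(size=(200, 8))     # e.g., age, grade, lab values
labels = rng.integers(0, 2, size=200)    # e.g., a binary biomarker status

# Unimodal baseline: one input stream only.
unimodal = LogisticRegression(max_iter=1000).fit(histology, labels)

# Multimodal model: concatenate the streams, then classify.
fused = np.concatenate([histology, clinical], axis=1)
multimodal = LogisticRegression(max_iter=1000).fit(fused, labels)

print(unimodal.score(histology, labels), multimodal.score(fused, labels))
```

With real paired data, comparing held-out scores of the two models is the quickest check of whether the second modality adds signal; on the random data above, both unsurprisingly hover near chance.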

    33 min
  5. MAR 9

    196: DigiPath Digest #39 - If AI Sees More Than We Do, What Makes It Clinically Trustworthy?

    If AI can detect patterns we cannot see, how do we know when its answers are clinically trustworthy? In DigiPath Digest #39, I explore a big-picture question in digital pathology and medical AI. Many models now match or even exceed human performance in specific diagnostic tasks. But most of that evidence comes from controlled or retrospective datasets. So what happens when we try to bring these tools into real clinical workflows? I review four recent papers that help frame this challenge and point toward the next steps for trustworthy AI in healthcare. You will hear about the role of prospective validation, real-world effectiveness, transparent reporting standards, and multimodal data integration as recurring themes across these studies.

    Key Highlights
    00:00 – Introduction
    What do we do when AI detects signals that humans cannot see? The core challenge is verifying those outputs before trusting them in clinical decision making.
    03:32 – AI Across the Healthcare Continuum
    A narrative review shows AI achieving clinician-level performance in well-defined imaging tasks, including digital pathology. But most evidence comes from retrospective or controlled environments, and prospective validation remains limited.
    08:34 – Multi-Omics and AI in Gastric Biopsy Diagnostics
    Morphology alone cannot fully capture molecular heterogeneity or predict disease progression. Integrating genomics, proteomics, metabolomics, and other omics with AI is shifting gastric pathology toward data-driven precision gastroenterology.
    13:38 – Hyperspectral Imaging for Real-Time Surgical Guidance
    Spectral imaging can analyze tissue composition during surgery without staining, freezing, or contact with the tissue. Studies show promising sensitivity for detecting malignancy and supporting intraoperative decision making.
    17:20 – REFINE Reporting Guideline for Foundation Models and LLMs
    An international consensus guideline introduces a 44-item reporting checklist to standardize how AI studies are described. The goal is transparent, reproducible, and comparable research in medical AI.
    22:35 – Big Takeaway
    AI should be viewed as clinical decision support, not a replacement for clinicians. Real-world validation, ethical governance, and reproducible research standards will determine how these tools enter pathology workflows.

    References (Articles Discussed)
    Artificial Intelligence in Healthcare: From Diagnosis to Rehabilitation – https://pubmed.ncbi.nlm.nih.gov/41755929/
    Transforming Gastric Biopsy Diagnostics: Integrating Omics Technologies and Artificial Intelligence – https://pubmed.ncbi.nlm.nih.gov/41751306/
    From Image-Guided Surgery to Computer-Assisted Real-Time Diagnosis with Hyperspectral and Multispectral Imaging – https://pubmed.ncbi.nlm.nih.gov/41750768/
    REFINE Reporting Guideline for Foundation and Large Language Models in Medical Research – https://pubmed.ncbi.nlm.nih.gov/41762555/

    If you enjoy staying current with digital pathology and AI research, this episode will help you connect the dots between promising algorithms and practical clinical adoption.

    Support the show
    Get the "Digital Pathology 101" FREE E-book and join us!

    27 min
  6. MAR 2

    191: Hallucinations, Agents, and AI in Pathology

    Clinical Artificial Intelligence in 2026: Accuracy, Education, and Guardrails. Artificial intelligence is evolving fast in medicine. But how accurate is it? And are we building it safely? In this episode of DigiPath Digest, I review five new studies shaping digital pathology, radiology, burn diagnostics, and agent-based large language model systems. We discuss accuracy gains, hallucination filtering, education challenges, and why safeguards are essential before clinical deployment. Clear. Practical. Evidence-based.

    ⏱ Topics & Timestamps
    [00:02] Introduction
    Weekly journal club on digital pathology and artificial intelligence.
    [05:13] Hallucination Filtering in Radiology
    Using Discrete Semantic Entropy to detect hallucination-prone responses in Vision Language Models. Accuracy improved from 51.7 percent to 76.3 percent after filtering high-entropy answers.
    [15:04] Artificial Intelligence in Pathology Training
    Supervised use during residency. Balancing artificial intelligence adoption with preservation of morphological analysis and critical thinking.
    [20:12] Colorectal Cancer Lymph Node Detection
    Two-stage classification and segmentation model in Whole Slide Imaging. Recall 1.0. Specificity 0.935. Dice coefficient 0.818. Artificial intelligence as a second opinion.
    [25:04] Burn Depth Prediction with Artificial Intelligence
    Tissue Doppler Elastography and Harmonic B-mode ultrasound combined with artificial intelligence. 90 to 95 percent accuracy in human subjects.
    [31:20] Agent-Based Large Language Model Systems
    OpenManus and Manus evaluated in clinical simulations. Up to 60.3 percent accuracy. High computational cost. 89.9 percent of hallucinations filtered by safeguards.
    [40:08] Patient Access to Pathology Images
    Why viewing pathology slides can empower patients and improve communication.

    Resources
    https://pubmed.ncbi.nlm.nih.gov/41720937/
    https://pubmed.ncbi.nlm.nih.gov/41720644/
    https://pubmed.ncbi.nlm.nih.gov/41716065/
    https://pubmed.ncbi.nlm.nih.gov/41709317/
    https://pubmed.ncbi.nlm.nih.gov/41708802/

    Support the show
    Get the "Digital Pathology 101" FREE E-book and join us!
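For listeners who want to see the filtering idea in code, below is a minimal sketch of entropy-based answer filtering, assuming we can sample several answers per question. It is my illustration, not the paper's method: true Discrete Semantic Entropy clusters answers by meaning (for example, via bidirectional entailment), whereas this stand-in clusters by normalized exact match, and the 0.7 threshold is arbitrary.

```python
# Sketch of entropy-based hallucination filtering (illustrative assumptions:
# exact-match clustering stands in for semantic clustering; threshold is arbitrary).
from collections import Counter
from math import log

def semantic_entropy(sampled_answers: list[str]) -> float:
    """Entropy over answer clusters; higher means the model is less consistent."""
    clusters = Counter(a.strip().lower() for a in sampled_answers)
    total = sum(clusters.values())
    return -sum((n / total) * log(n / total) for n in clusters.values())

def filtered_answer(sampled_answers: list[str], threshold: float = 0.7):
    """Return the majority answer, or abstain when entropy is too high."""
    if semantic_entropy(sampled_answers) > threshold:
        return None  # flag for human review instead of risking a hallucination
    return Counter(a.strip().lower() for a in sampled_answers).most_common(1)[0][0]

# Consistent samples -> answered; inconsistent samples -> abstain.
print(filtered_answer(["pneumothorax"] * 4 + ["pneumonia"]))  # "pneumothorax"
print(filtered_answer(["pneumothorax", "pneumonia", "normal",
                       "effusion", "fracture"]))              # None
```

The accuracy gain reported in the episode comes from exactly this kind of trade: the model answers fewer questions, but the answers it keeps are the consistent, low-entropy ones.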

    30 min
  7. FEB 24

    190: Can a Better Stain Improve AI in Pathology?

    What if one of the biggest sources of diagnostic variability in prostate cancer isn’t the pathologist, but the stain we’ve trusted for decades? In this episode, I speak with Professor Ingrid Carlbom, founder of CADESS.AI, about a different way to approach prostate cancer grading: rethinking staining, segmentation, and AI decision support from the ground up. We explore why 30–40% interobserver variability persists in Gleason grading and how optimized stains combined with explainable AI can significantly reduce that uncertainty.

    Ingrid shares her journey from applied mathematics and computer science into pathology, the skepticism she faced in 2008, and why CADESS.AI chose not to “optimize H&E,” but instead developed a Picrosirius red + hematoxylin stain designed specifically for computational pathology. We discuss how grading at the gland and cellular level improves reproducibility, why explainability matters for trust, and what it really takes to build both stain and software as a single diagnostic workflow. This conversation challenges long-held assumptions and asks whether improving data quality should come before building smarter algorithms.

    Highlights:
    [00:00–01:08] The problem: 30–40% disagreement in prostate cancer grading
    [01:08–03:03] Ingrid’s path from applied math to digital pathology
    [03:03–04:58] Early skepticism toward AI in pathology and fear of replacement
    [04:58–08:56] Why H&E limits segmentation, and how a new stain changes that
    [10:55–15:09] Clinical testing: non-inferiority, AI assistance, and NCCN risk stratification
    [19:47–22:59] Explainable UI: color-coded glands and pathologist override
    [26:16–27:29] Why grading glands (not whole slides) reduces variability
    [38:09–41:47] Regulatory challenges of combined stain + AI devices
    [45:52–48:55] The future of optimized stains in routine pathology

    Resources from This Episode
    CADESS.AI – Prostate cancer decision support system
    NCCN prostate cancer risk stratification guidelines

    Support the show
    Get the "Digital Pathology 101" FREE E-book and join us!

    56 min
  8. FEB 24

    189: Digital Pathology Deployment Decoded: The Rigorous 4-Phase Framework

    Sometimes a paper comes out that’s so practical and relevant to what we do in digital pathology that I know we have to talk about it. In this episode, I dive into “A Guide for the Deployment, Validation and Accreditation of Clinical Digital Pathology Tools” from Geneva University Hospital (HUG), one of the most useful, real-world frameworks I’ve seen for bringing digital pathology tools safely into clinical practice. If you’ve ever built an AI model and wondered, “Now what?”, this episode is for you. Because building the model is often the easy part; deployment is where things get complex.

    This guide breaks the process into four practical phases every lab can follow:
    1️⃣ Pre-Development – Define your clinical need, project scope, and validation plan before writing a single line of code.
    2️⃣ Development – Build and integrate the algorithm in a production-ready environment.
    3️⃣ Validation & Hardening – Turn your research code into a reliable, secure, and compliant clinical tool.
    4️⃣ Production & Monitoring – Keep the tool validated and performing consistently over time.

    We also discuss what makes qualification, validation, and accreditation different, and why that order really matters. You’ll hear about the multidisciplinary team behind these deployments, especially the deployment engineer (DE), the technical linchpin who turns AI research into clinical reality. I share the story of HUG’s H. pylori detection tool, which cut diagnostic time by 26% while maintaining a 0% false negative rate. The team’s secret? Careful planning, quality control, and continuous user feedback, not just great code.

    Other highlights include:
    Why integration often takes longer than building the AI model itself
    How to avoid invalidating your validation data
    What continuous performance monitoring looks like in real labs (sketched after these show notes)
    And why every lab still needs to do local validation, even with proven tools

    If you’re working on digital or computational pathology tools, or just want to understand how AI safely moves from research to routine diagnostics, this episode will give you a roadmap grounded in real experience.

    🎧 Listen now to learn how to move from algorithm to accreditation, step by step. And if you’re just getting started in digital pathology, I’d love to give you my free eBook, Digital Pathology 101: All You Need to Know to Start and Continue Your Digital Pathology Journey. You’ll find the link to download it in the show notes. See you in the episode!

    Support the show
    Get the "Digital Pathology 101" FREE E-book and join us!
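As a flavor of what the Production & Monitoring phase can mean in practice, here is a minimal sketch of a drift check against a validated baseline. It is an illustration under my own assumptions, not code from the HUG guide; the class, metric names, and tolerance are hypothetical.

```python
# Hypothetical continuous-monitoring sketch: compare a rolling window of
# production metrics against the performance locked in at validation.
from dataclasses import dataclass

@dataclass
class ValidatedBaseline:
    sensitivity: float   # e.g., a 0% false-negative target means 1.0 here
    specificity: float
    tolerance: float     # how far production may drift before alerting

def check_drift(baseline: ValidatedBaseline,
                window_sensitivity: float,
                window_specificity: float) -> list[str]:
    """Return a list of alerts; an empty list means the tool still performs."""
    alerts = []
    if window_sensitivity < baseline.sensitivity - baseline.tolerance:
        alerts.append(f"Sensitivity drift: {window_sensitivity:.3f}")
    if window_specificity < baseline.specificity - baseline.tolerance:
        alerts.append(f"Specificity drift: {window_specificity:.3f}")
    return alerts

baseline = ValidatedBaseline(sensitivity=1.0, specificity=0.95, tolerance=0.02)
print(check_drift(baseline, window_sensitivity=0.97, window_specificity=0.96))
# -> ['Sensitivity drift: 0.970']; escalate per the lab's QC procedure
```

The point of the sketch is the workflow, not the numbers: the baseline is frozen at validation, production performance is re-measured continuously, and any breach triggers the lab's quality process rather than silent degradation.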

    23 min

