Healthcare AI isn’t a tech problem—it’s a mirror reflecting how our health system already fails. Uncomfortable truths from Datapalooza 2025.
Summary
We’re asking the wrong questions about AI in healthcare. Instead of debating whether it’s good or bad, we need to examine the system-eating-its-tail contradictions we’ve created: locking away vital data so AI learns from everything except what matters most, demanding transparency from inherently secretive companies, and fearing tools could make us lazy instead of more capable. Privacy teams protect data, tech companies build tools, regulators write rules—everyone’s doing their part, but no one steps back to see the whole dysfunctional picture. AI in healthcare isn’t a technology problem; it’s a mirror reflecting how our health system already falls short with privacy rules that hinder progress, design processes that exclude patients, and institutions that fear transparency more than mediocrity. The real question is whether we’re brave enough to fix these underlying problems that AI makes impossible to ignore.
Click here to view the printable newsletter with images. It's more readable than the transcript, which also appears below.
Please comment and ask questions:
- at the comment section at the bottom of the show notes
- on LinkedIn
- via email
- YouTube channel
- DM @healthhats on Instagram or TikTok
Production Team
- Kayla Nelson: Web and Social Media Coach, Dissemination, Help Desk
- Leon van Leeuwen: editing and site management
- Oscar van Leeuwen: video editing
- Julia Higgins: Digital marketing therapy
- Steve Heatherington: Help Desk and podcast production counseling
- Joey van Leeuwen, Drummer, Composer, and Arranger, provided the music for the intro, outro, proem, and reflection
- Claude, Perplexity, Auphonic, Descript, Grammarly, DaVinci
Podcast episode on YouTube
Inspired by and Grateful to:
Christine Von Raesfeld, Mike Mittleman, Ame Sanders, Mark Hochgesang, Kathy Cocks, Eric Kettering, Steve Labkoff, Laura Marcial, Amy Price, Eric Pinaud, Emily Hadley.
Links and references
Academy Health’s Datapalooza 2025 Innovation Unfiltered: Evidence, Value, and the Real-World Journey of Transforming Health Care
Tableau, a visual analytics platform
Practical AI in Healthcare podcast hosted by Steven Labkoff, MD
Episode
Proem
Here’s the thing about AI in healthcare—it’s like that friend who offers to help you move, then shows up with a sports car. They meant well, but it doesn’t quite meet your actual needs. I spent September 5th at Academy Health’s 2025 Datapalooza conference about AI in healthcare, ‘Innovation Unfiltered: Evidence, Value, and the Real-World Journey of Transforming Health Care.’ Datapalooza is Academy Health’s strongest conference for people with lived experience. I’m grateful to Academy Health for providing me with a press pass, which enabled me to attend the conference.
I talked to attendees about how they use AI in their work and what keeps them up at night about AI. I recorded some of those conversations and the panels I attended. When I listened to the raw footage, I heard terrible recordings filled with crowd noise and loud table chatter, like dirty water spraying out of a firehose. Aghast, I thought, what is the story here? I was stumped. How can I make sense of this? I had to deliver something.
So, here’s how I use AI in my work as a podcaster/vlogger. I used the Auphonic app to clean up the audio and remove noise, and then the Descript app to create transcripts of all the recordings. I went into my Claude podcast Project (a Project is an ongoing thread with everything I’ve done with Claude for my podcast over the past three months). I attached the transcripts and prompted the AI platform to identify themes. OK, that was helpful, but dull. So, I prompted Claude to think like a tech-savvy teen with a sense of humor. Eureka! Now we’re getting somewhere. I edited heavily and then prompted Claude to identify clips of speakers that illustrated the themes. I used the Perplexity app for research. Finally, I did the last written edit with a polish from the Grammarly app.
For audio, I returned to the Descript app, found the recommended clips, and extracted them. Then I recorded a video of myself, again using Descript. I edited the video compilation with the DaVinci app. I should give production credit to Auphonic, Claude, Descript, Grammarly, Perplexity, and DaVinci.
Paradox, Irony, Catch 22
Datapalooza 2025 showcased the health and care industry’s intense focus on Artificial Intelligence, whatever that means. My podcast acts as a Rosetta Stone to share the excitement of what I learn and deem important in my journey toward best health. How can we use AI safely? Let’s jump in with some lessons I learned.
Burying the Treasure to Keep It Safe
There’s a Data Privacy Paradox. The very health data that could benefit most from AI faces the most restrictions. Sushmita Macheri works with Medicare/Medicaid data—information about some of our most vulnerable populations—but can’t use AI to identify errors that could improve their care. Meanwhile, commercial entities are freely training AI on whatever data they can scrape. Therefore, the most sensitive and valuable healthcare data remains locked away while AI trains on potentially biased and unrepresentative information.
Sushmita Macheri: I work with healthcare, Medicare, and Medicaid data. I would like to upload the data so I can understand what errors I’m getting, but I’m unable to do that due to the restrictions we have at work. So, if I were able to upload one, let’s say, like a file that I am having errors with.
Health Hats: So, what kind of errors, like missing data, what are the errors that you notice?
Sushmita Macheri: I work with Tableau, mostly. Sometimes, if I’m having issues with a calculated field, I would like to upload that calculated field or the logic behind it in the calculator to try to understand what the error is, but I’m unable to do so. For me, it’s the biggest challenge.
Bias, Treating the Chart, Not the Patient
Bob Stevens points out a harsh irony: AI makes decisions about patients while being trained on data that intentionally excludes patient perspectives. The people most affected by AI decisions had the least input in training the systems. It’s like having a medical advisory board that leaves out doctors and patients, then questioning why the recommendations fail.
Bob Stevens: I am concerned about bias, as I mentioned, and that really worries me for two reasons. First, AI uses all available content, and as patients, we know that patient perspective content has not been well represented. Now, as AI starts making decisions based on this, all the content it has is just what’s available. It’s gathering it all. We haven’t been well represented in that process. So, it’s going to stay biased, right? Without patient information and the patient perspective, that creates a bias.
Bob Stevens: The second type of bias is related to how it’s designed. It’s not neutral just because it’s a technology, even while they’re asking for patient input. There’s also bias in the design process because of who is doing the designing. So, you have two levels. One can be considered intentional, but the other is the accumulation of all this data that we’re not represented in and haven’t been represented in. And how do we change that? The incremental change in the AI dataset is expected to take decades. What bothers me is that we are now relying on AI to assign a label that can then trigger a response or action.
Bob Stevens: That’s a high-risk moment, asking AI to make a decision that’s inherently high-risk. So what AI should always do is say: here’s what I see; now consider this when going in. And that brings us to the second part of a PCORnet study that I was involved in, which focused on the ER.
Information
- Frequency: Updated bimonthly
- Published: October 6, 2025 at 12:26 PM UTC
- Length: 23 min
- Rating: Explicit
