The Skill Is the New Workflow

Clinical AI won’t scale through better models. It will scale through better instructions.

Interested in deploying clinical AI for your practice, value-based care organization, or health system? RevelAi Health partners with clinics and health systems to build AI workflows for CMS models (TEAM, ASM, ACCESS), care coordination, and clinical operations. We bring the software, the clinical expertise, and the AI-fluent staff to deliver outcomes, not just tools. Schedule a demo or reach us directly at hello@revelaihealth.com.

A billboard on Market Street in San Francisco advertises “skills,” the hot new paradigm in AI development. Walk three blocks in any direction and you’ll find someone who can explain, in considerable detail, what a skill is, why it matters, and which framework implements it best. Fly to any hospital in the country and ask the same question. You’ll get a blank stare.

This gap between what AI can do and what healthcare is doing with it has become the defining tension of clinical AI’s current era. The models are smart. Sixty-six percent of physicians now report using health AI tools, a 78% increase from 2023. Billions have been invested. Yet nearly four years after the ChatGPT moment, most large health systems still haven’t deployed a single patient-facing AI application beyond ambient documentation.

The question is why. And the answer, increasingly, points not to the intelligence of the models but to the architecture around them: the instructions, the context, the workflows that translate raw capability into clinical utility.

If you like deep dives on clinical AI and health policy, consider becoming a free or paid subscriber to Techy Surgeon!
The Context Problem Nobody Wants to Admit

This past weekend, my co-founder Hadi and I sat down for what we’ve been calling Founders Coffee, a live conversation on Substack about what we’re seeing in clinical AI: what’s working, what isn’t, and what comes next. Hadi brings a particular vantage point: before we started RevelAi Health together, he was one of the earliest applied AI engineers at Capital One, building voice AI for banking back in 2016, when the technology was, as he puts it, “not that cool and less practical.”

The lessons from that era are uncomfortably relevant now. At Capital One, text-based chatbots found product-market fit. Voice did not. It got dates of birth wrong. It misread credit card numbers. And the core insight that emerged, one that the current wave of healthcare AI companies would do well to internalize, was deceptively simple: people hate chatbots. Not because the technology is bad, but because it fails to deliver unique value. Empathy for the sake of empathy, as Hadi noted, does not work. People engage with AI when it solves their problem. They disengage quickly, permanently, when it doesn’t.

“People only would chat to a chatbot if it solves their needs,” Hadi said. “As long as the chatbot is not providing unique value, it does not work.”

This observation lands differently in 2026 than it would have in 2016. Today, the models are dramatically more capable. But capability without context is just expensive latency. And in healthcare, the context lives behind a walled garden. The data gravity (the patient charts, the encounter histories, the medication lists, the imaging orders) sits in electronic health records. Epic. Cerner. Athena. And without that context flowing securely into AI systems, even the most sophisticated models are left prompting in the dark.
As one survey found, hospitals on Epic had roughly 90% AI usage, while those on smaller EHR platforms averaged just 50%, a disparity that reveals how tightly AI adoption is coupled to infrastructure access.

“AI is not the bottleneck,” Hadi argued. “It’s the context that’s the bottleneck right now. Models are pretty smart. But if you cannot get patient chart information securely into AI, you can do only enough.”

What an AI “Skill” Means for Healthcare

A skill, in this context, is a structured set of instructions that teaches AI how to perform a specific task when triggered by specific conditions. Think of it less as a prompt and more as a protocol manual for a very capable but context-dependent assistant.

A prompt says: summarize this note. A skill says: whenever a patient mentions diabetes in an encounter, trigger a downstream workflow. Draft dietary counseling documentation for the staff. Generate a glucose monitoring plan. Prepare a patient-facing message at an appropriate reading level. Format all outputs according to this template. Ground clinical recommendations in these evidence-based guidelines.

Hadi framed the clinical application nicely: “Healthcare workflows are very if-then-else logic. If BMI is 30, do this. If they have diabetes, go on this path. And traditionally with software systems, it was so hard to scale healthcare because who’s going to build this if-then-else logic? You’re going to rely on your dev team or maybe Epic consultants, and that takes forever.”

Skills collapse that timeline. They translate clinical protocols (the ones that live in binders, in the heads of experienced nurses, in institutional memory that evaporates with staff turnover) into executable AI instructions. And critically, they can be built by clinicians, not engineers. You describe your workflow conversationally. The AI interviews you, iterates, and produces the skill. You test it against real examples and refine.
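As a minimal sketch of that shift, the if-then-else protocol logic Hadi describes can live as data (a trigger condition plus ordered steps) rather than as hand-coded software branches. All names below are hypothetical illustrations, not RevelAi’s actual implementation:

```python
from dataclasses import dataclass, field

@dataclass
class Skill:
    """A skill: a trigger condition plus an ordered protocol of steps."""
    name: str
    trigger: callable                       # when to fire, given an encounter
    steps: list = field(default_factory=list)  # instructions handed to the model

# Hypothetical skill mirroring the diabetes example in the text
diabetes_skill = Skill(
    name="diabetes-encounter",
    trigger=lambda enc: "diabetes" in enc.get("conditions", []),
    steps=[
        "Draft dietary counseling documentation for staff",
        "Generate a glucose monitoring plan",
        "Prepare a patient-facing message at an appropriate reading level",
        "Format all outputs per the clinic template",
        "Ground recommendations in evidence-based guidelines",
    ],
)

def run_skills(encounter, skills):
    """Collect the protocol steps of every skill whose trigger matches."""
    fired = []
    for skill in skills:
        if skill.trigger(encounter):
            fired.extend(skill.steps)
    # The classic hard-coded branch, shown for contrast
    if encounter.get("bmi", 0) >= 30:
        fired.append("Flag for weight-management pathway")
    return fired

encounter = {"conditions": ["diabetes"], "bmi": 31}
for step in run_skills(encounter, [diabetes_skill]):
    print(step)
```

The point of the shape: adding a new clinical pathway means adding another `Skill` record, which a clinician can describe conversationally, rather than another branch a dev team or Epic consultant has to ship.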
Looking to understand Claude’s skills better and see real-life examples? Check out this article below on meta-prompting (full article with in-depth walkthrough).

Consider the practical applications that emerged from our conversation: a pre-clinic screening skill that reviews a panel of patients before Monday morning, flags missing imaging orders, and surfaces relevant history in a style you specify. A prior authorization appeal skill that ingests a denial letter and produces a structured response matching the format that has historically succeeded with a specific payer. An independent medical examination skill that parses 6,000 pages of records into a timeline of treatment, imaging, and interventions, work that currently requires hours of manual review or a dedicated team.

These aren’t hypothetical. We’re building and deploying versions of these at RevelAi Health right now, integrated with EHR data through FHIR resources, with the clinical team able to customize and test skills through a user interface rather than filing engineering tickets.

The Compliance Reckoning

There’s another thread from our conversation worth pulling. Earlier this month, allegations surfaced that Delve, a Y Combinator-backed compliance startup that had raised $32 million, had generated 494 fabricated SOC 2 Type II reports for its clients. The reports were reportedly 99.8% identical boilerplate, with pre-written auditor conclusions filed before companies even submitted their evidence. The auditors Delve marketed as “US-based CPA firms” were traced to offshore operations using virtual addresses. The revelation emerged, almost poetically, because someone left a Google spreadsheet open to the internet.

For health tech, this extends beyond a compliance scandal to become an ecosystem problem. Hundreds of companies, including health tech startups handling protected health information, may now hold invalid security certifications.
The ripple effects will tighten an already rigorous procurement environment at a moment when health system CIOs were only beginning to open the door to smaller vendors.

“You can’t outsource security responsibility,” Hadi said. “If someone is trusting you with their patient data, you have a huge responsibility to protect it. Security and compliance is not a cost center. It’s the most important foundational thing you have to do.”

We felt the FOMO ourselves at RevelAi. We went through Vanta, checked every box, invested heavily in governance, and watched competitors claim they completed SOC 2 Type II in three weeks. The temptation to move faster was real. But in healthcare, the “move fast and break things” mantra will also break your company. We’ve watched it happen. Babylon, once valued at $4.2 billion, collapsed in 2023. Olive AI, valued at $4 billion, shut down the same year. The outward appearance of success, it turns out, is often inversely correlated with the rigor underneath.

Curious about the tools that I use to put together Techy Surgeon and leverage AI to improve my personal productivity? Check out my article below: The Clinician Founder’s AI Stack.

Where the Bridges Are Being Built

Not everything is stalled. The interoperability landscape is shifting, unevenly but meaningfully. Athena has emerged as an unlikely leader. At HIMSS 2026, the company previewed an industry-first Model Context Protocol server, infrastructure that allows AI agents to securely access patient chart data in real time. They’re building athenaConnect, an intelligent interoperability layer connecting 170,000 providers serving 20% of the U.S. population.

This matters enormously. Model Context Protocol (MCP) is what makes skills practical at scale. It’s the plumbing that lets an AI agent not just follow instructions but access the clinical context those instructions require.
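To make the “plumbing” idea concrete, here is a stdlib-only conceptual sketch, not the actual MCP protocol or SDK: real MCP servers speak JSON-RPC over stdio or HTTP, and every name below (`get_patient_chart`, the registry, the sample chart) is invented for illustration. The core idea is simply a server exposing named tools that return clinical context on request, so the model is never pasted raw text:

```python
import json

# Registry of named tools an agent is allowed to call
TOOLS = {}

def tool(name):
    """Decorator that registers a function as a callable tool."""
    def register(fn):
        TOOLS[name] = fn
        return fn
    return register

@tool("get_patient_chart")
def get_patient_chart(patient_id: str) -> dict:
    # Stand-in for a secure EHR/FHIR lookup behind the hospital firewall
    charts = {"pt-001": {"name": "Test Patient", "conditions": ["diabetes"]}}
    return charts.get(patient_id, {})

def handle_request(raw: str) -> str:
    """Dispatch an agent's tool call and return the context as JSON."""
    req = json.loads(raw)
    result = TOOLS[req["tool"]](**req["arguments"])
    return json.dumps({"result": result})

print(handle_request(
    '{"tool": "get_patient_chart", "arguments": {"patient_id": "pt-001"}}'
))
```

In a real deployment, the tool body is where the security and compliance work lives: authentication, audit logging, and scoping what chart data the agent may see. The protocol just standardizes the request/response shape so any agent can use any server.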
When Hadi built a FHIR integration with Cerner’s proprietary APIs, it took him one hour using skill-based development. Previously, that work took two weeks. That’s the offline version: engineers using skills to accelerate their own code. The online version, where skills execute in real time against live patient data, is coming but isn’t yet in production. Anthropic, notably, has published a FHIR skill on their marketplace, the