Prove It

Paul Wicks

The show where digital health leaders stop hand-waving, start evidence-building, and reveal how extraordinary claims become extraordinary results. Each episode I’ll be joined by someone who’s had their ideas tested in the fire of peer review, regulators, and the real world.

Episodes

  1. 5 days ago

    Hugh Harvey on Radiology AI Startups, Regulation, Evidence & the Only Way to Win in Health Tech

    In 2015 Geoffrey Hinton said radiologists would lose their jobs to AI within five years. It's 2026 and we're still short of radiologists. Dr. Hugh Harvey has a few thoughts on that. Paul Wicks sits down with Hugh (radiologist, ex-Babylon Health, developer of Europe's first CE-marked deep learning mammography device, and now regulatory consultant to over 150 medical device companies) for a masterclass on why most healthcare AI companies fail, and what the ones that survive actually do differently.

    In this conversation they cover:
    🏥 Why AI didn't replace radiologists, and what that really tells us about healthcare AI
    📋 How software gets classified as a medical device (and why start-ups always try to game it)
    🚩 "Class one washing": the shortcut that makes your product less useful AND gets you caught
    🤖 AI-generated fake regulatory documentation: it's happening, and it's worse than you think
    💰 The health economics gap: why regulatory clearance is just the entry ticket, not the prize
    ⚠️ Digital thalidomide: why a small error rate at scale is an entirely different category of harm
    🏆 What the companies that actually make it do from day one

    Plus: why there's no such thing as vibe-coding your way through medical device regulation.

    🎙️ Host: Paul Wicks | ProofStack Health
    🏢 Guest: Dr. Hugh Harvey, Regulatory Consultant, Hardian Health
    🔗 Hardian Summit, April 29th, BMA House, London: visit Hardian's website for tickets
    🔗 Learn more: https://www.proofstackhealth.com
    📬 Subscribe to Proof Points for evidence frameworks built for digital health teams

    00:00 Introduction: The Magic Trick That Doesn't Work at Home
    02:00 Why Radiologists Still Have Jobs in 2026 (And What That Tells Us About All of Healthcare AI)
    08:00 From Babylon Health to Europe's First CE-Marked Deep Learning Device: Hugh's Journey
    16:00 How Software Gets Regulated as a Medical Device (Explained Simply)
    26:00 Class One Washing: The Shortcut That Backfires
    36:00 AI-Generated Fake Regulatory Documentation: A Growing Real Problem
    42:00 The Health Economics Gap: Why Regulatory Clearance Is Just the Entry Ticket
    45:00 Digital Thalidomide: Why Scale Changes Everything About Harm

    50 min
  2. March 31

    Can a Chatbot Really Care? Woebot's Founder on Therapeutic Bonds, CBT, and What LLMs Still Can't Do

    Before mental health chatbots were a trend, Dr. Ali Darcy was already running randomised controlled trials on one. In 2016, she and her collaborators at Stanford conducted what is widely regarded as the first ever formal RCT of a conversational AI for mental health. The paper has since been cited over 2,000 times. Her creation, Woebot, became one of the most studied and recognised digital mental health tools in the world. In this episode of Prove It, host Paul Wicks sits down with Ali for a wide-ranging, deeply honest conversation about what the science of AI mental health actually shows, and what it doesn't. They cover the engagement crisis in digital CBT, why humour is a legitimate and underused therapeutic tool, the surprising finding that users form a meaningful bond with a fictional robot character, and the critical question the field still hasn't answered: what exactly is the active ingredient of the human therapist? They also dig into the limits of large language models in mental health, why doubling NLP accuracy didn't improve outcomes, why "rushing in to help" is therapy's number one failure mode, and why role clarity may be the most important safety issue in AI mental health right now. This is a conversation between two researchers who refuse to hand-wave. If you care about digital mental health done right, this one is for your library. 
    Topics covered:
    The 2016 Stanford RCT: design choices that still hold up nearly a decade later
    Engagement as the primary unsolved problem in mental health tech
    The therapeutic alliance paper: 37,000 users, 200x larger than the entire existing literature
    What Woebot got right by being transparently fictional rather than pretending to be human
    Why LLMs can generate profound-sounding insights that users forget within 24 hours
    The Goldilocks zone of evidence: how much is enough, and when does more actually hurt you?
    Role confusion as a safety crisis: why being a companion and a therapist is clinically dangerous

    🎙️ Host: Paul Wicks | ProofStack Health
    🏢 Guest: Dr. Ali Darcy, Founder, Woebot Health
    🔗 Learn more: www.proofstack.health
    📬 Subscribe to Proof Points, a biweekly newsletter on evidence that makes digital health work: proofpoints.proofstack.health

    46 min
  3. March 19

    15 Years of Medical Rigour Over Hype: ADA Health's Daniel Nathrath, Building AI That Actually Works

    Digital health is littered with extraordinary claims that never collide with evidence. ADA Health is one of the rare exceptions. In this episode of Prove It, host Paul Wicks sits down with Daniel Nathrath, founder and CEO of ADA Health, to unpack what it actually takes to build medical-grade AI over 15 years without cutting corners on safety, rigour, or evidence. Daniel opens up about why he, a lawyer by training with a background in consumer internet, ended up building one of the world's most clinically validated symptom assessment platforms. From the early decision to build for doctors first, to the hard lesson that healthcare is nothing like e-commerce, to the moment his Berlin office building became global news when a giant aquarium exploded in the lobby, this is a candid conversation about what building carefully actually looks like when the stakes are patient safety.

    They cover:
    Why ADA chose to build a curated medical knowledge base over a pure LLM, and why that's now its biggest competitive advantage
    How ADA eliminates hallucinations entirely, and why pharmaceutical partners won't work with companies that can't say the same
    The "patient finder" model: shortening the diagnostic odyssey for rare disease patients from 7–10 years to something far more humane
    What 40 million assessments, a Class IIa medical device certification, and 20+ peer-reviewed papers actually mean in practice
    The real conversations Daniel plans to have at JP Morgan Healthcare, and why early patient identification is the future of pharma

    If you care about what trusted, medical-grade AI at scale actually looks like, this is the episode to study.

    43 min
  4. February 26

    The Secret Editor: The Brutal Truth About Peer Review, Profit, and “Bonkers” Publishing

    Medical publishing looks objective from the outside, but behind the curtain it’s a high-pressure, incentive-driven system where great science can get buried and mediocre work can slip through. In this special Prove It episode, Paul Wicks interviews a senior journal editor who remains anonymous (“The Secret Editor”) and explains how papers really move through the sausage factory: desk decisions, editorial boards, associate editors, peer review overload, and the surprisingly human judgments that shape what gets published. They unpack the profit engine that reshaped scientific publishing, why “read the journal guidelines” is the #1 rule for getting accepted, and how the explosion in submissions has turned peer review into an arms race, ripe for gaming and burnout. Finally, they tackle AI’s impact on submissions and publishing workflows, and why the deeper fix may be changing academic incentives, not just blaming publishers.

    00:00 Meet the “Secret Editor”
    02:14 The Dirty Secret: Scientists Keep Handing Publishers Power
    04:37 How One Deal (Pergamon Press) Helped Create Today’s Publishing Machine
    06:39 From Conferences to Peer Review: Why Journals Became the Battleground
    08:57 #1 Rule to Get Published: Read the F!*%$ing Guidelines
    12:51 The Submission Game: Acceptance Rates, Timelines, and “Cascade” Journals
    17:33 Desk Rejections, Human Bias, and the 60-Manuscript Speed-Run
    21:10 Open vs Blind vs Double-Blind Peer Review (And Why None Is Perfect)
    24:44 How Bad Actors Game Peer Review, and Why Paying Reviewers Gets Messy
    31:31 “Where Is My Paper?” Inside the Nightmare Workflow (20 Queues, 250 Manuscripts)
    36:12 AI in Publishing: Some Editors Embrace It… Others Reject Anything “Near AI”
    43:49 Blow It Up or Fix Academia? The Real Incentives Behind the Chaos

    46 min
