Dev and Doc: AI For Healthcare Podcast


Bringing doctors and developers together to unlock the potential of AI in healthcare. Together, we can build models that matter. 🤖👨🏻‍⚕️

Hello! We are Dev & Doc, Zeljko and Josh :) Josh is a Neurologist in training in the NHS and an AI researcher at St Thomas' Hospital and King's College Hospital. Zeljko is an AI engineer and post-doctoral researcher at King's College London, as well as CTO of a natural language processing company.

Substack - https://aiforhealthcare.substack.com/
YouTube - https://youtube.com/@DevAndDoc

  1. 20 SEPT

    #23 Can OpenAI's GPT o1 solve complex medical problems?

    First Thoughts and Preliminary Insights into OpenAI's GPT o1 "Strawberry" in the Medical Domain. With some expected and unexpected findings, we have a "bake off" between o1 and Doc to demonstrate how o1 fares with tricky medical scenarios.

    Disclaimer: Obviously, don't use AI to diagnose or treat your medical problems. If you are unwell, please seek a medical professional (AI isn't good enough just yet :)).

    👋 Hey! If you are enjoying our conversations, reach out, share your thoughts and journey with us. Don't forget to subscribe whilst you're here :)

    Contributors
    • 👨🏻‍⚕️ Doc - Dr. Joshua Au Yeung - https://www.linkedin.com/in/dr-joshua-auyeung/
    • 🤖 Dev - Zeljko Kraljevic - https://twitter.com/zeljkokr

    Follow Us
    • https://www.linkedin.com/build-relation/newsletter-follow?entityUrn=7216474068085026817
    • https://youtube.com/@DevAndDoc
    • https://podcasters.spotify.com/pod/show/devanddoc
    • https://podcasts.apple.com/gb/podcast/dev-and-doc-ai-for-healthcare-podcast/id1751495120
    • https://aiforhealthcare.substack.com/

    For enquiries - 📧 Devanddoc@gmail.com

    Team
    • 🎞️ Editor - Dragan Kraljević - https://www.instagram.com/dragan_kraljevic/
    • 🎨 Brand Design and Art Direction - Ana Grigorovici - https://www.behance.net/anagrigorovici027d

    Timestamps
    • 00:00 - Start + Highlights
    • 01:28 - Intro, What is GPT o1?
    • 05:18 - What is "Reasoning" in o1?
    • 12:38 - Benchmarks: o1's Successes and Failures
    • 24:07 - o1 and Doctor Bake Off!
    • 24:21 - The Pregnancy Acid Test for LLMs
    • 26:23 - Clinical Coding
    • 30:06 - Tricky Patient Scenarios
    • 32:25 - Opioid Dose Conversions

    40 min
  2. 15 AUG

    #22 Explaining Explainable AI (for healthcare) with Dr Annabelle Painter (RSM digital health section Podcast)

    Dev and Doc are joined by guest Dr Annabelle Painter - doctor, CMO, and podcaster for the Royal Society of Medicine Digital Health Podcast. We deep dive into explainability and interpretability with concrete healthcare examples.

    Check out Dr Painter's podcast here - https://spotify.link/pzSgxmpD5yb - she has some amazing guests and great insights into AI in healthcare!

    👋 Hey! If you are enjoying our conversations, reach out, share your thoughts and journey with us. Don't forget to subscribe whilst you're here :)

    👨🏻‍⚕️ Doc - Dr. Joshua Au Yeung - https://www.linkedin.com/in/dr-joshua-auyeung/
    🤖 Dev - Zeljko Kraljevic - https://twitter.com/zeljkokr

    Follow us: LinkedIn Newsletter • YouTube Channel • Spotify • Apple Podcasts • Substack
    For enquiries - 📧 Devanddoc@gmail.com
    🎞️ Editor - Dragan Kraljević - https://www.instagram.com/dragan_kraljevic/
    🎨 Brand design and art direction - Ana Grigorovici - https://www.behance.net/anagrigorovici027d

    Timestamps
    • 00:00 - Start + highlights
    • 03:47 - Intro
    • 08:16 - Does all AI in healthcare need to be explainable?
    • 15:56 - History and explanation of explainable/interpretable AI
    • 20:43 - Gradient-based saliency and heat maps
    • 24:14 - LIME - Local Interpretable Model-agnostic Explanations
    • 30:09 - Nonsensical correlations - when explainability goes wrong
    • 33:57 - Modern explainability - Anthropic
    • 37:15 - Comparing LLMs with the human brain
    • 40:02 - Clinician-AI interaction
    • 47:11 - Where is this all going? Aligning models to ground truth and teaching them to say "I don't know"

    References
    • Fun examples of when models go wrong - nonsensical correlations
    • Mechanistic interpretability
    • Anthropic - Mapping the mind of language models
    • Limitations of current AI explainability approaches
    • Explainability does not improve automation bias in radiologists

    59 min
  3. 2 AUG

    #21 Foundational Models in Digital Pathology: Enhancing Cancer detection and outcomes

    An explainer on foundation models for pathology. From Microsoft's Gigapath to Owkin's H-optimus-0, every company, big or small, is building pathology AI models. In this episode, Doc talks to Sean M. Hacking, assistant professor in Pathology at NYU Grossman School of Medicine, and Özgür Şahin, a particle physicist at CERN. Together, they are building the digital pathology infrastructure that enables the training of pathology foundation models. Find out more at https://www.pathonn.com/.

    👋 Hey! If you are enjoying our conversations, reach out, share your thoughts and journey with us. Don't forget to subscribe whilst you're here :)

    Follow Us
    • https://www.linkedin.com/build-relation/newsletter-follow?entityUrn=7216474068085026817
    • https://youtube.com/@DevAndDoc
    • https://podcasters.spotify.com/pod/show/devanddoc
    • https://podcasts.apple.com/gb/podcast/dev-and-doc-ai-for-healthcare-podcast/id1751495120
    • https://aiforhealthcare.substack.com/

    👨🏻‍⚕️ Doc - https://www.linkedin.com/in/dr-joshua-auyeung/
    🤖 Dev - https://twitter.com/zeljkokr
    🎞️ Editor - https://www.instagram.com/dragan_kraljevic/
    🎨 Brand design and art direction - https://www.behance.net/anagrigorovici027d

    Timestamps
    • 00:00 - Introduction
    • 03:28 - Why pathology?
    • 06:42 - Transporting slides is a logistical nightmare
    • 13:20 - When particle physics and AI pathology collide
    • 17:55 - AI digital pathology - patch-based architecture and sparse topologies
    • 27:09 - Is there enough pathology data?
    • 29:11 - Microsoft and Gigapath, transformer models for pathology
    • 33:55 - Clinical applications of pathology models
    • 43:18 - Staining applications of AI
    • 49:22 - Building a digital pathology startup - Patho-NN
    • 57:36 - Using AI to see tumor grading features that humans can’t see

    References
    • https://www.nature.com/articles/s41586-024-07441-w
    • https://www.microsoft.com/en-us/research/blog/gigapath-whole-slide-foundation-model-for-digital-pathology/
    • https://www.nature.com/articles/s41379-021-00919-2

    1h 2m
  4. 4 JULY

    #19 Tracking health with technology and AI - demystifying digital biomarkers

    Dev and Doc deconstruct digital biomarkers! This is a fascinating and nascent field in medicine: how have biomarkers transformed the way we practice medicine, and how will AI, wearables, sensors and digital fingerprints transform the way we practice in the future?

    👋 Hey! If you are enjoying our conversations, reach out, share your thoughts and journey with us. Don't forget to subscribe whilst you're here :) Find us on YouTube - https://youtube.com/@DevAndDoc

    📙 Substack - https://aiforhealthcare.substack.com/
    👨🏻‍⚕️ Doc - Dr. Joshua Au Yeung - https://www.linkedin.com/in/dr-joshua-auyeung/
    🤖 Dev - Zeljko Kraljevic - https://twitter.com/zeljkokr
    🎞️ Editor - Dragan Kraljević - https://www.instagram.com/dragan_kraljevic/
    🎨 Brand design and art direction - Ana Grigorovici - https://www.behance.net/anagrigorovici027d

    Timestamps
    • 00:00 - Highlights
    • 01:50 - Intro
    • 02:40 - How biomarkers evolved in the last century
    • 06:02 - What is the definition of a biomarker?
    • 10:00 - Biomarkers can be very biased depending on who you are testing
    • 12:31 - When does a test become a biomarker?
    • 17:30 - The digital age and measurements - AI vision in retina scans, digital stethoscopes
    • 23:50 - What is an “analog” biomarker vs a digital biomarker?
    • 30:10 - Where do biomarkers fail in evidence-based medicine?
    • 34:55 - Biomarkers are pretty poor for mental health
    • 47:57 - Can AI predict depression better than humans?
    • 51:21 - Digital biomarkers to detect movement disorders
    • 01:00:04 - This can change clinical trials forever

    References
    • Variable definitions of biomarkers - https://informatics.bmj.com/content/31/1/e100914
    • Digital biomarkers convergence Nature paper - https://www.nature.com/articles/s41746-022-00583-z
    • Digital stethoscope for heart failure - https://www.thelancet.com/pdfs/journals/landig/PIIS2589-7500(21)00256-9.pdf
    • Touch-screen typing depression paper - https://www.nature.com/articles/s41746-022-00583-z
    • Duchenne's body suit biomarker - https://www.nature.com/articles/s41591-022-02045-1#Sec9
    • Friedreich's ataxia body suit - https://www.nature.com/articles/s41591-022-02159-6?fromPaywallRec=false#Sec9

    1h 4m
  5. 30 MAY

    #18 Keith Grimes - Startups and doctors, HealthTech consulting, Babylon's demise, Leadership theory

    Dr Keith Grimes is a HealthTech consultant and General Practitioner working with companies to transform clinical ideas into something impactful. He worked as the digital health director at Babylon Health prior to its demise, and currently runs his own consulting firm, Curistica. This is one not to miss!

    References
    • HealthTech consulting at Curistica - www.curistica.com
    • Prof Amanda Goodall on leadership theory - https://amandagoodall.com/
    • For those interested in leadership opportunities:
      - Faculty of Medical Leadership and Management - https://www.fmlm.ac.uk/
      - Bite labs - https://www.bitelabs.io/

    Dev&Doc is a podcast where doctors and developers deep dive into the potential of AI in healthcare.
    👨🏻‍⚕️ Doc - Dr. Joshua Au Yeung
    🤖 Dev - Zeljko Kraljevic

    Follow us: LinkedIn Newsletter • YouTube • Spotify • Apple • Substack
    For enquiries - 📧 Devanddoc@gmail.com
    🎞️ Editor - Dragan Kraljević - https://www.instagram.com/dragan_kraljevic/
    🎨 Brand design and art direction - Ana Grigorovici - https://www.behance.net/anagrigorovici027d

    Timestamps
    • 00:00 - Start
    • 01:10 - Career, career, career - GP, Babylon Health, digital consultancy
    • 06:40 - Working as a rural GP in Scotland
    • 09:21 - Time is the biggest factor of clinical impact
    • 12:11 - Finding impact through data
    • 21:29 - Leading by example
    • 23:52 - Should doctors be leading HealthTech businesses?
    • 30:10 - Why do HealthTech start-ups not have clinicians earlier?
    • 36:30 - Babylon's failure - the importance of having clinical influence at the top
    • 43:55 - Experience being grilled on BBC Newsnight
    • 49:45 - Lessons learnt from the downfall of Babylon
    • 52:25 - The six values of consulting firm Curistica
    • 55:51 - Common problems in start-ups
    • 59:36 - How AI will change the healthcare landscape

    1h 10m
  6. 9 MAY

    #17 How to build a clinically safe Large Language Model - Hippocratic AI, Llama3, Biollama

    How do we reach the holy grail of a clinically safe LLM for healthcare? Dev and Doc are back to discuss the news around Meta's Llama 3 model and the potential of healthcare LLMs fine-tuned on top of it, like BioLlama. We discuss the key steps in building a clinically safe LLM for healthcare and how this was pursued by Hippocratic AI's latest model, Polaris.

    👨🏻‍⚕️ Doc - Dr. Joshua Au Yeung - https://www.linkedin.com/in/dr-joshua-auyeung/
    🤖 Dev - Zeljko Kraljevic - https://twitter.com/zeljkokr

    The podcast 🎙️
    🔊 Spotify: https://podcasters.spotify.com/pod/show/devanddoc
    📙 Substack: https://aiforhealthcare.substack.com/

    👋 Hey! If you are enjoying our conversations, reach out, share your thoughts and journey with us. Don't forget to subscribe whilst you're here :)

    🎞️ Editor - Dragan Kraljević - https://www.instagram.com/dragan_kraljevic/
    🎨 Brand design and art direction - Ana Grigorovici - https://www.behance.net/anagrigorovici027d

    References
    • Hippocratic AI LLM - https://arxiv.org/pdf/2403.13313
    • BioLLM tweet - https://twitter.com/aadityaura/status/1783662626901528803
    • Foresight Lancet paper - https://www.thelancet.com/journals/landig/article/PIIS2589-7500(24)00025-6/fulltext
    • Language Processing Units - https://wow.groq.com/lpu-inference-engine/

    Timestamps
    • 00:00 - Start
    • 01:10 - Intro - Llama 3, a ChatGPT-level model in our hands
    • 06:53 - Language Processing Units to run LLMs
    • 09:42 - BioLLM for medical question answering
    • 11:13 - Quality and size of datasets, using YouTube transcripts
    • 12:41 - Question-and-answer pairs do not reflect the real world - the holy grail of a healthcare LLM
    • 18:43 - Dev has beef with Hippocratic AI
    • 20:25 - Step 1: Training a clinical foundational model from scratch
    • 22:43 - Step 2: Instruction tuning with multi-turn simulated conversations
    • 24:15 - Step 3: Training the model to guide tangential conversations
    • 27:42 - Focusing on the hospital back office and specialist nurse phone calls
    • 33:02 - Evaluating Polaris - LLM clinical safety, bedside manner, medical safety advice

    43 min
  7. 21 MAR

    #16 Dev&Doc x Rewired - LLMs, Clinical foundation models and automating administrative tasks (live)

    In this special episode, we share a recording of our live podcast at the Rewired UK conference, where the NHS, industry and policy makers unite. We discuss current LLMs from a technical and practical perspective, and dive into how to build foundation models for the National Health Service, drawing on our own experiences. We were also privileged to be joined by Dr. Wai Keong Wong, head of digital at Cambridge University Hospitals NHS Trust, to discuss how to evaluate AI products and how ambient clinical documentation can automate administrative tasks for clinicians.

    👨🏻‍⚕️ Doc - Dr. Joshua Au Yeung - https://www.linkedin.com/in/dr-joshua-auyeung/
    🤖 Dev - Zeljko Kraljevic - https://twitter.com/zeljkokr

    The podcast 🎙️
    🔊 Spotify: https://podcasters.spotify.com/pod/show/devanddoc
    📙 Substack: https://aiforhealthcare.substack.com/

    👋 Hey! If you are enjoying our conversations, reach out, share your thoughts and journey with us. Don't forget to subscribe whilst you're here :)

    🎞️ Editor - Dragan Kraljević - https://www.instagram.com/dragan_kraljevic/
    🎨 Brand design and art direction - Ana Grigorovici - https://www.behance.net/anagrigorovici027d

    Timestamps
    • 00:00 - Intro
    • 02:05 - AI vs doctors - are language models ready to replace doctors?
    • 05:22 - The transformer model and attention
    • 08:51 - Human labour for reinforcement learning
    • 11:00 - Building the NHS LLM, key concepts
    • 13:55 - Foresight GPT - predicting the next clinical event in a patient timeline
    • 16:29 - Is text enough?
    • 17:19 - £3.8B investment into NHS digitisation and admin automation - ambient clinical documentation
    • 20:14 - How do you evaluate AI products for the NHS?
    • 26:24 - How do you vet the tech companies and future-proof your purchase?
    • 27:23 - Do clinicians need more digital health education?
    • 28:41 - Transparency of AI models and benchmarks
    • 31:30 - Question: EHR data created by AI leads to homogenisation and errors
    • 34:03 - Question: training on structured vs unstructured EHR data
    • 38:06 - Question: LLMs as a brain - how do we give it a body?
    • 41:05 - Framework for AI deployment

    47 min

