Dr. Paul Hanona and Dr. Arturo Loaiza-Bonilla discuss how to safely and smartly integrate AI into the clinical workflow and tap its potential to improve patient-centered care, drug development, and access to clinical trials.

TRANSCRIPT

Dr. Paul Hanona: Hello, I'm Dr. Paul Hanona, your guest host of the ASCO Daily News Podcast today. I am a medical oncologist as well as a content creator @DoctorDiscover, and I'm delighted to be joined today by Dr. Arturo Loaiza-Bonilla, the chief of hematology and oncology at St. Luke's University Health Network. Dr. Loaiza-Bonilla is also the co-founder and chief medical officer at Massive Bio, an AI-driven platform that matches patients with clinical trials and novel therapies. He will share his unique perspective on the potential of artificial intelligence to advance precision oncology, especially through clinical trials and research, and on other key advancements in AI that are transforming the oncology field. Our full disclosures are available in the transcript of this episode. Dr. Loaiza-Bonilla, it's great to be speaking with you today. Thanks for being here.

Dr. Arturo Loaiza-Bonilla: Oh, thank you so much, Dr. Hanona. Paul, it's always great to have a conversation. Looking forward to a great one today.

Dr. Paul Hanona: Absolutely. Let's jump right into it. Let's talk about the way we see AI being embedded in our clinical workflow as oncologists. What are some practical ways to use AI?

Dr. Arturo Loaiza-Bonilla: To me, responsible AI integration in oncology comes down to one principle: clinical purpose comes first, ahead of the algorithm or whatever technology we're going to be using. Even the best models in the world are irrelevant unless they solve a real day-to-day challenge, whether we're talking to patients in the clinic or at the infusion chair, or supporting clinical decisions.
Currently, what I'm doing the most is focusing on solutions that save us time so we can be more productive and spend more time with our patients. For example, we're using ambient AI for appropriate documentation in real time with our patients, and we're leveraging tools to assess the risk of admission or readmission for patients with certain conditions. It's all about bringing together physicians like ourselves who are the end users, the people who create those algorithms, data scientists, patient advocates, and even regulators, before a single line of code is written. I've lived that in my own entrepreneurial work, but I think it's an ethos we should all follow: AI shouldn't just be bolted on later. We always have to look at workflows. Take clinical trial matching, which is something I'm very passionate about. We need to make sure, first, that it's easier for patients to access, and that oncologists like myself can go into the interface and pull the data in real time when we really need it, without all this alert fatigue. To me, that's the responsible way of doing it. Those are the opportunities, right? The challenge is making this happen in a meaningful way, so we're not just reacting to a black-box suggestion that we have no idea how it came to be. In terms of successes – and I can tell you two stories of things we're seeing work – we all work closely with radiation oncologists, right? There are now tools for automated contouring in radiation oncology, and some of these solutions were presented at different meetings, including the last ASCO meeting.
But overall, we know that transformer-based segmentation tools – "transformer" being the specific architecture of the machine learning algorithm – have been able to dramatically reduce the time colleagues spend contouring targets for radiation oncology. Delineating the target versus the normal tissue sometimes takes many hours; now we can cut that time by over 60%, sometimes down to minutes. So this is not just responsible; it's also an efficiency win and a precision win, and we're using it to adapt even mid-course in response to tumor shrinkage. Another success I think is relevant is on the clinical trial matching side. We've been working on that, and, not to preach to the choir, but it's not just theory: we now have the ability to structure data in real time using these tools, to extract information on biomarkers, and to show that multi-agentic AI is superior to zero-shot prompting – just throwing the question into ChatGPT or any other model – because the same tools, fine-tuned, can be efficient and reliable at almost the level of a research coordinator. It can change lives, because we can get patients enrolled in clinical trials and activated wherever the patient may be. I know that's a long answer, but as we talk about responsible AI, that's important. As for what keeps me up at night: data drift and bias. Imaging protocols change, labs switch between different vendors, and patients present new, emerging data points. Health systems also serve vastly different populations. So if our models are trained in one context and deployed in another, the output can be really inaccurate.
So the idea is to take a collaborative approach, using federated learning and patient-centricity, so we can be much more efficient in developing models that account for all populations, and so that any data used for retraining is diverse enough to represent all of us and let everyone be treated appropriately. And if a clinician doesn't understand why a recommendation was made, they won't trust it, as you probably know, and we shouldn't expect them to. I think this is the next wave of the future; we need to make sure we account for all those things.

Dr. Paul Hanona: Absolutely. I want to dive more deeply into clinical trials in a few questions, but first a quick comment. Like you said, one of the most prevalent tools I see is the ambient scribe. It seems like that's really taken off in the last year, and it's improving at a pretty dramatic speed as well. I wonder how quickly it will be adopted by the majority of physicians or practitioners throughout the country. You also mentioned AI tools that help regulators move things along more quickly, and tools that help radiation oncologists in their workflow with contouring and whatever else they might have to do. And again, the clinical trials topic will be quite interesting to get into. My first question after that concerns large datasets, and it pertains to the paper you published recently on different ways to use AI in oncology, which referred to drug development. The way we design drugs, specifically anticancer drugs, is pretty cumbersome. The steps you have to take to design something – to make sure one chemical will fit into the right structure of the target molecule – take a lot of time to tinker with.
What are your thoughts on AI tools to help accelerate drug development?

Dr. Arturo Loaiza-Bonilla: Yes, that's the Holy Grail, and something I feel we should dedicate as much time and effort to as possible, because it relies on multimodality. It cannot be solved by looking at patient histories alone, or at the tissue alone. It's about combining all these different datasets and understanding the microenvironment, the patient's condition and prior treatments, the dynamic changes we induce through interventions, and also the exposome – the things that happen outside of the patient's own control – and leveraging all of that to determine the best next step in terms of drugs. The one we heard about most in the news is AlphaFold, the AI system that predicts protein structures, for which [Demis Hassabis and John Jumper shared] the Nobel Prize in Chemistry. It solved the very interesting problem of protein folding: in the past, it was thought that predicting from the amino acid sequence alone how a protein will fold three-dimensionally would take, basically, the history of the known universe – what's called Levinthal's paradox. With that problem solved and the Nobel Prize won, the next step is: "Okay, now that we know the structure from the sequence alone, how can we understand whether a new drug can serve as a candidate, leveraging all the data from many years of testing against a specific protein, a specific gene, knockouts, and whatnot?" This is the future of oncology, and where we're probably seeing a lot of investment. The key challenge is not just looking at pathology, but leveraging digital pathology with whole-slide imaging and identifying the microenvironment of that specific tissue. There are a number of efforts currently underway.
One goes beyond H&E – hematoxylin and eosin – slides alone: with whole-slide imaging, we can now combine expression profiles, spatial transcriptomics, and whole-exome sequencing in the same space, and use this transformer technology in a multimodal approach. We already know the slide, the pathology; can we use that to understand, say, if I knock out this gene, how the microenvironment will change, to see if an immunotherapy might work better, right? If we