AI in medical decision making - computer says YES, patients say NO

Wild Health

AI in healthcare is only going to get bigger, and new Macquarie University research reveals how to do it better. In this short podcast we hear from Associate Professor Paul Formosa from Macquarie University, who has been researching how patients respond to AI making their medical decisions compared to how they respond when a human is involved.

Professor Formosa says that patients see humans as appropriate decision makers and that AI is perceived as dehumanizing even when the decision outcome is identical.

“There's this dual aspect to people's relationship with data. They want decisions based on data and they don't like it when data is missing. However, they also don't like themselves to be reduced merely to a number,” Professor Formosa says.

There are key takeaways for designers and developers in the research.

“It's important that people feel they're not dehumanized or disrespected, as that will have bad implications for their well-being. They may also be less likely to adhere to treatments or take a diagnosis seriously if they feel that way,” Professor Formosa says.

The kind of data that is captured could provide the nuance required to shift negative perceptions of AI decision making. Professor Formosa says we also need to think about the broader context in which these data systems are being used.

“Are they being used in ways that promote good healthcare interactions between patients and healthcare providers? Or are they just automatically relied on in a way that interferes with that relationship?” Professor Formosa asks.

