7 min

AI in medical decision making - computer says YES, patients say NO
Wild Health

    • Health & Fitness

AI in healthcare is only going to get bigger, and new Macquarie University research reveals how to do it better.
In this short podcast we hear from Associate Professor Paul Formosa from Macquarie University. He has been researching how patients respond to AI making their medical decisions compared to how they respond when a human is involved.
Professor Formosa says that patients see humans as appropriate decision makers and that AI is perceived as dehumanizing even when the decision outcome is identical.
“There's this dual aspect to people's relationship with data. They want decisions based on data and they don't like it when data is missing. However, they also don't like themselves to be reduced merely to a number,” Professor Formosa says.
There are key takeaways for designers and developers in the research.
“It's important that people feel they're not dehumanized or disrespected as that will have bad implications for their well-being. They may also be less likely to adhere to treatments or take diagnosis seriously if they feel that way,” Professor Formosa says.
The kind of data that is captured could provide the nuance required to shift negative perceptions of AI decision making. Professor Formosa says that we also need to think about the broader context in which these data systems are being used.
“Are they being used in ways that promote good health care interactions between patients and healthcare providers? Or are they just automatically relied on in a way that interferes with that relationship?” Professor Formosa asks.

Hosted on Acast. See acast.com/privacy for more information.

