Computer says yes, but patient says no to automated decision making.
AI in healthcare is only going to get bigger, and new Macquarie University research reveals how to do it better.
In this podcast, we hear from Associate Professor Paul Formosa from Macquarie University. He’s been researching how patients respond when AI makes their medical decisions compared with when a human is involved.
Professor Formosa says that patients see humans as appropriate decision makers and that AI is perceived as dehumanising, even when the decision outcome is identical.
“There’s this dual aspect to people’s relationship with data. They want decisions based on data and they don’t like it when data is missing. However, they also don’t like themselves to be reduced merely to a number,” Professor Formosa says.
The research offers key takeaways for designers and developers.
“It’s important that people feel they’re not dehumanised or disrespected as that will have bad implications for their well-being. They may also be less likely to adhere to treatments or take a diagnosis seriously if they feel that way,” Professor Formosa says.
The kind of data that is captured could provide the nuance required to shift negative perceptions of AI decision making. Professor Formosa says that we also need to think about the broader context in which these data systems are being used.
“Are they being used in ways that promote good health care interactions between patients and healthcare providers? Or are they just automatically relied on in a way that interferes with that relationship?” Professor Formosa asks.
You can listen and subscribe to the show by searching for “Wild Health Podcasts Medical Republic” in your favourite podcast player.