In its first position statement on the use of artificial intelligence in medicine, the AMA puts the bots back in their box.
While the AMA acknowledges that AI has the potential to change Australian healthcare for the better, it's urging the government to put robust rules in place.
In a new position statement on AI in healthcare – which is separate to its recent submission to the Department of Industry, Science and Resources’ discussion paper on AI – the association calls for more testing and accountability of the new technology.
“Tools which use AI in healthcare must ensure inclusiveness and equity for all, irrespective of race, age, gender, socioeconomic status, physical ability or any other determinant,” the position statement reads.
The position statement, which will be released publicly in the coming days, marks the first time that the AMA has taken an official stance on AI.
It also advises against deploying unproven AI technologies in emergencies such as a pandemic or disaster response.
“The urgency of need must not be used as justification for application of unproven technologies,” the statement says.
The AMA recommends a national governance structure that could advise on policy development and include health-sector stakeholders alongside industry representatives, consumers and legal experts.
A sort of TGA for AI, if you will.
It also recommends that, at the very least, the TGA be involved in assessing any new AI-based clinical tool against the requirements for registration as a medical device, and specifies that AI applications be trained on a relevant target population.
“This will underpin how we carefully introduce AI technology into healthcare,” AMA president Professor Steve Robson said.
“AI tools used in healthcare must be co-designed, developed and tested with patients and medical practitioners and this should be embedded as a standard approach to AI in healthcare.
“Decisions about healthcare are the bedrock of the doctor-patient relationship and these will never be replaced by AI.”
Where regulation will also come in handy, said Professor Robson, is in protecting patient data.
AI is well known to reflect and amplify existing social biases. Another strong recommendation from the association is that patients be informed whenever a diagnosis or recommended course of treatment has been determined by AI.
“People worry when they hear that machine learning is perfecting decision-making, but this is not the role AI should play in healthcare,” Professor Robson said.
“Diagnoses, treatments and plans will still be made by medical practitioners with the patient – AI will assist and supplement this work.”