Bertalan Mesko says the peak body should be proactive, like medical associations in Canada and the US.
Leading healthcare bodies have weighed in on how AI should be regulated, but Australia may be too late to the party.
The recent government consultation on responsible use of AI was the second in two years and received submissions from a cross-section of the healthcare sector including medtech, private hospitals and digital health research.
However, it was the Australian Medical Association's submission that prompted a response from Bertalan Mesko, leading healthcare futurist, suggesting that Australia is falling behind.
Dr Mesko (PhD) said that medical associations in Canada and the US took it upon themselves to design regulations for "advanced medical technologies", giving policymakers sufficient time to bring those regulations into action.
"I always smile out of disappointment when a medical association 'calls for stricter regulations on healthcare AI', just like the Australian Medical Association did," Dr Mesko wrote in a LinkedIn post.
"It is your job to provide regulations. This is why some other medical associations have been working with professional futurist researchers like me to make it happen."
Leading Australian experts also say that Australia lags most developed nations in its engagement with AI in healthcare and has done so for many years.
Professor Enrico Coiera from Macquarie University, Professor Karin Verspoor from RMIT University, and Dr David Hansen (PhD) from the Australian EHealth Research Centre at the CSIRO wrote in the MJA that it was "a national imperative" to respond to the challenges of AI.
"With AI's many opportunities and risks, one would think the national gaze would be firmly fixed on it. [However] … there is currently no national framework for an AI ready workforce, overall regulation of safety, industry development, or targeted research investment.
"The policy space is embryonic, with focus mostly on limited safety regulation of AI embedded in clinical devices and avoidance of general purpose technologies such as ChatGPT," the authors said.
Meanwhile, other countries with more advanced AI regulations and digital health infrastructure are leaping ahead. Israel's largest acute care facility announced this week that it will be using a ChatGPT chatbot to help triage admissions.
The Tel Aviv Sourasky Medical Center has 1500 beds and nearly two million patient visits each year. The hospital will be using the clinical-intake tool created by Israeli startup Kahun to "free up medical staff" and prevent burnout by providing pre-visit summaries, diagnostic suggestions and next steps of care advice.
The Australian Department of Industry, Science and Technology called for submissions in June "to inform consideration across government on any appropriate regulatory and policy responses" for the safe and responsible use of AI.
Emma Hossack, chief executive of the Medical Software Industry Association (MSIA), said that any regulation needs to take a "Goldilocks approach": not so permissive that AI risks go unchecked, but not too heavy-handed either. Ms Hossack said that regulation was a good thing as long as it was done promptly and in full consultation with industry.
"An unregulated market, in an area where there's risk, creates uncertainty of investment and development paths and uncertainty for a business case. This is because any time after an adverse event there'll be new regulation applied which creates risk for business," she said.
Ms Hossack said the TGA's regulation of software-based medical devices could be a template for new regulations on generative AI.
"The principles applied, the legislation and then the guidelines were exemplary," she said.
The MSIA said in its submission that transparency about what AI was doing in any healthcare application was essential to building trust, "so that the provenance of AI outputs is appropriately managed in a risk-based framework".
It also called for thorough co-design and education, and an underpinning taxonomy for all medical software providers.
Private Healthcare Australia said in its submission that regulation of AI "should not intrude on a fund's ability to make commercially confidential decisions or engage in product development or service automation". The submission said that health funds which already used automated decision-making processes "may increasingly use AI" to assess and process insurance claims.
The Digital Health CRC put in a strong plug for a risk-based approach to regulation that "establishes rules to govern the deployment of AI in specific use-cases but does not regulate the technology itself".
Dr Stefan Harrer, the DHCRC's chief innovation officer, said in a statement that new AI technologies "evolve at lightning speed, making it near impossible to generate the evidence base for risk mitigation at the same pace".
"Only regulation that focuses on outcomes rather than technology will be able to keep up and adapt to changing conditions quickly and efficiently," he said.
Dr Michael Bonning, president of the Australian Medical Association (AMA) NSW, confirmed the AMA's position as a supporter of technological advancement in healthcare, as long as it served doctors and patients and didn't widen the inequity gap. He told TMR that good regulation was required because it builds trust in a system, but that upholding good regulation might slow down access to AI-enabled solutions.
"The general AI space is quite poorly regulated and, like most new technologies, there's the [development model] of breaking things along the way and fixing them as you go. This is not something that we can tolerate on behalf of patients or practitioners in the Australian healthcare context," he said.
The Medical Technology Association of Australia and the Asia Pacific Medical Technology Association wrote in a joint submission that there was a need "for regulation to evolve to address sector risk". They named the TGA, and other existing medical device regulators, as best placed to incorporate emerging AI regulation within existing frameworks. They also endorsed co-design of any new codes.
"It is important that substantial consultation with the medical technology industry occurs regarding any proposed regulation. Any broad regulation of AI, even regulation of AI aimed at the medical industry generally, could have unintended consequences for patient outcomes and the medical technology industry," the submission said.
The Australian Alliance for AI in Healthcare met today with the specific goal of working out what policy is needed to ensure that Australia is positioned to take advantage of all the benefits of this technology.
Professor Coiera, director of the Australian Institute of Health Innovation at Macquarie University, is a founding member of the AAAiH. He said today's meeting was about bringing research, industry, government, clinical and consumer voices together to develop a national strategy.
"It is imperative that we develop a national strategy for AI in healthcare in order to not only realise the enormous potential benefits, but to also identify and manage the risks.
"No one part of the sector can do this on its own; we need a whole-of-system approach to be effective and sustainable. It is vitally important and exactly what AAAiH is leading."
Professor Coiera said the AAAiH was strongly supportive of the Government's active interest in developing industry-appropriate policy and governance for AI.
The Department of Industry, Science and Technology has been asked to comment on what will be done with submissions to the current consultation on AI, and on what was actioned from the 2022 consultation. It did not respond by publication deadline.