AMA is on the AI train



The association is calling for a health sector-specific regulatory framework which looks at risk on a per-application basis for this ‘high-risk’ sector.


This week, the AMA is on the tech train – releasing two submissions on artificial intelligence (and one on assistive technologies).

The need for innovative solutions to relieve an overburdened workforce, balanced against the necessity of risk aversion – given the delicate nature of the product (human health and health data) – has contributed to a paradoxical dialogue around AI and its place in the health sector.

While AI is already incorporated in many corners of the sector, concerns remain over regulation – as is always the case – being one step (if not more) behind.

As such, the Department of Health and Aged Care opened a consultation last month into the framework around AI.

“There are concerns that current legislative and regulatory frameworks do not adequately mitigate potential for harm,” the department said in the consultation report.

In its response to the Safe and Responsible Artificial Intelligence in Health Care – Legislation and Regulation Review, the AMA argued that the “high-risk” health sector required a tailored, tiered regulatory framework which looked at risk on a per-application basis.

“In our submission to the Department of Industry, Science and Resources consultation on proposed mandatory guardrails, the AMA recommended the government implement ‘framework legislation, with associated amendments to existing legislation’, as a whole-of-government approach to supporting proactive, industry-specific regulation of AI,” the report said.

“This approach acknowledges the effective regulatory structures already in place and focusses on establishing new, pre-market safeguards and risk-mitigation measures specific to AI application in healthcare.

“A framework approach seeks harmonisation between sector-specific legislative instruments dealing with the management of AI risks to build national consistency, with standardised regulatory terminology and powers across jurisdictions and industries.”

The healthcare-specific regulation should be integrated within current regulatory frameworks, such as the TGA’s role in regulating medical devices, and complement the broader national regulatory mechanisms for AI.

The group called for a dedicated body of clinicians, medical professionals, consumers and tech developers to support regulation.

The AMA argued that regulation should align with the principle that clinical independence should never be compromised by AI, nor should AI be used by non-medical professionals to second-guess clinical decisions.

Responsibility and accountability for any error caused by AI must also be clear, said the association.

Data privacy must be protected and data use for machine learning must be inclusive and representative to mitigate bias.

The association’s other AI-centric submission this month – to the TGA consultation on clarifying and strengthening the regulation of AI – focused on regulation of medical devices under the TGA’s jurisdiction that may incorporate AI.

The AMA said the TGA would be instrumental in implementing health sector-specific regulatory strategies for AI.

“The AMA has previously stated that reliance on technology innovators’ self-regulation poses an unacceptable risk to consumers and medical professionals,” said the association.

“Progress in the AI space will be largely driven by free market, profit-driven entities and these must be subject to strict accountability and clear parameters for the ethical and safe use of AI in healthcare.”

While expressing confidence in the TGA’s “robust system”, the AMA said AI is a special case requiring particular care.

“Existing TGA classification rules have provided a sound basis for managing innovation in medical devices,” said the association.

“However, the nature of AI requires broader regulation, adaptable to rapid change.

“Many software devices will incorporate AI components over time.

“The AMA reiterates the exclusion of certain software from regulation requires re-evaluation due to increasing complexity.

“Data, the power source of the AI ecosystem, crosses jurisdictions and contributes to system complexity.

“Contributing to data governance should also be of central concern in the regulation of AI-using software as a medical device (SaMD).”

The AMA said the TGA’s mechanisms to impose stricter requirements on higher-risk products should be leveraged to support the “responsible deployment of AI within medical practice”.
