We are in danger of burning while the Australian government consults, says the Australian Institute of Digital Health.
Australia risks lagging behind Europe, leaving us "vulnerable and exposed" to the risk of artificial intelligence misuse while the government continues to "consult", says the boss of the AIDH.
Anja Nikolic, CEO of the Australian Institute of Digital Health, says she is particularly concerned about the use of "deep fake" avatars and AI chatbots in clinical settings, and is urging the government to act more quickly.
"We don't know the full extent to which AI can cause harm in healthcare, but we know that Australia must act swiftly, as Europe has done," said Ms Nikolic.
"Twenty-seven European countries were able to collaborate on the AI Act, and yet the Australian government is still consulting.
"Without proper protections, Australians are left vulnerable and exposed to the risks of AI misuse."
The European Union’s Artificial Intelligence (AI) Act establishes a legal framework for the development, use, and marketing of AI in the EU. It’s the world’s first comprehensive AI regulation and came into force in August 2024.
The AIDH has made a submission to the Department of Industry, Science and Resources consultation on safe and responsible AI in Australia, in which it emphasised that the potential harms caused by AI in healthcare settings could be "difficult or impossible to reverse, nor can they be appropriately compensated".
It was not enough to rely on the guardrails provided by Australia's international human rights obligations, said Ms Nikolic.
"The risk of adverse impacts should be recognised in Australian law," she said.
The AIDH has called for a list of high-risk AI settings and high-risk AI models to provide clarity and certainty for stakeholders.
"By identifying high-risk applications, such as emerging AI in healthcare decision-making, regulators could help organisations and patients better understand the requirements and risks associated with these technologies," said Ms Nikolic.
"Concrete examples and clear guidelines would promote a culture of responsibility among developers and users.
"We are particularly concerned about the use of AI to create avatars based on images and voices depicting or replicating real people without their consent. The use of what can be considered 'deep fakes' in clinical settings, and in any situation where information, advice or treatment is involved, especially in mental health, presents an unacceptable level of risk.
"Digital counterfeits can deceive consumers and cause them to act on unverified and misleading health information.
"[We are] urging the government to consider banning AI chatbots purporting to provide medical advice or clinical mental health support, along with AI algorithms that autonomously diagnose diseases from medical imaging or other data," she said.
The AIDH has made three recommendations to the DISR consultation. They are:
- Clear information and training: The government should provide clear information, free training, and education to support small to medium-sized businesses (SMBs) like health practices, clinics, and software/app vendors. These SMBs already operate in a highly regulated environment, so additional support is crucial to avoid unnecessary regulatory burdens;
- Funding for AI adoption: The government should increase and continue funding for the AI adoption program, particularly focusing on healthcare settings;
- Legislative progress: It's critical for the Australian Government to advance future legislative work quickly and with bipartisan support. Delays could cause Australia to fall behind international best practices, increasing the risk of harm from AI misuse.
The full AIDH submission can be read here.