Ban deep fakes and AI chatbots, AIDH urges government

We are in danger of getting burned while the Australian government consults, says the Australian Institute of Digital Health.


Australia risks lagging behind Europe, leaving us “vulnerable and exposed” to the risks of artificial intelligence misuse while the government continues to “consult”, says the boss of the AIDH. 

Anja Nikolic, CEO of the Australian Institute of Digital Health, says she is particularly concerned about the use of “deep fake” avatars and AI chatbots in clinical settings, and is urging the government to act more quickly. 

“We don’t know the full extent to which AI can cause harm in healthcare, but we know that Australia must act swiftly, as Europe has done,” said Ms Nikolic. 

“Twenty-seven European countries were able to collaborate on the AI Act, and yet the Australian government is still consulting.  

“Without proper protections, Australians are left vulnerable and exposed to the risks of AI misuse.” 

The European Union’s Artificial Intelligence (AI) Act establishes a legal framework for the development, use, and marketing of AI in the EU. It’s the world’s first comprehensive AI regulation and came into force in August 2024. 

The AIDH has made a submission to the Department of Industry, Science and Resources consultation on safe and responsible AI in Australia, in which it emphasised that potential harms caused by AI in healthcare settings could be “difficult or impossible to reverse” and could not be appropriately compensated. 

It was not enough to rely on the guardrails provided by Australia’s international human rights obligations, said Ms Nikolic. 

“The risk of adverse impacts should be recognised in Australian law,” she said. 

The AIDH has called for a list of high-risk AI settings and high-risk AI models to provide clarity and certainty for stakeholders. 

“By identifying high-risk applications such as emerging AI in healthcare decision making, regulators could help organisations and patients better understand the requirements and risks associated with these technologies,” said Ms Nikolic.  

“Concrete examples and clear guidelines would promote a culture of responsibility among developers and users. 

“We are particularly concerned about the use of AI to create avatars based on images and voices depicting or replicating real people without their consent. The use of what can be considered ‘deep fakes’ in clinical settings – and in any situation, especially in mental health, where information, advice or treatment is involved – presents an unacceptable level of risk. 

“Digital counterfeits can deceive consumers and cause them to act on unverified and misleading health information. 

“[We are] urging the government to consider banning AI chatbots purporting to provide medical advice or clinical mental health support, along with AI algorithms that autonomously diagnose diseases from medical imaging or other data,” she said. 

The AIDH has made three recommendations to the DISR consultation. They are: 

  • Clear information and training: The government should provide clear information, free training, and education to support small to medium-sized businesses (SMBs) like health practices, clinics, and software/app vendors. These SMBs already operate in a highly regulated environment, so additional support is crucial to avoid unnecessary regulatory burdens; 
  • Funding for AI adoption: The government should increase and continue funding for the AI adoption program, particularly focusing on healthcare settings; 
  • Legislative progress: It’s critical for the Australian Government to advance future legislative work quickly and with bipartisan support. Delays could cause Australia to fall behind international best practices, increasing the risk of harm from AI misuse. 

The full AIDH submission can be read here.
