A new consultation paper aims to promote the safe and responsible use of artificial intelligence in the sector.
Artificial intelligence has already made its way into the Australian healthcare landscape in varying degrees with equally mixed responses.
While the Department of Health and Aged Care has acknowledged that AI can help solve urgent and emerging challenges in the healthcare system and support its workforce in dedicating more time to delivering care, there remain some red flags around safety and risk.
“There are concerns that current legislative and regulatory frameworks do not adequately mitigate potential for harm,” according to the authors of a new report released by DoHAC this month.
This story first appeared on Health Services Daily.
The department opened up the Safe and Responsible Artificial Intelligence in Health Care – Legislation and Regulation Review for public consultation, and the responses are already rolling in.
It said the paper built on the consultation on AI held by the Department of Industry, Science and Resources in 2023, and submissions to the Senate Select Committee on Adopting AI in 2024.
This week the Australian Alliance for Artificial Intelligence in Healthcare (AAAiH) hosted two webinars to discuss the review’s public consultation paper released this month. Another will be held next week.
The review authors said they had set out to “help people in Australia, including consumers and health care professionals, realise better outcomes through AI”.
“To do this, we need to support the safe use of AI and prevent harms occurring from AI in health care settings,” they wrote.
The department is seeking perspectives on a range of issues, including:
- The benefits to be gained by using AI in health care.
- The risks of AI to be managed in health care.
- Clarity on whether changes are required to ensure safe use – these could be regulatory changes or other kinds of changes such as education or guidelines.
- Other effects of AI on people, workflow, health care information, the health care system.
- Other perspectives that may not have been highlighted or emerged in the public domain.
In the 2024-25 Budget, the government provided funding to support safe and responsible AI development, including to clarify and strengthen existing legislation and regulation.
With healthcare identified as a priority area for reform, DoHAC is reviewing laws and regulations in the health portfolio in coordination with the Department of Industry, Science and Resources.
As part of this, the department will consider the range of legislation that it administers, looking to answer three critical questions related to the safe and responsible use of AI in Australia’s health and aged care sector:
- What about AI are we trying to regulate?
- Who is affected by AI and related regulation?
- How could we regulate to prevent AI harms and to enable benefits of AI?
The scope for the review includes clinical care, billing, insurance, digital systems, consent and privacy, health data, training, literacy and competency, and liability and responsibility.
There are a number of things outside the scope of this review as well, the authors wrote.
“This consultation paper does not cover the use of AI in the department,” they wrote.
“It also does not cover the regulation of therapeutic goods like software as a medical device (SaMD). The Therapeutic Goods Administration (TGA) is conducting a consultation on therapeutic goods and AI.
“While the TGA consultation specifically relates to products that come under the therapeutic goods framework, this overall consultation by the department covers practice and related issues across the whole portfolio of health and aged care.”
Speaking at today’s webinar, Lesley-Anne Farmer, AI Program Manager at DoHAC’s Digital Futures, Digital and Service Design Branch, said a key focus of the review was looking at legislation and regulation to find gaps with respect to AI or areas that could be clarified.
“As part of that, it’s really important to emphasise that we’ll be considering a range of different regulatory treatments, ranging from very low touch or non-regulatory measures such as education or guidelines or other options all the way through to things like changes to regulations, new rules or new legislation,” she said.
“All of those are potential options to be considered. And a really important principle of how we’re approaching this work is that we’re considering both the benefits and the risks.”
Ms Farmer said submissions would be published after the consultation period ended in October, but she did not have a timeline on when the final report would be completed and published.
She did say there had been “lots of interest in the consultation paper and we’ve had lots of people attend our first webinar and ask about the scope in the webinars … so we anticipate a lot of interest”.
She said the health workforce was a key consideration of the review.
“We are considering all of the aspects of the health workforce as well, and we certainly agree that it’s very important, and that the AI products do have huge implications for the health workforce,” she said.
Ethics was another key issue, Ms Farmer said in response to a question on the topic.
“We’re aware of the processes that are in place for research in terms of ethics and with the committees and those sorts of things,” she said.
“But this is a good area of where we will look to see if there is any kind of gap emerging that we need to consider and think about whether some further regulatory measures should be proposed.
“So we certainly do consider that, and things like pilots and how that could work, those are all things that we are actively thinking about.”
Ms Farmer was asked what was emerging as the single greatest issue for the use of AI in healthcare.
“I don’t think I would dare to try to answer that for the whole of AI in healthcare, but I would say something that’s really front of mind is knowledge and understanding: people have very different levels of awareness about AI in healthcare, all kinds of people in different roles, patients, clinicians, healthcare providers,” she said.
“And I think that probably makes it hard to have a consistent conversation, and because it’s so new and it’s evolving so quickly, I think those are additional challenges.”
The consultation paper acknowledges that the rapid development and increasing presence of AI in health and aged care settings present “novel challenges that call for thoughtful regulation”.
The authors flagged that regulation could be targeted at different entities in health and aged care to achieve different results. For example, AI regulation could be targeted at:
- Clinicians and how they use AI to perform their jobs.
- Organisations that deploy AI to increase the performance of their health or aged care services.
- Software vendors who create AI models and tools.
- Organisations who collect, use or sell health data which is used in the development of AI tools. This may be government, hospitals, universities or private companies.
- Liability for misuse of AI or adverse events arising from its use.
- AI suggesting treatment pathways formulated to maximise billing or claiming.
“These entities already need to comply with existing rules, for example, our existing laws around privacy and medical negligence will continue to determine how data can be used and the standard of care which must be provided,” the authors wrote.
“In some cases, it may already be clear how existing rules and standards will apply to AI. In other cases, it may be helpful to clarify how these rules will apply to AI, or to create new rules to accommodate novel aspects of AI use.
“A combination of different regulatory tools will likely be needed to adequately support the safe use of AI in health and aged care settings.”
Written responses to the paper will be accepted until Monday 14 October 2024. A third and final webinar will be held on Tuesday 8 October from 12-12.45pm. Registration is required for the webinar.