Researchers from the Digital Health Cooperative Research Centre (DHCRC), the University of Technology Sydney (UTS) and the Department of Health and Aged Care (DoHAC) have joined forces to adapt the OECD AI classification framework to an Australian healthcare context.
The DHCRC, the DoHAC and specialist AI research teams have partnered to develop a first-of-its-kind digital tool for classifying and assessing the risk of AI applications within the Australian healthcare sector.
The tool, designed to be used as a “self-serve advisory and benchmarking” resource for healthcare organisations, AI developers and policymakers nationwide, will be modelled on the OECD’s AI classification framework, released in early 2022 and since endorsed by 46 countries, including Australia.
Dimensions covered by the OECD AI classification framework include the economic, social and environmental context of AI applications; the format, scale and appropriateness of data collection; implications for users and stakeholders; and the tasks performed by the AI system.
Once development of the tool is underway, researchers will also consult AI developers and users familiar with healthcare-based AI solutions, to ensure the OECD framework is adapted consistently to the Australian healthcare context and that the tool complies with the latest regulatory and policy interventions.
“As government looks to build community trust and promote AI adoption, we need to provide guidance on how to use AI safely and responsibly,” the DoHAC’s assistant secretary of digital and service design, Sam Peascod, said.
“Having a tool that can assist in classifying and performing a risk assessment of AI technologies will support the adoption of AI solutions by health care organisations and health care providers, ultimately leading to better health outcomes for consumers.”
Researchers from UTS Rapido and UTS Human Technology Institute will strive to have a prototype of the online tool ready for evaluation by mid-next year.
“To complement the work of government and industry to define AI ethics principles, develop AI risk assessments, and provide guardrails for the safe and responsible use of AI, there needs to be a standardised approach to classifying the varied types of AI systems in use,” said DHCRC CEO Annette Schmiede.
“The challenge is building clear and consistent guidance and tools, ensuring these are effective for the diverse range of audiences and AI solutions across healthcare, including developers, health care providers and consumers.”