Amazon launches AI-based medical scribe service

The release of AWS’ HealthScribe service comes as the AMA warns of the unregulated use of AI in healthcare in Australia.


Amazon has made its latest foray into healthcare with the launch of HealthScribe, which uses AI to automatically generate clinical notes.

According to the announcement, made at the Amazon Web Services summit this week, the program can also identify key details in transcripts and create summaries ready to upload into electronic health records.

The service is currently being previewed for application in general medicine and orthopaedics, with the potential to expand to other specialities if client feedback is positive, an AWS spokesperson said during the announcement.

Users pay a fixed rate per second of audio processed each month, with any consultation audio expunged from Amazon Web Services’ systems once the service is delivered, according to the HealthScribe website.

Users can also choose the region in which their content is stored, and content is not moved or replicated outside that region unless the user agrees, according to AWS’ information on data privacy.

While the service is currently only available in the US, a similar app was launched by two Australian GPs earlier this week.

Consultnote.ai, developed by Dr Chris Irwin and Dr Umair Masood, uses an OpenAI large language model trained on Australian-specific data to turn consultation recordings into relevant medical notes.

Data entered in the app for transcription or note-taking purposes is temporarily cached in overseas servers for processing, after which it is returned to the user’s device or server, with the processed data not stored on the app or the OpenAI servers, according to the service’s website.

There has been a rise in the number of Australian clinicians using generative AI services to simplify medical note-taking, including doctors in Perth who used software such as ChatGPT to write notes that were then uploaded to patient record systems.

The AMA has called for stronger regulation of how AI is used in healthcare to protect patients and health professionals alike.

In its submission to the Department of Industry, Science and Resources’ discussion paper on AI use in Australia, Supporting responsible AI, the nation’s peak medical body called for a common set of legislative principles to establish a compliance basis for all individuals involved in the use of AI.

These principles should ensure:

  • safety and quality of care provided to patients;
  • patient data privacy and protection;
  • appropriate application of medical ethics;
  • equity of access and equity of outcomes through the elimination of bias;
  • transparency in how the algorithms used by AI and ADM tools are developed and applied; and
  • that the final decision on treatment always rests with the patient and the medical professional, while recognising instances where responsibility will have to be shared between the AI manufacturers, the medical professionals and the service providers (hospitals or medical practices).

Regulating AI use in healthcare requires a separate discussion process to establish an industry-specific governance strategy to ensure the protection of patient safety, as well as patient and practitioner data privacy, according to the submission.

“We need to address the AI regulation gap in Australia, but especially in healthcare where there is the potential for patient injury from system errors, systemic bias embedded in algorithms and increased risk to patient privacy,” AMA President Steve Robson told media.

“There are key health principles that need to be introduced into AI, for example, ensuring patients and practitioners consent to the episode of care and/or their personal data being used for machine learning.

“There has been a lot of good work done by the European Union, Canada and other countries around privacy, data ownership and governance that we can learn from and adapt to the Australian and healthcare contexts, and we need to examine this through the healthcare lens, so we get it right for the future wellbeing of our patients.”

According to Dr Anthony Fauci, former Chief Medical Advisor to the US President, while clinicians and scientists are right to be concerned about the dangers of artificial intelligence, with the right controls and governance AI applications could transform how scientists respond to pandemics and pre-empt future COVID variants.

“I think if you look at it under a controlled situation, there are many, many advantages for artificial intelligence in every aspect of medicine and health, from reading x-rays to skin biopsies to … responding to the next pandemic,” he told the Sydney Morning Herald earlier this week.
