‘Curiosity and caution’: advice for GP supervisors



GPSA has released a position statement on the use of AI in general practice training.


General Practice Supervisors Australia (GPSA) is the latest peak body to release a position statement on the use of artificial intelligence in general practice training. 

The statement reiterates the organisation’s previous position, which outlined the potential risks of AI in general practice if it is not appropriately moderated. 

The position statement also provides core recommendations to ensure safe and ethical practice whilst still benefiting from the use of AI. 

Perth GP Dr David Adam told The Medical Republic that there was still much to be understood about the use of AI in general practice. 

“Doctors in training are always keen to use new technologies that they think are going to improve their workflow and improve the care of their patients,” he said. 

“I think my concern is about what we can actually tell people with any certainty, and I think that’s pretty minimal, unfortunately,” said Dr Adam. 

One of the main uncertainties is how large a role the technology will play in healthcare in the future. 

“My worry about tools like AI diagnostics and so on is that they seem to be a way to do the job that a GP can do for cheaper,” said Dr Adam. 

“My worry is that these will be seen as an easy answer to communities where GP access is not easily available, and I’m worried that we will accept an inferior option just because it’s cheaper for a group of people who actually need high quality care, just as much as the people who can afford it.” 

GPSA Chair Dr Srishti Dutta said the document provided key recommendations to ensure best practice, patient safety, and the professional development of GP and RG trainees. 

“Today’s medical students and junior doctors are already using AI in various ways, and it is crucial that GP supervisors understand its potential applications, risks and regulatory requirements,” said Dr Dutta. 

“By engaging with AI responsibly, supervisors can enhance the learning experience while maintaining high standards of patient care.” 

The statement offers five tips for “beginning your supervision journey in AI”: 

  1. Be curious: develop an awareness of both the potential benefits and the potential perils of AI in healthcare, referring to trusted sources of open-access AI educational resources. 
  2. Inform yourself and your registrar about the regulatory requirements for patient confidentiality and privacy, informed consent, and safety relevant to your jurisdiction. 
  3. Have a conversation: find out which AI applications your trainee uses, in which aspects of their learning and how, and explore the processes they are using to ensure all regulatory requirements are being met.  
  4. Teach best practice clinical notetaking: compare AI scribes with gold standards in clinician notetaking and explore how AI can complement, but not replace, the clinical documentation process.  
  5. Be cautious: consider when a trainee can safely use AI, and how to assess which tool is appropriate, at each stage of their journey to Fellowship.  

Both ACRRM and the RACGP have released similar statements, emphasising that AI’s benefits remain largely potential and that regulation will be needed to ensure correct usage.

Thus far, the primary use of AI in healthcare has been large language model-based systems, which, according to Dr Adam, do not adequately explain the reasoning behind their outputs. 

Overall, AI in healthcare, and in training especially, still carries the same concerns raised about AI industry-wide regarding government regulation and decision-making authority. 
