AI more help than hindrance – within limits



Using a scribe keeps us running on time and shaves hours of unpaid admin off our day. But patients need reassurance.


I recently attended a workshop provided by my MDO on AI in healthcare, its advantages and pitfalls.  

It was an eye-opening workshop with a lively discussion.  

In recent months I’ve heard more and more about the use of artificial intelligence in healthcare, specifically tools such as Lyrebird, Heidi Health and even ChatGPT, to help clinicians, especially GPs, run better on time.  

The take-home, based on US experience to date, was as follows: 

  • The software has to stay within Australian privacy laws, so ChatGPT is out;  
  • It can help as a scribe – but beware, as it tends to insert its own opinions if the clinician isn’t experienced or firm enough and it may be hard for a junior doctor to know the difference;  
  • It can fabricate “guidelines” and “citations”, as some have discovered in court, inventing sources that don’t exist anywhere to embellish documents;  
  • It is mostly good at pattern recognition and will likely be helpful for clinicians involved in this area of health e.g. radiology, pathology, even end-of-life care and when to discuss palliation based on trends seen;  
  • Patients could access their reports and plug them into ChatGPT, which may then read illnesses into them that aren’t there, creating confusion and generating complaints (based on a real case in the US); 
  • Patients may treat ChatGPT as the equivalent of a Google search (it’s not; it’s closer to predictive text, based on what people most commonly write) and end up with incorrect information.  

Ultimately the consensus for now is that AI still needs an experienced human to oversee the final decisions and recommendations.  

Related to this, other questions arose during the workshop:

  1. What happens to the skills acquired through doing and repetition if junior doctors become overly reliant on AI from an early stage of their careers and don’t yet know how to distinguish real guidelines from made-up ones?  
  2. Over time, will we defer to AI more and more, especially if it suggests treatment plans, prescriptions and so on as an easier way to work, with a resulting loss of skill?  
  3. What about consent? A patient may not understand the difference between recording a consultation themselves (with or without our consent) and our “recording” one to support accurate note-taking.  

It was a robust and lively session of around 90 minutes, with doctors from various specialities (surgeons, anaesthetists, GPs and even an intern) weighing in with their opinions and, where they had already begun using an AI scribe, their experiences.  

I’ve since added a free version of the AI scribe, Heidi Health, to my own clinic, initially to try it out and then to incorporate it into my regular clinical practice, given my usual appointments are 30-50 minutes with often a lot of ground to cover.  

To get here, my staff and I trialled the AI scribe first: my receptionist pretended to be a patient and opened our mock “consult” by outlining a host of concerns in about 60 seconds. There followed the usual history taking, a brief examination, a treatment plan and next steps.  

All up, we were “in consult” for around 20 minutes. Within 30 seconds of my stopping the transcription, Heidi had outlined the key issues, without any identifying details, and summarised it all for me to edit into the patient file and then delete from Heidi.  

What would have otherwise taken me up to five minutes to type up took instead two minutes to edit and then save as the final copy.  

In the past two weeks, after trying it out a few times in dummy consultations, we’ve begun implementing it in clinic for real.  

Our software reminds patients of upcoming appointments via email and SMS. The emails now include a link to the consent form for the use of an AI scribe.  

When I see patients who have signed the consent, I confirm verbally that they intended to do so. If they haven’t signed, I confirm that and ask whether they have reservations I can address. Every one of them consented once we’d had a brief explainer in person.   

I have even offered to show a patient what the transcript looks like at the end of their consultation, to reassure them that there was no breach of confidentiality; that the transcript is essentially accurate; that I’ve cut and pasted the notes and deleted the transcript; and that it has saved me time.  

All of which meant, for the patient, a couple more minutes in consultation with me AND less waiting time outside when I was running on time.  

So in the interest of helping myself and others, I’ve begun talking about it more on my social media pages; I even did a reel using one patient’s de-identified transcript, with her consent, noting how the chit-chat and “filler” conversation was left out entirely, keeping only the medically relevant parts.  

Because so many of my patients follow me on social media it’s become an effective way to educate them before their next appointment. It has helped allay their fears that the consultation is being recorded (it’s not) or that the transcript is inaccurate (it’s usually not) or that their privacy is being breached (it’s not).  

As a result, the vast majority of patients, new and regular, have gone ahead and provided consent, both verbal and written, for the use of an AI scribe in future consultations, making my life significantly easier and saving me at least two hours each day of unpaid “pyjama time”, aka admin and catch-up work.  

Why does all this matter?  

With the rise in complexity comes the need to squeeze more into every consultation as safely as possible. An AI scribe can help with that.  

With the drop in bulk billing comes heightened expectations of what patients are paying for, including that we run on time and give them all the time they are paying for.

Peers who have been using this a while longer than I have say they have been able to run on time, or close to it, for the first time in years; that it’s making their lives easier; that they don’t know what they’d do without it.  

Within limits I would concur; already I cannot imagine the days when I used to see patients every 15 minutes and jot down three or four key bullet points before moving on to the next patient, only to return at the end of the day or during lunch to try and flesh the details out: a minefield if it ever led to a complaint.  

I believe most of us will welcome this, as long as we are experienced enough to recognise that an AI scribe is just that: an aide-memoire, not a replacement for the clinical acumen and skill that come only through years of repetition, the repetition that makes us efficient with our time and with our patients’ diagnoses and treatment planning.  

Dr Imaan Joshi is a Sydney GP; she tweets @imaanjoshi.
