20 June 2018

What we may learn from ‘reviewgate’

KnowCents Technology

HealthEngine’s “reviewgate” is a timely reminder of just how fast technology and society are thrusting us all towards a more patient-centric and transparent world.

Doctors and their patients will need to adapt. So will HealthEngine.

When I first read Fairfax Media’s story on HealthEngine’s apparent systemic tampering with patient reviews, it had the feel of a tabloid beat up. Dr Marcus Tan is a passionate bloke. He is proud, smart and ethical. 

Then I read a series of edits the newspaper had uncovered. Here are just two of them to give you a taste in case you didn’t see the original story.

Original review: I have been attending this practice for years. Every doctor there is great. I saw a new doctor this time and halfway through the appointment he took a personal call on his mobile and was talking about paying his rent and water bill and got out all his credit cards asking what details they needed etc. this went on for about 4mins. Very rude and I felt like I was not a priority. Will not be seeing this dr again!!! Not sure of his name, but he was a small Indian man around 50-60yrs old.

HealthEngine’s edited version: I have been attending this practice for years. Every doctor there is great.

Original review: I will use this practice if I have no other option. Receptionist was lovely but the wait and then the doctor checking text messages and not seeming connected with us was disappointing.

Edited version: The receptionist was lovely

The best interpretation you could take from those edits is that something went very wrong at HealthEngine, somewhere down the management line.

These edits are misleading at best and deceptive at worst. Given the positively oriented scoring system that HealthEngine runs, they could only warp a practice’s score upwards when these reviews should have been dragging it down.

Fairfax published only 30 examples of what it considered to be dodgy editing. It would have been informative to have the media group assess how many there were overall. But Fairfax did report that 53% of the 47,898 total reviews had been altered in some way.

Dr Tan initially maintained that what HealthEngine was doing was “not misleading”. But he quickly removed all the reviews and published what could be described as a “grovelling” apology. Yet if you read his notice to customers, it isn’t actually an apology for misleading anyone; it is an apology for the inconvenience caused by the removal of the reviews.

Dr Tan spends most of the words in his notice trying to explain what he was attempting to do, I guess as some sort of defence of a very bad mistake in his business processes.

Dr Tan told The Medical Republic he wouldn’t answer questions for this story because he was undertaking a very thorough review of what had happened and needed to understand it all better before he spoke publicly.

He is missing a trick. It’s easy to see what has happened, and whether what we got was his intention or not, he needs to own up to what is a very severe stuff-up by his company. It has ended up as a breach of trust for the practices and patients (no matter how small).

Dr Tan needs to tell everyone the full extent of what has been happening and how he intends to fix it.

That’s not easy, but it is PR 101 in an age where consumers have enormous ability to uncover wrongdoing, to make that transparent to many more consumers, and then to amplify their anger and punishment if the wrongdoers don’t respond quickly and with the utmost authenticity, integrity and transparency. It’s actually very harsh. But it is what it is. 

Having given Dr Tan and HealthEngine that serve, let’s return to what he was, I think, actually trying to achieve: some sort of  productive and positive compromise in what is a hugely complex, fast evolving and contentious world of patients rating medical services, including their doctors.

It is important to note HealthEngine does not try to actually rate doctors, but it does try to positively review practices. 

Dr Tan’s compromise goes like this:

Under AHPRA regulations, we are obliged to moderate positive feedback shared by patients to ensure that it is compliant. This often means removing clinical information, or comments that may identify the patient or a specific practitioner.

Negative feedback is not published but rather passed on confidentially and directly to the clinic completely unmoderated to help health practices improve moving forward.

We email all patients about their reviews being published and alert them to having possibly been moderated according to our guidelines. 

HealthEngine only publishes positive reviews, and does not publish a rating if a practice falls below the 80% mark. According to the company, only 8% of practices fail to achieve a result over 80%.

Dr Tan told The Medical Republic in a previous interview about the service and its positive-leaning philosophy that: “HealthEngine’s patient satisfaction level and reviews platform is designed to be positive and aspirational. We want to reward practices that demonstrate quality of service in healthcare.”

He also said that displaying results below 80% did not do anyone any good, and practices that rated below 80% got sent the information so they could act on it to get back over the mark and have their reviews displayed.

Given that HealthEngine makes its money from practices which either use its booking engine, pay for its directory leads for appointments, or both, you might question a review system that is positive-only. Isn’t there some conflict here?

Well, there is a little conflict for sure, and this is why many might look sceptically at how HealthEngine came to be editing reviews in such a manner. But if you look behind Dr Tan’s thinking and the peer-reviewed research on patient satisfaction mechanisms for doctors, his system, in theory, isn’t such a bad one.

To start with, Dr Tan’s assertion that very few reviews of the practices are actually negative (he says that only 6% of patients leave “very negative” or “somewhat negative” feedback) seems to be supported by more rigorous research into patient satisfaction with GPs. It has been that way for a very long time. Papers as far back as 1964 which look at GPs and patient satisfaction are relatively consistent in their findings, and the trends seem to cross international boundaries. Of the small percentage who criticise, most complained that their GP was overworked and therefore not listening to them. Sound familiar?

Then Dr Tan maintains that in his system all the unedited bad feedback is sent to the practices in a timely manner so they can take that feedback on board if they so desire.

If that’s the case, then I can’t help but wonder what the doctors were thinking when they read the published reviews that had been edited. My guess is most were so busy they never did. Regardless, Dr Tan’s system – sans the current editing issue – might be the best of a bad series of systems out there.

One thing that sets HealthEngine apart from nearly all the other public rating systems is that it verifies that each patient has actually seen a doctor at the practice being rated.

 HealthEngine can do this because it is the booking system, so it knows the patient visited that practice. And it takes that relationship and leverages it for the review. 

On top of that, HealthEngine is a GP- and healthcare-centric business. It is not run by a foreign corporation and management consultants. It is run by doctors, who understand the issues doctors face, especially around patient interaction.

HealthEngine should have been using this to its advantage in its rating system, but somewhere that has broken down. It will fix it, though.

No other rating system can qualify the reviewer, and reviewers often attempt to remain anonymous. Such systems, which include Google, Yelp, productreview.com.au, and even the dedicated GP-rating systems in Australia, can’t verify whether a reviewer is a real patient or not. This is a primary failing of these systems.

But any doctor-rating system is fraught with issues. In the US, doctor ratings still aren’t regarded as credible because of the complexity around attempting to rate medical services.

The US has several services that use performance data to attempt to bring credibility to their results, but as we are seeing in Australia, attempting to genuinely rank the performance of one doctor against another, even with detailed and hospital-based outcomes data, is very difficult.

So should we be even trying to rate our GP practices or our doctors? 

In this day and age, it doesn’t feel like doctors can escape some form of consumer oversight, whether formal ratings systems develop over time or not. In the end, doctors will be rated by patients via social networks.

So in some way the issue needs to be embraced by the profession; otherwise it will exist outside reasonable moderation by professionals who do understand the complexities of the job.

The New England Journal of Medicine recently published a review on the subject (N Engl J Med 2017; 376:197-199) that provides a good summary of where things are at, and how doctors should be viewing online reviews. 

Ironically for HealthEngine, the paper is titled: Transparency and Trust: Online Patient Reviews of Physicians. The authors conclude that online reviews are an important part of an evolving patient-centric world that doctors need to find a way to embrace. They don’t say it is easy. But they do give good reasons for doing it.

Author Vivian Lee writes: “Publicly available reviews can help address information asymmetry in the healthcare market and increase patients’ confidence in their own decisions. Collectively, by making clear their preference for higher-performing systems, patients can become a market force driving quality and value in healthcare.”

Lee goes on to say that a correctly run review system provides valuable feedback for doctors which they would not normally get during a visit, and that “healthcare systems and physicians who voluntarily share patient-review data visibly foster a spirit of trust with patients and the community”.

Combined with the rapidly expanding power of social media and digital access to information for consumers, it is very hard to argue with this logic.

The irony of “reviewgate” is that HealthEngine’s review system was conceived with the right logic and intention, and remains – provided Dr Tan oversees a thorough cleanout of the edited reviews and a full account of how they came about – the best of a bad set of choices.

Another irony might be that HealthEngine, with all its recovered original reviews, probably holds by far the best database of consumer sentiment around GPs.

If and when this scandal is sorted out, perhaps Dr Tan can put some of the data to good use by opening it up more to both the GP community and to patients to reveal how the two groups can interact better. It’s all about transparency and trust.

HealthEngine is in a position to restore that trust over time and do good with its data – not evil. Perhaps then some good can come out of this sorry saga.