AI won’t replace you, but it may help you

The first generation of medical artificial intelligence systems is already rolling out to clinics. What are the pros and cons?


In the next few years, you will probably have your first interaction with a medical artificial intelligence (AI) system.

The same technology that powers self-driving cars, voice assistants in the home, and self-tagging photo galleries is making rapid progress in the field of health care, and the first medical AI systems are already rolling out to clinics.

Thinking now about the interactions you will have with medical AI, the benefits of the technology, and the challenges you might face will prepare you well for your first experience with a non-human health care worker.

The technology behind these advances is a branch of computer science called deep learning, an elegant process that learns from examples to understand complex forms of data. Unlike previous generations of AI, these systems are able to perceive the world much like humans do, through sight and sound and the written word.

While most people take these skills for granted, they play a major role in human expertise in fields like medicine. Since deep learning grants computers these abilities, many medical tasks are now being solved by artificial intelligence.
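
To make the idea concrete, here is a minimal, purely illustrative sketch, in Python with the PyTorch library, of how a deep learning classifier "learns from examples": it is shown labelled images and its internal parameters are nudged repeatedly to reduce its prediction error. The tiny network and invented data below are assumptions for the example only, not any of the real systems mentioned in this article.

```python
# Illustrative sketch: a deep learning classifier learning from labelled examples.
import torch
import torch.nn as nn

# Invented stand-in data: 64 tiny greyscale "retinal photos", each labelled
# 0 (healthy) or 1 (diseased). A real system trains on huge numbers of
# expert-graded images.
images = torch.randn(64, 1, 32, 32)
labels = torch.randint(0, 2, (64,))

model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(8 * 16 * 16, 2),           # two outputs: healthy vs diseased
)

optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(10):                   # repeatedly show the labelled examples
    optimiser.zero_grad()
    loss = loss_fn(model(images), labels) # how wrong are the current guesses?
    loss.backward()                       # work out how to nudge each parameter
    optimiser.step()                      # ...and nudge it
```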

In the last 12 months, researchers have revealed computer systems that can diagnose diabetic eye disease, skin cancer, and arrhythmias at least as well as human doctors. These examples illustrate three ways patients will interact with medical AI in the future.

The first of these three ways is the most traditional, and will occur where specialised equipment is needed to make a diagnosis. You will make an appointment for a test, go to the clinic and receive a report. While the report will be written by a computer, the patient experience will be unchanged.

Google’s diabetic eye disease AI is an example of this approach. It was trained to recognise the leaky, fragile blood vessels that occur at the back of the eye in poorly controlled diabetes, and the AI is now working with real patients in several Indian hospitals.

The second way of interacting with medical AI will be the most radical, because many diagnostic tasks don’t need any special equipment at all. The Stanford team that created a skin cancer detector as accurate as dermatologists is already working on a smartphone app.

Before long, people will be able to take their own skin lesion selfies and have their blemishes analysed on the spot. This AI is leading the race to become the first app that can reliably assess your health without a human doctor involved.

The third method of interaction is somewhere in between. While detecting abnormal heart rhythms requires an electrocardiogram (ECG), these sensors can be incorporated into cheap wearable technology and connected to a smartphone. A patient could wear a monitor every day, record every heartbeat, and only occasionally see their doctor to review the results. If something serious occurred and the rhythm changed suddenly, the patient and their doctor could be notified immediately.
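
As a rough illustration of that third pattern, the sketch below (in Python) imagines a smartphone app receiving beat-to-beat intervals from an ECG wearable and raising an alert when the rhythm suddenly becomes irregular. The threshold and the send_alert hook are hypothetical, chosen only to show the shape of the idea, not how any real monitor works.

```python
# Purely illustrative sketch of wearable rhythm monitoring, not a real product.
from statistics import mean, stdev

def send_alert(message: str) -> None:
    # Hypothetical hook: a real app would notify the patient and their doctor.
    print("ALERT:", message)

def check_rhythm(rr_intervals_ms: list[float], window: int = 30) -> None:
    """Flag highly irregular beat-to-beat timing, a crude proxy for arrhythmia."""
    recent = rr_intervals_ms[-window:]
    if len(recent) < window:
        return                               # not enough beats recorded yet
    variability = stdev(recent) / mean(recent)
    if variability > 0.2:                    # threshold chosen only for illustration
        send_alert(f"Irregular rhythm detected (variability {variability:.0%})")

# Example: mostly steady beats (~800 ms apart), then a sudden irregular run.
beats = [800.0] * 40 + [650.0, 1100.0, 700.0, 1200.0, 600.0] * 4
check_rhythm(beats)
```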

Many groups are working to bring medical wearables like these into the clinic now.

What are the benefits?

These systems are incredibly cheap to run, costing a fraction of a cent per diagnosis. They have no waiting lists. They never get tired or sick or need to sleep. They can be accessed anywhere with an internet connection.

Medical artificial intelligence could lead to accessible, affordable health care for everyone.

What are the downsides?

The largest concern is probably unrealistic expectations, created by the hype surrounding the technology. The enormous volume of carefully and expensively curated data required to train a system that can do everything a doctor can is currently far beyond our reach. Instead, we will see narrow systems performing individual tasks for the foreseeable future. To counter these inflated expectations, we need to promote informed voices in these discussions.

The privacy of our medical data will also be a challenge. Not only will many of these systems run in the cloud, but some forms of useful medical data are inherently identifiable. You can’t blur out a patient’s face, for instance, if the system analyses the face for signs of disease. High profile data breaches will be inevitable and will harm confidence in the technology.

The other major issue is the problem of accountability. Who is responsible for a medical error if no doctor is involved in the diagnosis, and we can’t even tell why the system got it wrong? Who do we blame when a doctor accepts the wrong recommendation of an AI? Patient advocates, doctors, governments, and insurance companies are wrestling with this issue, but we don’t have good answers yet.

Medical AI is coming, and you will experience it soon. Most of it will be invisible, working behind the scenes to make your care cheaper and more effective. Some of it will be in your hands, assessing your health at the touch of a button. The best thing to do right now is think about the various challenges, and be prepared for your first appointment.

Luke Oakden-Rayner is a radiologist and PhD candidate at the University of Adelaide.

Disclosure statement: Luke Oakden-Rayner does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond the academic appointment above.

This article was originally published on The Conversation. Read the original article
