Despite the hype, AI could just be a solution in search of a problem.
The pandemic saw remarkable advances in health technology, as governments and institutions fast-tracked funding and decision-making to identify, control, treat and prevent covid.
Hundreds of these projects used artificial intelligence (AI) and machine learning (ML) to solve the myriad problems that emerged during covid. Many of them were tasked with uncovering patterns in radiology images, patient demographics, symptoms and histories, to inform diagnoses and predict patient outcomes.
But despite the hype, evidence has emerged suggesting that almost all of these projects were flawed. In the face of a pandemic, AI often turned out to be a solution in search of a problem.
That's not to say we should write AI off, says Dr Stefan Hajkowicz, Principal Scientist at CSIRO's Data61 and author of the best-selling book Global Megatrends, which predicts that AI, along with data science, computer vision and natural language processing, is one of the digital technologies that will become critical to most industries in coming decades.
While applications of AI to clinical support – particularly in the early days of covid – had unreliable outcomes, AI is making huge inroads into scientific endeavour, he says.
Stanford University's AI Index Report shows that AI-related publications on the open-access scholarly archive arXiv.org have increased sixfold since 2015, and that AI papers represented 3.8% of all peer-reviewed scientific publications worldwide in 2019, up from 1.3% in 2011.
AI's time is now, says Hajkowicz. In the past, sudden leaps in research activity and funding for artificial intelligence (most notably in 1974 and 1987) have been followed by an "AI Winter", in which these bubbles collapsed and it took years for the field to recover.
But he believes things have fundamentally changed: AI is here to stay. Data61 has developed Australia's Artificial Intelligence Roadmap and hosts the newly launched National Artificial Intelligence Centre.
"AI and machine learning are deeply embedded, not just in computer science but across all major fields of scientific research, and there's also across-the-board use of AI in industry," he says. "AI is helping us to solve problems that were previously unsolvable."
Where did it all go wrong?
So why the bad news for AI when it comes to covid? In the UK, the Centre for Data Ethics and Innovation (CDEI) found that, outside its utility in advancing research into tests and vaccines, AI did not play a major role in addressing covid.
A CDEI survey of over 12,000 people found that, instead, conventional data analysis of existing datasets provided the basis for much of the pandemic response, with these datasets made more useful by improved data-sharing agreements.
Similarly, a report issued in mid-2021 by the leading UK AI research group, the Alan Turing Institute, found that while pandemic responses saw increased data sharing, interdisciplinary collaboration and new data repository initiatives, covid AI tools were beset by incomplete and biased datasets.
Covid infection rates, for example, were likely significantly undercounted. In some areas, medical data reflected only more affluent communities with ready access to paid sick leave, tests and medical procedures.
These UK experiences were reflected in wider studies, such as a review in Nature Machine Intelligence, which analysed 62 different machine learning tools that used clinical images to detect and predict the progress of covid, and found that 55 of them had a high risk of bias due to incomplete or ambiguous data.
Another international review published in the BMJ, which assessed predictive models for covid diagnosis and prognosis constructed during the first year of the pandemic, found that all but two of the 232 prediction models had a high risk of bias.
"Unreliable predictions could cause more harm than benefit in guiding clinical decisions," the authors noted.
Outbreak alarm tools effective with AI
However, Australian academics Ian Scott and Enrico Coiera, writing in the MJA in October 2020, noted that the first alerts warning of the emergence of covid came via two AI-assisted early-warning models: on 30 December 2019 from the Boston-based HealthMap disease outbreak monitoring system, and on 31 December 2019 via Canada's BlueDot infectious disease risk platform.
"AI-assisted analysis and modelling have also helped reconstruct the progression of an outbreak, elucidate transmission pathways, identify and trace contacts, and determine real or expected impacts of various public health control measures," Scott and Coiera wrote.
The pair point out that the key lies in the data used to train these systems: AI outbreak-monitoring tools that used long-standing, well-curated datasets were more effective than those tasked with prognosis and clinical decision support for a little-known new disease, where datasets were hastily assembled in a fast-moving situation.
"Those trained on limited and unrepresentative data are susceptible to overfitting and can perform poorly on real-world datasets," they wrote.
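Their warning is easy to demonstrate. The sketch below – a hypothetical Python example using scikit-learn on entirely synthetic "patient" data, not any of the reviewed covid tools – shows how a flexible model trained on a small, skewed sample can score perfectly on its own data yet collapse on the wider population:

```python
# Illustrative only: overfitting to a limited, unrepresentative sample.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)

# The "real world": one clinical feature, a noisy binary outcome.
X_world = rng.normal(size=(5000, 1))
y_world = (X_world[:, 0] + rng.normal(size=5000) > 0).astype(int)

# A hastily assembled training set: 100 records drawn only from one
# extreme of the population (say, the sickest hospitalised patients).
skewed = X_world[:, 0] > 1.0
X_train, y_train = X_world[skewed][:100], y_world[skewed][:100]

# An unconstrained decision tree memorises the skewed sample perfectly...
model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
print("training accuracy:  ", accuracy_score(y_train, model.predict(X_train)))

# ...but its accuracy collapses on the full population it never saw.
print("real-world accuracy:", accuracy_score(y_world, model.predict(X_world)))
```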
Better training for better solutions
"Algorithms for AI can be trained using either supervised methods, which require human intervention, or unsupervised models, which essentially create their own patterns from a large dataset," explains Sammi Bhatia, a finalist this year in Australia's Women in AI Awards, and COO and co-founder of Sydney AI platform Medius Health.
"You need to start with the objective that you're trying to resolve, and then pick the right technology," she says.
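For readers new to the distinction, here is a minimal sketch of the two regimes Bhatia describes, again using scikit-learn on invented toy data (the features and labels are illustrative assumptions, not Medius Health's models):

```python
# Supervised versus unsupervised learning on the same toy dataset.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))      # 200 records, 3 features
y = (X[:, 0] > 0).astype(int)      # labels a human expert might supply

# Supervised: learn a mapping from features to the human-provided labels.
clf = LogisticRegression().fit(X, y)
print("supervised predictions:", clf.predict(X[:5]))

# Unsupervised: no labels at all; the model finds its own structure.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print("unsupervised clusters: ", clusters.labels_[:5])
```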
Medius has developed a data-rich medical AI platform that includes Quro, a free, consumer-facing AI clinical decision support tool trained on medical guidelines and journals.
Bhatia says that while AI clinical tools faced challenges during the pandemic, myriad other AI applications were very successful in the management of covid.
"Non-pharmaceutical interventions, or NPIs, for covid were well served by AI – from modelling the covid outbreak and detecting community hotspots to understanding the impact of lockdowns and analysing the effectiveness of other measures that were put in place."
Big Data behind the big AI push can lead to big problems
CSIRO's Dr Stefan Hajkowicz agrees that the drivers behind the current uptick in AI include the emergence of large-scale, open databases providing widespread access to huge volumes of information. These include genome repositories such as the Influenza Research Database and GISAID, established in 2008 to share avian influenza (bird flu) data.
GISAID gave researchers around the world free access to the SARS-CoV-2 genome sequence in January 2020, and remains a key hub for genomic epidemiology and data sharing during the covid pandemic.
"While big data can provide high volumes of information which can be used to train AI models, the downside is that it also has a lot of noise and error," says Hajkowicz.
He says there's now a strong preference for AI training to be done with "small data" – well-curated, better-managed datasets used to develop specialised, targeted applications.
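As a rough illustration of that trade-off – toy data again, not CSIRO's – a model trained on a small set of clean labels can beat one trained on a far larger set of noisy ones:

```python
# "Big but noisy" versus "small but curated" training data (synthetic).
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(7)

def cohort(n, label_noise=0.0):
    """Toy records: four features, outcome driven by the first two."""
    X = rng.normal(size=(n, 4))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)
    flips = rng.random(n) < label_noise       # mislabelled records
    return X, np.where(flips, 1 - y, y)

X_big, y_big = cohort(20000, label_noise=0.35)   # plentiful, but noisy labels
X_small, y_small = cohort(500)                   # small, well-curated labels
X_test, y_test = cohort(5000)                    # clean held-out evaluation set

for name, X, y in [("big, noisy:    ", X_big, y_big),
                   ("small, curated:", X_small, y_small)]:
    model = DecisionTreeClassifier(random_state=0).fit(X, y)
    print(name, round(accuracy_score(y_test, model.predict(X_test)), 3))
```

On a typical run the small, curated dataset wins comfortably, because the flexible model memorises the mislabelled records in the bigger set.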
The downside
There are real concerns globally about the risks of using AI in health: the unethical collection and use of health data, biases encoded in algorithms, and risks to patient safety, cybersecurity and the environment. Systems trained primarily on data collected from individuals in high-income countries may not perform well for individuals in low- and middle-income settings.
To address these concerns, the WHO earlier this year issued a guidance report, Ethics and Governance of Artificial Intelligence for Health.
The European Union's proposed Artificial Intelligence (AI) Act is a legal framework for the trustworthy application of AI, and aims to serve as a global standard for regulating the technology.
"The key is having AI that is transparent and explainable," says Hajkowicz. "As AI becomes more stable and it's demystified, it basically becomes more boring – and that is when it becomes useful, the risk is lowered and people can trust it."