Not without a morale-sapping level of surveillance, tempting as it is to seek automated solutions to human failures.
The crimes of convicted UK serial killer Lucy Letby shocked the world, devastated the families of her victims and undermined confidence in a healthcare bureaucracy that allowed her to continue killing and harming babies, long after she should have been detected and stopped.
The failings at the Countess of Chester Hospital, in the north of England, where she worked, were undoubtedly human.
Central to the tragedy was a refusal by senior decision makers to believe that a woman trained to care for grievously ill newborn babies was capable of murdering and injuring them.
Of course, there is no completely foolproof way to ensure that disturbed and deranged people can never inveigle themselves into situations where they can perpetrate harm. Human nature will always ensure that the most deceitful perpetrator has an advantage over even the most sceptical gatekeeper.
Letby exploited not only the naivety of a credulous employer, but also the hospital's craven refusal to countenance the possibility that it had a serial killer on its staff, through fear of the reputational damage that would ensue.
Healthcare professionals who kill patients are, thankfully, extremely rare, but this case has implications for hospitals and health authorities throughout the world.
The forthcoming public inquiry will consider all of these issues and it will, in time-honoured fashion, publish a series of recommendations aimed at preventing a repeat of the Letby case.
Doubtless, many of the proposed changes will concern tightening procedures around recruitment, management and clinical oversight. But given the limitations of altering human nature, how effective can they truly be?
Could medical technology have been better deployed to detect what Letby was doing? And could advances in artificial intelligence and diagnostics hold the key to preventing a repeat of her grim campaign of carnage?
One of the first questions the inquiry will consider is Letby's character. Was she a one-off, or was there anything in her past, or in the patterns of her behaviour, to indicate that she was capable of murder?
While crimes like Letby's are rare, it is doubtful that any form of psychological profiling could have identified the risk in employing her, according to Dr Marissa Harrison, a professor of psychology at Penn State Harrisburg.
In a profile of "typical" female serial killers compiled by Harrison and her team for The Journal of Forensic Psychiatry & Psychology in 2015, nearly 40% of the killers studied were nurses, nurses' aides or other healthcare workers.
Their analysis showed that a female serial killer was likely to be white, Christian, average-looking or attractive, and in her twenties or thirties: very similar to Letby, but also to many of her colleagues.
So, while profiling may have identified Letby as a potential serial killer, it would also have identified many other nurses or medical professionals who would never dream of harming another human being.
But could advances in medical technology and diagnostics offer more effective possibilities to identify signs of malicious intent among healthcare staff, ensuring early intervention to prevent patient harm?
In the context of preventing hospital scandals, such technologies could play a pivotal role in identifying potential signs of harm or negligence. Automated systems can already continuously monitor patient vitals, medication administration and treatment responses, alerting clinicians to deviations from expected patterns.
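As a purely illustrative sketch of that principle (the data, window size and threshold below are invented, not any real hospital's system), such a monitor can be reduced to flagging readings that fall far outside a patient's recent baseline:

```python
# Illustrative sketch only: flag vital-sign readings that deviate
# sharply from a patient's rolling baseline. Window and threshold
# values are invented for the example, not clinical recommendations.
from statistics import mean, stdev

def flag_anomalies(readings, window=20, z_threshold=3.0):
    """Return indices of readings far outside the rolling baseline."""
    alerts = []
    for i in range(window, len(readings)):
        baseline = readings[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and abs(readings[i] - mu) / sigma > z_threshold:
            alerts.append(i)
    return alerts

# A stable neonatal heart-rate trace with one abrupt, collapse-like drop
heart_rate = [140 + (i % 3) for i in range(30)] + [70, 140, 141, 139]
print(flag_anomalies(heart_rate))  # -> [30], the sudden drop
```

A real bedside system is far more sophisticated, but the underlying idea is the same: the software never diagnoses, it simply refuses to let an abnormal pattern pass unremarked.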
Letby killed and harmed babies by injecting them with air and insulin and by overfeeding them. Dr Dewi Evans, a clinical expert who provided medical evidence that resulted in Letby's conviction for seven murders and six attempted murders, reviewed clinical notes of more than 30 babies who had either died or collapsed between January 2015 and July 2016.
In most cases, the cause of death was identified as natural and explicable, for example following a haemorrhage or infection, or because of a congenital problem.
However, in the cases of 15 babies, their collapse was unexpected and could not be explained as natural.
Several of the babies had evidence of an air embolism, as if someone had injected air directly into their circulation. Others displayed signs of having had milk, or milk and air, injected directly into their stomachs; some had high levels of insulin in their systems, or showed signs of direct trauma, with traces of blood found around their mouths or at the back of their throats.
Dr Evans said staff were alerted to the insulin poisoning only after re-examining a set of twins who had been harmed in other ways.
He said: "This was a complete shock to me and a complete surprise, but it was quite important because, at last, one could find some kind of smoking gun.
"In other words, one could now show that someone was causing harm to babies, whereas with the injection of air, unless someone sees you doing it, it would have been difficult to prove that these babies were placed in harm's way."
It is clear that earlier detection of irregularities in medication administration, patient responses or mortality rates should have triggered investigations sooner, prompting timely action to prevent further harm.
Additionally, an earlier, closer examination of electronic health records could have allowed faster tracking of patient care, creating an audit trail for every action taken by medical professionals.
Machine learning, in such cases, will become ever more valuable because algorithms can analyse vast datasets to detect subtle patterns that might indicate foul play or negligence. This proactive approach enhances patient safety and fosters accountability within healthcare systems.
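To make that concrete, here is a hypothetical sketch (roster, events and staff names all invented) of the simplest such pattern: how often each member of staff was on duty when an unexplained collapse occurred. A rate far above colleagues' can only ever be a prompt for human review, never proof of wrongdoing.

```python
# Hypothetical sketch: compare how often each staff member was on
# duty when an unexplained adverse event occurred. All data invented;
# a high rate is a flag for review, not evidence of guilt.
from collections import Counter

shifts = {  # shift id -> staff on duty
    1: {"A", "B"}, 2: {"A", "C"}, 3: {"A", "B"},
    4: {"B", "C"}, 5: {"A", "C"}, 6: {"A", "B"},
}
event_shifts = {1, 2, 3, 5, 6}  # shifts with an unexplained collapse

events_present = Counter()
shifts_worked = Counter()
for shift_id, staff in shifts.items():
    for member in staff:
        shifts_worked[member] += 1
        if shift_id in event_shifts:
            events_present[member] += 1

for member in sorted(shifts_worked):
    rate = events_present[member] / shifts_worked[member]
    print(f"{member}: present at {events_present[member]} events "
          f"across {shifts_worked[member]} shifts ({rate:.0%})")
```

In a real system, such a table would feed a formal statistical test and expert clinical review, since rosters, case mix and shift patterns can all produce entirely innocent correlations.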
However, while the prospect of using medical technology to identify malicious intent is promising, it also presents challenges.
Behaviour analysis and predictive modelling can flag unusual activities, sudden changes in performance, or unexplained patient outcomes that might warrant investigation. For instance, a sudden increase in medication errors, unauthorised access to sensitive patient information, or frequent deviations from established protocols could signal potential wrongdoing.
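A hedged sketch of how such flags might be raised in practice, using an invented audit-log format and invented thresholds rather than any real EHR vendor's schema:

```python
# Illustrative sketch: rule-based checks over a hypothetical EHR audit
# log. Log format, rules and thresholds are invented for the example.
from collections import Counter

audit_log = [  # (staff_id, action, patient_id, authorised)
    ("N1", "med_admin",   "P1", True),
    ("N1", "record_view", "P9", False),  # access outside own caseload
    ("N2", "med_admin",   "P2", True),
    ("N1", "record_view", "P8", False),
    ("N1", "med_error",   "P3", True),
    ("N1", "med_error",   "P4", True),
]

unauthorised = Counter(e[0] for e in audit_log if not e[3])
med_errors = Counter(e[0] for e in audit_log if e[1] == "med_error")

for staff in set(unauthorised) | set(med_errors):
    if unauthorised[staff] >= 2 or med_errors[staff] >= 2:
        print(f"Review flag: {staff} ({unauthorised[staff]} unauthorised "
              f"views, {med_errors[staff]} medication errors)")
```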
Nevertheless, technology-driven solutions are not foolproof. Context matters, and deviations from norms may arise due to genuine reasons such as workload, fatigue, or personal issues.
Technology should serve as a tool to raise red flags, rather than as irrefutable evidence of malicious intent. Human oversight and judgment remain crucial in interpreting data and making informed decisions.
The quest to prevent future hospital scandals through technology intersects with fundamental issues of trust, privacy and ethical considerations.
Surveillance and continuous monitoring of healthcare professionals might be perceived as an infringement of their privacy and autonomy, raising concerns about creating a culture of distrust within medical institutions.
The question arises: can we ensure patient safety without sacrificing the rights and dignity of innocent medical professionals?
Implementing stringent monitoring measures could indeed erode trust and compromise the morale of dedicated hospital workers. Healthcare professionals must feel empowered to make decisions based on their expertise and judgment, free from the constant fear of surveillance. Striking the right balance between oversight and professional autonomy is imperative.
In seeking to single out individuals with malicious intent, there is a potential trade-off between patient safety and the freedom of innocent staff to perform their duties effectively.
Imposing strict monitoring and surveillance could create a hostile work environment, hindering collaboration and the delivery of quality care. Furthermore, an excessive focus on preventing scandals might divert resources from addressing systemic issues such as understaffing, inadequate training, and work-related stress that can contribute to errors and patient harm.
While technology can aid in identifying anomalies and deviations from the norm, it should be accompanied by a comprehensive approach that emphasises education, training, and fostering a culture of open communication.
Rather than relying solely on surveillance, healthcare institutions should invest in creating an environment where practitioners feel comfortable reporting concerns and mistakes without fear of retribution.
Ivor Campbell is chief executive of Snedden Campbell, a UK specialist recruitment consultant for the medical technology industry.