The rise of AI (artificial intelligence) in medicine is a subject that seemingly everyone is talking about and has a position on.
While the media make hay with eye-catching headlines about AI programs helping to formulate new antibiotic treatments, and academic papers highlight ChatGPT’s apparently superior bedside manner compared with human doctors when answering patients’ questions, the discourse around AI in medicine tends to generate more heat than light.
In some ways, the relationship between AI and healthcare is not a new one, with artificial intelligence having made its first forays into medicine during the last century through developments such as electronic health records and robot-assisted surgery.
With modern-day advances in technology, in particular the rise in the availability of big data, AI is likely to become an increasingly integral part of the planning and delivery of healthcare and the NHS as we move further into the 21st century.
This certainly appears to be the Government’s assessment, with the Department of Health using the 75th anniversary of the health service to announce £21m of ring-fenced funding, for which NHS trusts can bid to support the rollout of AI healthcare tools.
With algorithm-driven AI platforms already providing support to clinicians in areas such as triaging, radiology and imaging, and assessing patients’ risk of certain medical conditions, some of those championing artificial intelligence regard it as a potential cure-all for the many challenges and deficiencies blighting the NHS.
Yet with AI representing such a nascent and rapidly advancing area of technology, in medicine and all other areas of society, there are understandably many people with questions and concerns.
A motion brought before this year’s annual representative meeting in Liverpool, roundly endorsed by members, has committed the BMA to conducting an investigation to determine ‘the potential harms and drawbacks of the current and future use of AI in the delivery or monitoring of health’.
Buckinghamshire GP James Murphy presented the motion to ARM, although he is far from being the only doctor who is cautious and questioning about the implications of AI.
Dr Murphy says his own practice already employs an AI program to assist with patient triaging, but he is acutely aware of the potential risks to safety and patient data posed by poorly regulated or understood AI advances of the future.
‘AI can be a very powerful tool, it can do a lot of good, but we can’t afford not to have a grasp on it as an issue,’ he says.
‘This is going to be about people’s jobs, it’s going to be about patient safety and it’s going to be about the future of the NHS.’
While his practice’s experience of using an AI platform has been successful in lightening his and his colleagues’ workloads, Dr Murphy feels many doctors lack sufficient knowledge or understanding of AI applications to assess the benefits and risks of a particular platform or app.
He further worries that touting AI as a magic bullet for all the NHS’s problems reflects a dangerously complacent mindset, pointing out that the benefits of a more effective triage system are potentially squandered if there aren’t enough staff to see and treat the patients.
‘It [AI] can help mitigate some of the issues, it can help to channel things better, and utilise our resources as efficiently as possible, but it is not a panacea,’ explains Dr Murphy.
‘The fact of the matter is, we’re [GPs] massively stretched and if someone is coming to us and saying, “here is a cheap or free solution to help you to manage your workload”, I think this adds into the risk of things [being adopted] and not being properly scrutinised.
‘Perhaps the biggest point about AI is that we need to be sure we understand how the systems are making their decisions. Too often they become a black box where even the creators aren’t absolutely sure how the decisions are being produced.
‘My personal view is that the temptation to run away with this technology without implementing the sorts of internationally recognised safeguards the world is saying are needed to make up for the many, many, many failings of our health system [is] too tempting.’
Dr Murphy’s concerns are ones that many advocates for advancing the use of AI in medicine would recognise and accept as valid.
Hatim Abdulhussein is a practising GP working with NHS England as national clinical lead for AI and digital workforce, and as medical director for Kent Surrey Sussex Academic Health Science Network.
As part of his role nationally, Dr Abdulhussein was instrumental in helping to implement fellowships aimed at training healthcare professionals in digital technologies, one of the recommendations from the 2019 Topol Review which looked at how various technologies, including AI, were likely to affect future healthcare.

‘The way I see it is this technology is going to exist regardless, so it is up to us to assess how we use it,’ says Dr Abdulhussein.
‘The way I would describe AI is that it’s no different to what we already do in terms of evidence-based medicine, but it’s collating data on a much greater scale, and then making computational decisions on that data.’
Citing existing statistical-modelling tools such as QRISK and QFracture, which aim to assist doctors in predicting a patient’s risk of disease, Dr Abdulhussein says rapid developments in technology could soon see AI take a role in many other areas of healthcare.
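Tools in this family typically combine a patient’s risk factors into a single score and convert it to a probability. The sketch below is purely illustrative: the variable names and coefficients are invented for this example and bear no relation to the real QRISK algorithm, which is derived from large UK primary-care datasets and uses many more variables.

```python
import math

def predict_risk(age, smoker, systolic_bp):
    """Toy logistic risk model, in the spirit of tools like QRISK.

    The coefficients are made up for illustration only.
    """
    # Linear combination of (hypothetical) risk factors
    score = -7.0 + 0.08 * age + 0.9 * (1 if smoker else 0) + 0.02 * systolic_bp
    # The logistic function maps the score to a probability between 0 and 1
    return 1 / (1 + math.exp(-score))

# An older smoker with raised blood pressure scores higher
# than a younger non-smoker with normal blood pressure.
high = predict_risk(65, True, 150)
low = predict_risk(40, False, 120)
```

The point is that the model’s reasoning is fully inspectable: a clinician can see exactly which factors drive the score, which is precisely the transparency Dr Murphy warns can be lost in ‘black box’ AI systems.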
‘We’re seeing early innovation projects where AI is used to triage patients and also help to provide advice in what kind of support or treatment they have next. In all of these cases, you still must have the humans in the loop assessing and managing that process,’ he says.
‘In areas like stroke, we have seen the impact of being able to make faster decisions on whether to use thrombolysis on patients through the use of AI in imaging. We’re also seeing early pilots of AI being used in the detection of certain cancers as well as in detecting or differentiating normal and abnormal X-rays.
‘You see a growing list of technologies emerging in areas like remote monitoring, predictive and population health, and these will impact primary care because it will help improve services within the community.’
While eager to widen the conversation and increase understanding around AI in healthcare among health professionals, Dr Abdulhussein acknowledges the technology cannot be viewed as a silver bullet to all the existing deficiencies in delivery of healthcare.
He says that to utilise the potential of AI it will be necessary for the Government and NHS to overcome existing technological obstacles such as the long-standing paucity of interoperable IT systems between primary and secondary care.
He further adds that, while the current standards on the development and licensing of healthcare devices in the UK administered by the Medicines and Healthcare products Regulatory Agency are reliable and robust, he accepts that, with AI representing such a rapidly developing area of healthcare technology, keeping abreast of new platforms and devices will be crucial.
With the construction of AI algorithms dependent on access to vast amounts of data, and with many new platforms developed by academic and commercial organisations for the NHS, Dr Abdulhussein says having clear guidelines and processes is essential to establish how and what data was used in the development process.
‘If we’re going to allow academic and commercial organisations to access that data, what governance and data sharing agreements should be in place?’ he says.
‘What level of access to data should they get and what can they do with that data? It is really important to understand because we will need to work with these organisations in some way to improve patient care.
‘The potential of how quickly this transforms the way we work means we do need to have the ability to be able to regulate in this area efficiently. And we should work with and support our regulators to look at what these technologies might do and how they might change the way we work and practice.’
One potential solution to the use of data containing information that could identify patients is something known as synthetic data.
This artificially generated ‘fake’ data can be used at scale to train AI algorithms with no risk of compromising patient confidentiality, and with the bonus of eliminating sources of bias, such as the overrepresentation of one sex or ethnicity, that are sometimes present in real data.
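A minimal sketch of the idea is below. Real synthetic-data generators learn the joint distribution of genuine patient records (for example with generative models) and then sample new, artificial records from it; here, purely for illustration, the records are simply drawn from plausible distributions, with sex deliberately balanced so neither group is over-represented. All field names and value ranges are invented for this example.

```python
import random

def make_synthetic_patients(n, seed=0):
    """Generate artificial patient records for algorithm training.

    Illustrative only: no record corresponds to a real person,
    so nothing here can compromise patient confidentiality.
    """
    rng = random.Random(seed)  # seeded for reproducibility
    records = []
    for i in range(n):
        records.append({
            "sex": "F" if i % 2 == 0 else "M",  # enforce a 50/50 balance
            "age": rng.randint(18, 90),
            "systolic_bp": round(rng.gauss(125, 15)),
        })
    return records

patients = make_synthetic_patients(1000)
```

Because the generation process is under the developer’s control, known biases in the source data can be corrected at this stage rather than inherited by the trained algorithm.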
Mihaela van der Schaar is professor of machine learning, artificial intelligence and medicine at the University of Cambridge’s Centre for AI in Medicine.
During the early months of the pandemic Prof van der Schaar’s lab was responsible for creating an AI platform known as the Cambridge Adjutorium.
Drawing on anonymised patient data provided by Public Health England, the Adjutorium assisted emergency departments in predicting demand for ICU beds and ventilators, and thus in allocating resources more effectively, with Prof van der Schaar confident the basis of the platform has wider applications for the NHS.
Like Dr Abdulhussein, Prof van der Schaar believes that educating doctors about AI, and including them in its development, is critical to it having a successful effect on medicine.
‘I think a good analogy is like a pilot,’ says Prof van der Schaar.
‘The pilot needs to understand how to pilot the plane, but this does not mean that they’ve studied mechanical engineering, or aeronautical engineering. In a similar way, doctors need to understand enough about the AI to use it for their needs, and to develop it for their needs.’
While welcoming the Government’s apparent commitment to funding AI in healthcare, Prof van der Schaar says it is far more important at this stage to take steps to include clinicians in the design and commissioning of AI devices in healthcare.
‘That there is money for AI in medicine is fantastic, but I feel there needs to be much more clinician need first, and to have a much more comprehensive discussion about a variety of places where AI could help and come up with a good list of places where AI could help, which is much more clinician first and healthcare system first, rather than AI first,’ she says.
‘I really want to make this transformation of healthcare with AI for the better, not by marginalising clinicians, but empowering clinicians. We [the AI community] need to understand what is needed to develop the right tools, so we need doctors’ guidance.’
With the association’s review into AI in medicine still at a very early stage, many question marks remain over how this technology might affect the future of healthcare.
BMA board of science chair David Strain is adamant, however, that the profession and the patients they care for deserve answers: a deeper understanding of how the technology functions, and clarity on its implications for issues such as clinical accountability.
‘The use of AI in medicine and the delivery of healthcare is clearly going to be an ever more significant issue in the years to come,’ says Dr Strain.
‘Machine learning, whereby a computer works towards achieving a specific desired outcome based upon parameters set and refined by humans, has already made huge improvements to healthcare through robot-assisted surgery and retinopathy screening in diabetes.
‘It is important, however, that we differentiate this from artificial intelligence programs which reach conclusions based upon vast amounts of data, and through processes that are not always fully understood.
‘Going forward, the BMA will be taking a strong interest in understanding the implications of AI, how AI-based tools are validated and what their use within medicine might mean for doctors and for patients.’