Having begun a course on the Potential of AI in medicine (please see the ‘Online Courses’ page) this week, I thought that it would be apt to have a look at the issues and limitations surrounding the development of AI in medicine.
AI (artificial intelligence) and ML (machine learning) are opening up untapped and rapidly growing sources of data for patient benefit, including the potential to improve diagnostic accuracy, predict prognosis more reliably, target treatments, and increase the operational efficiency of health systems. But these developments are surrounded by considerable concern: at the moment they lack defined guidelines and rarely undergo the same degree of scrutiny as other medical interventions, such as those in pharmacology. Current policy and ethical guidelines for AI technology are lagging behind the progress AI has made in the health care field.
So, what regulation is there currently on health-related algorithms? Firstly, developers should ensure that their algorithm complies with the Medical Device Regulation, which until 2010 did not cover independent software products. This regulation covers any product claiming to have a medical purpose, including providing diagnostic information, making recommendations for treatment, or predicting the risk of disease. Next, developers must consult the Medicines and Healthcare products Regulatory Agency, which has published guidance covering the regulation in greater detail. If an algorithm falls within the remit of this regulation, the developer must seek regulatory approval or accreditation in the form of a ‘CE’ mark before marketing it, and must ensure that the device meets the relevant essential requirements before applying the mark. These requirements include:
Benefits to the patient shall outweigh any risks.
Manufacture and design shall take account of the generally acknowledged gold standard.
Devices shall achieve the performance intended by the manufacturer.
Software must be validated according to the gold standard, taking into account the principles of development lifecycle, risk management, validation, and verification.
Confirmation of conformity must be based on clinical data; evaluation of these data must follow a defined and methodologically sound procedure.
In addition, manufacturers are required to have post-market surveillance provisions in place to review experience gained from device use and to apply any necessary corrective actions.
Furthermore, the use of ML/AI algorithms might be regulated indirectly by other legislation or regulatory agencies. The highest-profile additional legislative framework to be aware of is probably the European Union’s General Data Protection Regulation (GDPR). Others include the United Kingdom’s Care Quality Commission, which is tasked with monitoring compliance with NHS Digital’s clinical risk management standards, a contractual requirement placed on developers providing services to the UK’s health service.
However, this isn’t considered enough. By posing 20 critical questions spanning issues of transparency, reproducibility, ethics, and effectiveness, the BMJ aimed to identify where further work is needed to build consensus on what constitutes acceptable practice. Encouraging patients, clinicians, academics, and healthcare decision makers of all kinds to ask these challenging questions will contribute to the development of safe and effective ML/AI-based tools in healthcare. Developing a definitive framework for effective and ethical research in ML/AI will involve many challenges, and those facing us with regard to AI in medicine are new and different. They include:
Finding common terminology, where key terms partly or fully overlap in meaning.
Balancing the need for robust empirical evidence of effectiveness against the risk of stifling innovation.
Managing the many open questions regarding best practices of development, the communication and reporting of results, and the role of different venues of communication.
Providing advice detailed enough to give non-experts actionable guidance.
Balancing the need for transparency against the risk of undermining intellectual property rights.
Addressing these challenges of transparency, reproducibility, ethics, and effectiveness is important in delivering health benefits from ML/AI.
Further work is also required to identify themes of algorithmic bias and unfairness and to develop mitigations that address them; to reduce brittleness and improve generalisability; and to develop methods for improved interpretability of machine learning predictions. If these goals can be achieved, the benefits for patients are likely to be transformational.
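One way to make the question of algorithmic bias concrete is to audit a model’s performance across patient subgroups. The sketch below is a minimal illustration of that idea, using entirely hypothetical labels, predictions, and group assignments rather than any real system or dataset:

```python
# Minimal sketch of a subgroup performance audit (hypothetical data).
# A large accuracy gap between groups is one signal of unfairness.

def subgroup_accuracy(y_true, y_pred, groups):
    """Return per-group accuracy for binary predictions."""
    totals, correct = {}, {}
    for truth, pred, group in zip(y_true, y_pred, groups):
        totals[group] = totals.get(group, 0) + 1
        correct[group] = correct.get(group, 0) + (truth == pred)
    return {g: correct[g] / totals[g] for g in totals}

# Hypothetical labels, predictions, and demographic groups.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

acc = subgroup_accuracy(y_true, y_pred, groups)
# Both groups score 3/4 here; in practice, any sizeable gap
# between groups would warrant investigation and mitigation.
```

In a real audit the groups would come from recorded patient characteristics, and accuracy would be only one of several metrics compared (error rates, calibration, and so on).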
Regulation that balances the pace of innovation with the potential for harm, alongside thoughtful post-market surveillance, is required to ensure that patients are neither exposed to dangerous interventions nor deprived of access to beneficial innovations. Mechanisms to enable direct comparisons of AI systems must be developed, including the use of independent, local, and representative test sets. Developers of AI algorithms must be vigilant to potential dangers, including dataset shift, accidental fitting of confounders, unintended discriminatory bias, the challenges of generalisation to new populations, and the unintended negative consequences of new algorithms on health outcomes.
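Dataset shift, one of the dangers listed above, can at least be screened for with simple distributional checks. The sketch below uses hypothetical patient ages and the standardised mean difference, a common screening heuristic rather than any regulator’s prescribed method, to flag when a deployment population looks different from the training population:

```python
# Minimal sketch of screening for dataset shift (hypothetical data).
# The standardised mean difference (SMD) compares a feature's
# distribution between cohorts; values above ~0.1 are often taken
# as a flag for further investigation.

from statistics import mean, pvariance

def standardised_mean_difference(train, deploy):
    pooled_sd = ((pvariance(train) + pvariance(deploy)) / 2) ** 0.5
    return abs(mean(train) - mean(deploy)) / pooled_sd

# Hypothetical patient ages in the training vs deployment cohorts.
train_ages = [34, 45, 52, 61, 48, 55, 40, 58]
deploy_ages = [68, 72, 75, 70, 66, 74, 71, 69]

smd = standardised_mean_difference(train_ages, deploy_ages)
if smd > 0.1:
    print(f"Possible dataset shift: SMD = {smd:.2f}")
```

A check like this would not catch every form of shift (for example, changes in the relationship between features and outcomes), which is why ongoing post-market surveillance matters.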
Other issues include the fact that data is balkanised along organisational boundaries, severely constraining the ability to provide services to patients across a care continuum, whether within one organisation or across several. Another is the difficulty of putting AI to successful use in front-line medicine: firstly, simply adding AI applications to a fragmented system will not create sustainable change; secondly, most healthcare organisations lack the data infrastructure required to collect the data needed to train algorithms optimally.
There are many other issues that I don’t have time to discuss in detail now, including who is to blame when a machine makes a mistake that ends in a death; or whether, while one might happily share photos with family on Facebook, one would feel confident letting AI screen those posts for depression or even suicide risk. Finally, in 2015 Jeremy Hunt, former UK Health Secretary, said that ‘we have the chance to make NHS patients the most powerful patients in the world – and we should leap at the opportunity.’ But this seems blind to reality: what about those patients who find technology a burden, or who don’t have the latest iPhone and a data plan?
Therefore, if developed in an appropriate manner, AI and ML in medicine will have a revolutionary impact; but regulation is required to ensure that these developments are generalised across the NHS, so that no one is left behind. Furthermore, they should be held to the highest standards of transparency, reproducibility, ethics, and effectiveness.