No, this post isn’t about War of the Worlds or another pop-culture reference. It’s about Artificial Intelligence in healthcare. Possibly scarier than any sci-fi movie…?
My interest in talking about AI has little to do with the technology, and everything to do with its impact on people. I think the potential it has to advance the delivery of patient-centred care is very exciting. On the flip side, I also think it’s going to completely rock the world of the healthcare workforce, and I have some concerns about how that’s going to go down.
Over the past two weeks there have been two articles of Australian origin that piqued my interest. For a pleasant change, both of them are freely available (for the moment at least).
The first paper I want to highlight is from the MJA. It takes a look at some of the areas in health where AI has been making progress and discusses some of the challenges associated with it.
There are three areas discussed:
- AI-assisted image interpretation e.g. image-based diagnosis in radiology, pathology and endoscopy
- AI-assisted diagnosis e.g. analysis of ECGs
- AI-assisted prediction and prognostication e.g. use of clinical datasets, genomic information and medical images
Some of the challenges they bring attention to:
- Health disparities, excluded populations and data biases - the algorithms will only be as good as the quality of the information they are trained upon (this is discussed in more detail here)
- Data sovereignty and stewardship - most of the progress in AI is within the private sector, but the clinical data is within the public sector. Solid governance is crucial.
- Changing standards of care - “if AI keeps its promise of benefit and it is integrated more into practice, standards of care must require AI use, and traditional forms of therapeutics will be forced to change…those who refuse to partner with AI might be replaced by it”
- Legal responsibility for AI-caused injury - medico-legal considerations as responsibility becomes shared between human doctor and AI
The second study delves a bit deeper into those last two points (changing standards of care and legal responsibility) by exploring the views of GPs on the potential role of AI in primary care. It’s a qualitative study from Macquarie University’s Australian Institute of Health Innovation, published in the Journal of the American Medical Informatics Association.
In the study, GPs were asked to participate in a co-design workshop, part of which included watching this 2-minute video from Microsoft about its prototype, which captures doctor-patient interactions to document medical notes. Watch it and you will see that this isn’t just a fancy dictation system…
They found three main themes:
- Professional autonomy should be a system design consideration - GPs were comfortable with AI undertaking the role of assistant, but less so with AI undertaking the role of auditor or supervisor, given how this may influence their practice and the medico-legal implications associated with it.
- Different human-AI collaboration models need to be investigated - “accurate diagnosis is only part of a patient’s care: human communication is needed for different patient cases and sensitive situations” (this gets back to the image from the first paper about levels of AI automation)
- AI can support new models of care - distributed, mobile and virtual models of care. E.g. pre-consultation activities in which AI gathers and analyses information to generate a summary of the patient’s reason for visiting, with a preliminary assessment (this reminds me a lot of when I worked in a multidisciplinary assessment clinic…)
What both of these papers say to me is that AI is poised to make a massive impact in the area of clinical decision support, one that will be felt across the entirety of the health ecosystem. There is work to do in figuring out how and to what extent that happens, but it is going to happen.
It makes me wonder what impact it’s going to have on multidisciplinary team care - is it likely to enhance it or detract from it? Will the reduction in administrative work translate to an increase in collaboration occurring in primary care? Or, given that professional turf wars are already so ingrained within the sector, will consulting AI be seen as more desirable than seeking input from other professions or disciplines?
What do you think?