In an interview with Qaiser Malik, Living Room Health’s Clinical Director of Radiology and a renowned musculoskeletal radiologist, we discuss the use of AI in diagnostic imaging. The interview highlights how AI can reduce the routine workload in radiology, such as initial scan review, freeing doctors to spend more time interacting with patients, explaining findings and producing thorough reports.
How do you think the rise of AI will impact radiologists?
Do you think that AI being used more often in medicine and healthcare will decrease human contact?
So what would happen, and who would be accountable if AI is wrong?
Do you think it's possible to avoid machine bias in diagnostic imaging?
I think it’s all under research. Because AI is programmed by people, and people make mistakes, there is a risk of building bias into the algorithm. For example, if you train an algorithm on many chest X-rays from London and then deploy it in Scotland, you may get different outputs, because you have put bias into the algorithm. In London you’ve got a diverse, mobile population, with lots of migration and lots of ethnic minorities, whereas somewhere more rural might not have that same population, so the algorithm might perform differently. Those kinds of inherent issues need to be looked at, and we need to be aware of them.
I don’t know what the answer is. I think we need to keep doing more work and keep testing, and we need more regulation from the regulatory bodies. Organisations like the CQC or NHS England need to be properly involved, because at the moment we don’t have answers to some of those questions.