  • Saylee Jangam, Des Conroy & Saira Ghafur

Improving the adoption of AI: clinician perspectives

Clinicians expect sufficient regulation and robust validation of AI-based medical devices. There remains a need for AI-specific regulatory pathways and clear guidance on the safe and effective use of AI tools in healthcare.

Key messages:

  • Challenges to adoption of AI in healthcare include a lack of clinician confidence, concerns about data privacy and security, access to high-quality data, and integrating qualitative health data into AI algorithms.

  • Applications of AI discussed here include breast cancer screening tools and AI-assisted drug discovery. Generative AI has potential in diagnostics, treatment planning, and medical research.

  • Clinician confidence in AI technologies is crucial for successful adoption. Clinician concerns relate to a lack of training, efficacy and clinical evidence for AI tools.

Introduction

In recent years, the rapid evolution of artificial intelligence (AI) and generative AI (gen AI) has propelled innovation across multiple industries. In healthcare, novel AI solutions are being created to address some of the biggest challenges in the prognosis, diagnosis, and treatment of disease, as well as in clinician workflows and service improvement. In diagnostics, for example, radiological screening has seen some of the fastest growth in AI use in recent years. Other emerging uses of AI in healthcare include optimisation of reimbursement, clinical operations, drug discovery, and improvement in the quality and safety of care.¹

The adoption of AI in healthcare has been slower than in other industries, partly because of a lack of clinician confidence in AI solutions, concerns about data privacy and security, and the complexity of clinical decision making. Another hurdle that has slowed adoption is the task of integrating qualitative health data, such as clinical notes, into AI algorithms for clinical outputs. There are also challenges in reimbursing AI solutions in healthcare, as well as in getting these solutions through regulatory approval. Once approved, an AI solution needs to be implemented in existing clinical workflows, which requires change management and acceptance by clinical staff. These solutions require evidence for the underlying algorithm, but they also require evidence that, once deployed in a healthcare setting, they deliver the promised outcomes.


Applications of AI in healthcare

UK-based Kheiron Medical’s AI breast cancer screening tool, the Mammography Intelligent Assistant (Mia), is the first deep learning software developed by a UK company to receive the CE mark for a radiology application. Addressing the current workforce crisis in radiology and the screening backlog created by the COVID-19 pandemic, Mia is an AI platform that screens for cancerous tissue in mammography scans, reducing the workload of radiologists. In a study of 280,594 women, the Mia tool was used as a second reader and reduced the need for a radiologist’s second reading by 87%. Moreover, where the tool was used as an independent reader, the number of cases referred for further review and arbitration was reduced from 13% to 2%.²

AI has also been used to accelerate drug discovery and design by pharmaceutical companies such as Insilico Medicine, which has taken a novel drug target from the discovery phase to the preclinical candidate stage in 18 months, at reduced overall cost.³ To overcome the relatively slow development of anti-cancer drugs in the R&D pipeline, AI models can support and improve drug design by identifying and validating novel targets, designing inhibitors with demonstrated efficacy, and using high-throughput screening as a combinatorial approach to accelerate development.⁴


Applications of generative AI in healthcare

Building on this progress, the emergence of generative AI presents a compelling opportunity to transform diagnostics, treatment planning, medical research, and patient engagement. The hypothesised uses of generative AI are broad, ranging from generating synthetic patient data for the validation of AI tools to analysing continuous data from wearables to detect early signs of cardiovascular disease.


Challenges in adopting AI technologies

As promising as the prospects of AI in healthcare are, these technologies come with their own challenges. These include the need for clinician and patient engagement, access to sufficient high-quality data to build algorithms, data privacy and security, and robust quality and regulatory frameworks to ensure the safe use of AI in healthcare.


Clinician perspectives and engagement

A recent study by Health Education England found that clinician confidence in AI depends largely on how AI solutions are governed.⁵ Clinicians expect sufficient regulation and robust validation of AI-based medical devices; the study highlighted the need for a robust, AI-specific medical device regulation pathway and for guidance on the safe and effective use of AI tools. It also highlighted the need for formal evidence requirements in the validation of such tools, and for specific pathways for prospective clinical studies of these new technologies. Another challenge the study identified was mitigating model bias and ensuring fairness in AI-driven diagnostics and treatment by ensuring training data are representative of real-world populations.

Clinician confidence and trust in AI technologies are paramount to the successful adoption of AI tools in healthcare. A recent study by Ipsos, involving 3,427 clinicians across 20 countries, explored clinicians’ confidence in and willingness to adopt AI tools. More than two thirds (68%) of the physicians surveyed globally were excited about the role of AI in healthcare and thought that improved efficiency and accuracy in diagnosis would be a key benefit of these solutions.⁶ Globally, 80% of doctors believed that digital health can enable patients to proactively manage their health. However, 60% were concerned that patients may misinterpret data from wearable and connected health devices, highlighting the need for accessible clinical services to ensure that data from consumer-facing devices are correctly interpreted and effectively used.

While clinicians were excited about the future of AI in healthcare, only 31% had used an AI tool in practice in the previous year.⁶ Their main concerns about adopting AI technologies were a lack of training (62%), doubts about efficacy (48%), and a lack of clinical evidence validating these tools (45%). This highlights the importance of evidence generation for AI solutions, as well as the need for education and training of healthcare professionals (HCPs) to promote adoption. Educating both clinicians and patients on digital health and AI tools is equally important for adoption and effective use.

Clinician concerns about AI in healthcare: lack of robust regulation; lack of clinical evidence and guidelines; integration into clinical workflows; lack of education and training; data privacy and security.

Some of the challenges in clinician education around AI tools can be addressed by establishing effective relationships between industry innovators and HCPs, as reported by Health Education England.⁵ As with other digital solutions, bringing clinicians into the development and validation process for AI solutions improves their confidence in the technology, garners buy-in and, ultimately, improves engagement with these tools down the line. In large organisations like the NHS, which can be slow to adapt to technological change, thoughtful leadership, change management and a continuous education programme all play a role in improving confidence in novel technologies.

Ensuring that healthcare professionals are engaged with these tools and have confidence in them will ultimately lead to better care for patients and ensure that novel solutions are used to their full potential. Innovators need to keep this in mind as they develop their products and should consider learning tools to accompany the rollout of any new product.


Prova Health supports digital health innovators with evidence generation. To discuss how we can help with evidence generation for your digital solutions, email hello@provahealth.com


Saylee Jangam is Digital Health Consultant at Prova Health. Saylee is a biosensor and wearables researcher at Imperial College London and holds a BEng in Bioengineering from the University of Sheffield. As part of her PhD, she has led phase I clinical validation of wearable sensors and published results in high impact journals. She has experience with medical device R&D in the FMCG industry.


Dr Des Conroy is a Digital Health Consultant at Prova Health. As a medical doctor, he has worked in clinical practice in the UK and Ireland. He has experience developing and clinically validating artificial intelligence-based Software as a Medical Device (SaMD) products, and supporting their deployment at a global scale. At Prova Health, he has led research into evolving evidence standards and reimbursement models in digital health.


Dr Saira Ghafur is Co-founder and Chief Medical Officer of Prova Health. She is an honorary consultant Respiratory Physician at St Mary’s Hospital, London, and a digital health expert who has published on topics such as cybersecurity, digital health adoption and reimbursement, data privacy and commercialising health data. She is Co-founder of mental health start-up Psyma and holds a MSc in Health Policy from Imperial. She was a Harkness Fellow in Health Policy and Practice in New York (2017).

References:


