AI-driven tools in healthcare: A visual guideline for trustworthy treatment decision support systems

Project website: https://www.um.es/astuteness/

Project lead: Jose M Juarez, University of Murcia

Participating universities: University of Murcia, Nantes Université, Linnaeus University, Semmelweis University

General Overview

The aim of this project is to deliver an analytical guideline, based on real use cases, to improve confidence in Clinical Decision Support Systems (CDSSs). This will be the first version of a guideline for trustworthy medical Artificial Intelligence. The guideline will be used as a tool to identify knowledge gaps and opportunities, leading to recommendations for further research.

Purpose and Significance

The unprecedented global pandemic and recent recession threats are severely affecting health systems and clinicians, increasing workloads and health problems. Despite professional fatigue and varying levels of experience, hospital doctors still make hundreds of decisions every day about the best treatment for their patients. An inappropriate therapeutic decision can have severe consequences for the patient and the wider hospital population. Well-known examples include drug-related problems (adverse drug reactions, overdosing, etc.) and antimicrobial resistance (the loss of antibiotic effectiveness). Maintaining the quality of care under these conditions and providing a resilient healthcare system is a major challenge.

Decision support tools currently on the market generally help by providing recommendations based on clinical guidelines. However, doctors still need to interpret these therapeutic suggestions and adapt them to local epidemiology and hospital conditions. Recent milestones in Artificial Intelligence, such as ChatGPT, have demonstrated the potential of Machine Learning to process massive amounts of information from Big Data. Nevertheless, concerns about the seemingly boundless potential of Machine Learning to handle clinical data without supervision are evident in both public scepticism and regulatory action within Europe.

Implementation Method and Timeline

The first project task will be to assess the scope for deep learning models to predict potential drug-related problems (DRPs). This will lead to a report on Supervised Deep Learning for DRPs.
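To illustrate what such a supervised model might look like, the following minimal Python (PyTorch) sketch trains one step of a feed-forward classifier on synthetic data. The feature encoding, network size, and labels are placeholder assumptions for illustration only, not the models the project will actually build.

    # Minimal illustrative sketch: a feed-forward network that flags potential
    # drug-related problems (DRPs) from a patient feature vector.
    # Feature names, dimensions, and labels are hypothetical, not project data.
    import torch
    import torch.nn as nn

    class DRPClassifier(nn.Module):
        def __init__(self, n_features: int = 32):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(n_features, 64),
                nn.ReLU(),
                nn.Linear(64, 1),  # single logit: probability of a DRP after sigmoid
            )

        def forward(self, x):
            return self.net(x)

    model = DRPClassifier()
    loss_fn = nn.BCEWithLogitsLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

    # One supervised training step on a synthetic batch (features + DRP label).
    x = torch.randn(16, 32)                   # e.g. encoded prescriptions, labs, demographics
    y = torch.randint(0, 2, (16, 1)).float()  # 1 = drug-related problem observed
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()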

The second task will be to explore the potential of pattern mining methods to extract underlying knowledge from a clinical dataset in order to tackle the problem of antimicrobial resistance. This task will also lead to a report. 
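For illustration, the following minimal Python sketch counts frequent co-occurrence patterns (e.g. microorganism, antibiotic, resistance outcome) in a handful of synthetic culture records. The records, support threshold, and item vocabulary are placeholder assumptions, not project data or the specific pattern mining method the project will adopt.

    # Minimal illustrative sketch: frequent itemset counting over culture records.
    # The records below are synthetic placeholders, not project data.
    from itertools import combinations
    from collections import Counter

    records = [
        {"E.coli", "ciprofloxacin", "resistant"},
        {"E.coli", "ciprofloxacin", "susceptible"},
        {"E.coli", "amoxicillin", "resistant"},
        {"K.pneumoniae", "ciprofloxacin", "resistant"},
    ]

    min_support = 0.5  # keep itemsets present in at least half of the records
    counts = Counter()
    for record in records:
        for size in (1, 2, 3):
            for itemset in combinations(sorted(record), size):
                counts[itemset] += 1

    frequent = {itemset: n / len(records)
                for itemset, n in counts.items()
                if n / len(records) >= min_support}
    for itemset, support in sorted(frequent.items(), key=lambda kv: -kv[1]):
        print(itemset, round(support, 2))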

In task three, mechanisms for measuring the trust placed in AI services will be proposed from a social perspective. This task will produce a report on the Measurement of Trust in AI, together with a pilot.

Finally, a guideline will be developed: the Trustworthy Artificial Intelligence Guideline in the Health and Wellness Context. The project will also develop a dissemination and communication strategy to share its findings. 

Expected Outcomes

The project will increase understanding of the benefits and limitations of Machine Learning for improving CDSSs for treatment problems. It will increase awareness of the importance of trust in AI-based systems. It will also pilot a collaborative methodology for analysing AI technology in healthcare, with the potential to be applied in other healthcare contexts and settings. In addition, by strengthening the partnership between healthcare systems in the EUniWell consortium, the partners hope that the project will lead to future collaborative projects and initiatives.