TU Berlin

Quality and Usability Lab


Clustering as a means to support decision making in artificial intelligence - Medical Use Case : Sepsis Prediction

Location:  Zoom link (Please ask Saman Zadtootaghaj for access)  

Date/Time: 23.11.2020, 14:30-15:00 

SPEAKER:  Priya Sivasubramanian (TU Berlin)

Abstract:  With the advent of Artificial Intelligence (AI), and in the context of Machine Learning (ML), remarkable solutions have been developed for many critical problems. Among others, prediction models have gained popularity in recent years. In the field of medicine in particular, predictions are used for early detection and diagnosis of life-threatening diseases, thanks to ML's capacity to analyze large volumes of real-time data. In spite of these promising solutions and their widespread adoption, most ML models are considered black boxes, because understanding and interpreting them is not intuitive. This makes it challenging to gain acceptance for ML models. For clinicians and patients in particular to rely on and trust the outcomes, it is important that they understand the predictions and the underlying reasoning. Although explainable AI (XAI) addresses the problem of explainability in ML models by improving interpretation and providing relevant model insights, XAI methods are themselves often considered complex. Using similar examples to achieve explainability is therefore a highly relevant alternative for predictive models, and it can be realized with clustering techniques. Hence, we propose a method that explains ML models by identifying similar patients through clustering.

The main focus of this work is to support decision making in AI applications based on patient similarity: patients with a similar outcome are identified based on the features that contribute to a particular disease. For experimentation, a medical use case, namely sepsis prediction, has been chosen. Sepsis prediction using ML has recently become an important research area, as it allows automated, scalable, and standardized detection of critical patients. For this use case, we propose a method that provides explainability through similar examples. The objective of this thesis is to achieve explainability of the prediction results of ML classifiers with the help of similar-patient clustering (i.e., finding similar groups of patients), thus making ML more understandable and adoptable in the application domain. This is achieved using KNN and a variation of KNN that takes into account the importance of relevant features. Experiments were conducted and the results evaluated using conventional performance metrics, namely accuracy, precision, and recall.
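As a rough illustration of the feature-importance-weighted KNN idea mentioned above, the sketch below retrieves the most similar patients to a query patient using a weighted Euclidean distance. All feature names, importance weights, and values are hypothetical examples, not taken from the thesis:

```python
import numpy as np

def weighted_knn_neighbors(X, query, weights, k=3):
    """Return indices of the k most similar patients to `query`,
    using a feature-importance-weighted Euclidean distance.
    `weights` stands in for per-feature importance scores
    (e.g. derived from a trained classifier)."""
    # Scale each feature by sqrt(importance) so that the squared
    # distance weights each feature by its importance.
    diff = (X - query) * np.sqrt(weights)
    dist = np.linalg.norm(diff, axis=1)
    return np.argsort(dist)[:k]

# Toy cohort: rows are patients, columns are hypothetical vitals
# (heart rate, temperature, lactate).
X = np.array([
    [95.0, 38.5, 2.1],
    [70.0, 36.8, 0.9],
    [110.0, 39.2, 3.5],
    [72.0, 37.0, 1.0],
])
weights = np.array([0.2, 0.3, 0.5])   # assumed feature importances
query = np.array([100.0, 38.9, 3.0])  # new patient to explain

neighbors = weighted_knn_neighbors(X, query, weights, k=2)
print(neighbors)  # indices of the two most similar patients
```

The retrieved neighbors could then be shown to a clinician as "similar patients" whose known outcomes support the classifier's prediction.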


