Explainable AI in medical image processing for health care

Document Type

Book Chapter

Publication Title

Explainable Artificial Intelligence in Healthcare Systems

Abstract

The integration of artificial intelligence (AI) techniques into medical image processing has revolutionized healthcare. However, the lack of interpretability and transparency in traditional AI models has hindered their widespread adoption in clinical practice, and predictions made by classifiers on image data are particularly difficult to interpret. Explainable AI (XAI) addresses this problem by providing interpretable explanations for AI-based decisions, and has therefore gained significant attention in healthcare, particularly in medical image processing. This chapter investigates explainable AI methods in healthcare, focusing on their role in medical image processing, and applies XAI approaches such as LIME and Grad-CAM to a biopsy dataset. On this dataset, the random forest classifier achieved 96.54% accuracy, outperforming Naive Bayes. Among the XAI methods applied to VGG-16, Grad-CAM was the fastest and LIME the slowest, and VGG-16 was faster than ResNet-50 for image classification. The resulting explanations enable healthcare professionals to understand the decision-making process of AI models, leading to increased diagnostic accuracy, improved decision-making, and enhanced trust in AI systems. The chapter concludes by outlining future research directions and recommendations for integrating explainable AI in healthcare, emphasizing the need for interdisciplinary collaboration, standardized evaluation metrics, and regulatory frameworks to ensure the responsible and effective use of XAI in medical image processing.

First Page

235

Last Page

254

Publication Date

4-5-2024

