Abstract
Explainable Artificial Intelligence (XAI) plays a crucial role in medical imaging, where AI systems are used for clinical decision support and diagnosis. XAI aims to develop approaches that make machine learning (ML) models more transparent and interpretable, facilitating human-AI collaboration and improving trust. In medical imaging, early prediction of anomalies is vital, and understanding how an AI system reaches its decisions is therefore crucial. Saliency maps highlight the regions of an image that most influence a model's prediction and have been found to be a user-friendly explanation method for deep learning-based imaging tasks; they are widely used across many application domains. Methods for generating saliency maps can be categorized by the type of analysis they perform and by when the explanation is produced. Ad-hoc methods are model-specific, while ante-hoc and post-hoc methods are independent of the model architecture. Post-hoc methods, such as activation-based, perturbation-based, and gradient-based methods, are commonly used for generating saliency maps. In this case study, we focus on the application of gradient-based saliency maps to Magnetic Resonance Imaging (MRI) scans to provide insights into brain tumor classification. To this end, we implemented a convolutional neural network (CNN) model, trained it on a benchmark brain MRI dataset, and generated saliency maps for its predictions. The results reveal that the tumor and its surrounding pixels play a significant role in the classification of brain MRIs, highlighting the importance of tumor shape in the classification process. Understanding these underlying mechanisms enhances the robustness, reliability, and accountability of AI systems used in brain tumor detection and classification.
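As a concrete illustration of the post-hoc, gradient-based approach described above, the sketch below computes a vanilla gradient saliency map: the gradient of the predicted class score with respect to the input pixels, whose magnitude indicates how strongly each pixel influences the prediction. The CNN architecture, the 224x224 single-channel input, and the two-class setup are illustrative assumptions, not the paper's exact model or dataset.

```python
# Minimal sketch of a gradient-based (vanilla) saliency map in PyTorch.
# The small CNN and input shape below are placeholders, not the paper's model.
import torch
import torch.nn as nn


class SmallCNN(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 56 * 56, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))


def saliency_map(model: nn.Module, image: torch.Tensor) -> torch.Tensor:
    """Return |d(class score)/d(pixel)| for the predicted class of one image."""
    model.eval()
    x = image.unsqueeze(0).requires_grad_(True)   # shape: (1, 1, 224, 224)
    scores = model(x)
    top_class = scores.argmax(dim=1)
    scores[0, top_class].backward()               # gradient of the top class score
    return x.grad.abs().squeeze(0).squeeze(0)     # per-pixel importance, (224, 224)


if __name__ == "__main__":
    model = SmallCNN(num_classes=2)               # e.g., tumor vs. no tumor (placeholder)
    mri_slice = torch.rand(1, 224, 224)           # stand-in for a preprocessed MRI slice
    smap = saliency_map(model, mri_slice)
    print(smap.shape)
```

In practice, the resulting map would be overlaid on the MRI slice to visualize which regions, such as the tumor and its boundary, drive the classification.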
| Original language | English |
| --- | --- |
| Title of host publication | Irish Machine Vision and Image Processing Conference 2023 (IMVIP2023) |
| Volume | 5 |
| Publication status | Published - 2023 |