Enhancing brain tumor detection through custom convolutional neural networks and interpretability-driven analysis

Kavinda Ashan Kulasinghe Wasalamuni Dewage, Raza Hasan, Bacha Rehman, Salman Mahmood

Research output: Contribution to journal › Article › peer-review


Abstract

Brain tumor detection is crucial for effective treatment planning and improved patient outcomes. However, existing methods often face challenges such as limited interpretability and class imbalance in medical imaging data. This study presents a novel custom Convolutional Neural Network (CNN) architecture specifically designed to address these issues by incorporating interpretability techniques and strategies to mitigate class imbalance. We trained and evaluated four CNN models (the proposed CNN, ResNetV2, DenseNet201, and VGG16) on a brain tumor MRI dataset, employing oversampling techniques and class weighting during training. Our proposed CNN achieved an accuracy of 94.51%, outperforming the other models in terms of precision, recall, and F1-score. Furthermore, interpretability was enhanced through gradient-based attribution methods and saliency maps, providing valuable insights into the model’s decision-making process and fostering collaboration between AI systems and clinicians. This approach contributes a highly accurate and interpretable framework for brain tumor detection, with the potential to significantly enhance diagnostic accuracy and personalized treatment planning in neuro-oncology.
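
To make the two techniques named in the abstract concrete, the sketch below illustrates inverse-frequency class weighting during training and a vanilla gradient-based saliency map at inference time. It is a minimal illustration only, not the paper's implementation: the layer configuration, image shape, class count, and all function names are assumptions, and TensorFlow/Keras is assumed as the framework.

```python
# Hypothetical sketch: class-weighted training and a gradient-based saliency map
# for a small custom CNN. Architecture, shapes, and class count are assumptions.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 4            # assumption: e.g. glioma, meningioma, pituitary, no tumor
IMG_SHAPE = (224, 224, 1)  # assumption: grayscale MRI slices

def build_custom_cnn():
    """A minimal stand-in for the paper's custom CNN (details not given here)."""
    return models.Sequential([
        layers.Input(shape=IMG_SHAPE),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])

def class_weights_from_labels(labels):
    """Inverse-frequency class weights, one way to counter class imbalance."""
    counts = np.maximum(np.bincount(labels, minlength=NUM_CLASSES), 1)
    total = counts.sum()
    return {c: float(total / (NUM_CLASSES * counts[c])) for c in range(NUM_CLASSES)}

def saliency_map(model, image):
    """Gradient of the top-class score w.r.t. input pixels (vanilla saliency)."""
    x = tf.convert_to_tensor(image[None, ...], dtype=tf.float32)
    with tf.GradientTape() as tape:
        tape.watch(x)
        preds = model(x)
        score = preds[0, tf.argmax(preds[0])]
    grads = tape.gradient(score, x)
    # Per-pixel importance: max absolute gradient across channels.
    return tf.reduce_max(tf.abs(grads), axis=-1)[0].numpy()

if __name__ == "__main__":
    # Synthetic placeholder data; the study uses real MRI images and labels.
    x_train = np.random.rand(32, *IMG_SHAPE).astype("float32")
    y_train = np.random.randint(0, NUM_CLASSES, size=32)

    model = build_custom_cnn()
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(x_train, y_train, epochs=1,
              class_weight=class_weights_from_labels(y_train))

    importance = saliency_map(model, x_train[0])
    print("saliency map shape:", importance.shape)
```

In practice the saliency map would be overlaid on the original MRI slice so a clinician can see which regions drove the prediction; the paper additionally applies oversampling, which this sketch omits.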
Original language: English
Number of pages: 32
Journal: Information (Switzerland)
Volume: 15
Issue number: 10
DOIs
Publication status: Published - 18 Oct 2024
