An Explainable Deep Learning Framework for Medical Image Diagnosis Using Attention Mechanisms
DOI:
https://doi.org/10.9999/ijair.v1i1.6
Keywords:
medical imaging; explainable AI; attention; weakly supervised localization; uncertainty.
Abstract
Attention mechanisms are widely used to improve the performance of deep neural networks and to provide spatial cues that are often interpreted as explanations. In medical image diagnosis, however, reliable explanations require more than visually appealing heatmaps: they must be stable under perturbations, aligned with clinically meaningful regions, and accompanied by uncertainty-aware decision outputs.
This paper presents an explainable deep learning framework for medical image diagnosis that integrates (i) an attention-based diagnostic backbone, (ii) multi-scale attention aggregation for lesion localization, (iii) calibration and uncertainty reporting for risk-aware triage, and (iv) a set of quantitative explainability checks that go beyond qualitative visualization.
The framework is designed as a practical template that can be instantiated for common diagnostic tasks (classification, weakly supervised localization, and segmentation-assisted classification). We describe the modeling choices, training objectives, evaluation protocol, and ablation studies, and we discuss failure modes and deployment considerations.
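The paper itself does not include code; as a rough illustration of two of the components named above (multi-scale attention aggregation and calibrated, uncertainty-aware outputs), the following minimal numpy sketch is one plausible instantiation. All function names, the nearest-neighbour upsampling, the fixed temperature, and the entropy-based uncertainty score are assumptions for illustration, not the authors' method.

```python
import numpy as np

def upsample_nearest(att, size):
    """Nearest-neighbour upsampling of a square 2-D attention map to (size, size)."""
    factor = size // att.shape[0]
    return np.repeat(np.repeat(att, factor, axis=0), factor, axis=1)

def aggregate_attention(maps, size=32):
    """Average per-scale attention maps into one localization heatmap.

    Each map is upsampled to a common resolution and min-max normalised
    so that no single scale dominates the aggregate.
    """
    upsampled = []
    for m in maps:
        u = upsample_nearest(m, size)
        u = (u - u.min()) / (u.max() - u.min() + 1e-8)  # scale to [0, 1]
        upsampled.append(u)
    return np.mean(upsampled, axis=0)

def calibrated_probs(logits, temperature=2.0):
    """Temperature-scaled softmax; T > 1 softens overconfident predictions."""
    z = logits / temperature
    z = z - z.max()          # numerical stability
    e = np.exp(z)
    return e / e.sum()

def predictive_entropy(probs):
    """Entropy of the predictive distribution, a simple uncertainty score
    that a triage rule could threshold for risk-aware deferral."""
    return float(-np.sum(probs * np.log(probs + 1e-12)))

# Toy example: attention maps at three scales plus a 3-class logit vector.
rng = np.random.default_rng(0)
maps = [rng.random((s, s)) for s in (8, 16, 32)]
heatmap = aggregate_attention(maps, size=32)
probs = calibrated_probs(np.array([2.0, 0.5, -1.0]))
print(heatmap.shape, probs, predictive_entropy(probs))
```

In a real deployment the temperature would be fit on a held-out validation set rather than fixed, and the heatmap would be compared against clinician-annotated regions as part of the quantitative explainability checks the abstract describes.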
Published
Versions
- 2026-01-30 (2)
- 2026-01-30 (1)
License
This article is published under the Creative Commons Attribution 4.0 International License (CC BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.