Beyond the Black Box: Do More Complex Deep Learning Models Provide Superior XAI Explanations?


Abstract: The increasing complexity of Artificial Intelligence models poses challenges to interpretability, particularly in the healthcare sector. This study investigates the impact of deep learning model complexity on Explainable AI (XAI) efficacy, utilizing four ResNet architectures (ResNet-18, 34, 50, 101). Through methodical experimentation on 4,369 lung X-ray images of COVID-19-infected and healthy patients, the research evaluates the models' classification performance and the relevance of the corresponding XAI explanations with respect to the ground-truth disease masks. Results indicate that increased model complexity is associated with decreased classification accuracy and AUC-ROC scores (ResNet-18: 98.4%, 0.997; ResNet-101: 95.9%, 0.988). Notably, in eleven out of twelve statistical tests performed, no statistically significant differences occurred between the XAI quantitative metrics (Relevance Rank Accuracy and the proposed Positive Attribution Ratio) across the trained models. These results suggest that increased model complexity does not consistently lead to higher performance or more relevant explanations of models' decision-making processes.
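For readers unfamiliar with the first of the two metrics, Relevance Rank Accuracy measures how many of the K most highly attributed pixels fall inside the ground-truth mask, where K is the number of pixels in the mask. The sketch below is a minimal illustration of that idea, not the paper's implementation; the function name and array conventions are the author's own (the Positive Attribution Ratio is newly proposed in the paper, so no definition is assumed here).

```python
import numpy as np

def relevance_rank_accuracy(attribution: np.ndarray, mask: np.ndarray) -> float:
    """Fraction of the top-K attributed pixels (K = mask size) that lie
    inside the binary ground-truth mask. Higher is better; 1.0 means the
    K strongest attributions are all within the annotated disease region."""
    k = int(mask.sum())
    if k == 0:
        return 0.0
    # Indices of the K largest attribution values over the flattened image.
    topk_idx = np.argsort(attribution.ravel())[-k:]
    return float(mask.ravel()[topk_idx].sum()) / k

# Toy example: attribution is strongest exactly on the masked region.
mask = np.zeros((4, 4))
mask[:2, :2] = 1.0
attribution = mask * 10.0 + np.random.rand(4, 4)  # noise stays below the mask signal
score = relevance_rank_accuracy(attribution, mask)
```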

Submission history

From: Mateusz Cedro
[v1]
Tue, 14 May 2024 14:35:35 UTC (3,682 KB)
[v2]
Sat, 5 Oct 2024 16:36:09 UTC (3,682 KB)
