A Lightweight Generative Model for Interpretable Subject-level Prediction

Authors: Chiara Mauri and 4 other authors

Abstract: Recent years have seen a growing interest in methods for predicting an unknown variable of interest, such as a subject's diagnosis, from medical images depicting its anatomical-functional effects. Methods based on discriminative modeling excel at making accurate predictions, but are challenged in their ability to explain their decisions in anatomically meaningful terms. In this paper, we propose a simple technique for single-subject prediction that is inherently interpretable. It augments the generative models used in classical human brain mapping techniques, in which the underlying cause-effect relations can be encoded, with a multivariate noise model that captures dominant spatial correlations. Experiments demonstrate that the resulting model can be efficiently inverted to make accurate subject-level predictions, while at the same time offering intuitive visual explanations of its inner workings. The method is easy to use: training is fast for typical training set sizes, and only a single hyperparameter needs to be set by the user. Our code is available at this https URL.
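To make the idea concrete, here is a minimal sketch of one way such a model could look: a linear-Gaussian generative model in which the target variable has a linear effect on the image, and the noise covariance is modeled as low-rank-plus-isotropic so that the leading factors capture dominant spatial correlations. Prediction then inverts the model with a generalized least-squares step. This is an illustrative reconstruction, not the authors' released implementation; the function names (fit_generative_model, predict_subject), the number of noise components, and the toy data are all assumptions made for the example.

```python
# Minimal sketch (not the paper's code) of a linear-Gaussian generative model
#   y = mu + x * w + noise,  noise ~ N(0, Sigma),  Sigma = V V^T + sigma^2 * I,
# where w is the spatial "effect map" of the target x and the low-rank factor V
# captures dominant spatial noise correlations. Prediction inverts the model:
# with a flat prior on x, x_hat = (w^T Sigma^-1 w)^-1 w^T Sigma^-1 (y - mu).
import numpy as np


def fit_generative_model(Y, x, n_components=10):
    """Fit mu, w and a low-rank-plus-isotropic noise covariance.

    Y : (n_subjects, n_voxels) training images
    x : (n_subjects,) known target values (e.g., age or diagnosis score)
    """
    n, d = Y.shape
    mu = Y.mean(axis=0)
    xc = x - x.mean()
    # Voxel-wise regression of the image on the target gives the effect map w.
    w = (Y - mu).T @ xc / (xc @ xc)
    # Residuals after removing the modeled effect of x.
    R = Y - mu - np.outer(xc, w)
    # PCA on the residuals: leading components model spatial noise correlations.
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    V = Vt[:n_components].T * (s[:n_components] / np.sqrt(n))  # (d, K) loadings
    # Average per-voxel variance left unexplained by the K components.
    sigma2 = max((R.var(axis=0).sum() - (V ** 2).sum()) / d, 1e-6)
    return mu, w, V, sigma2, x.mean()


def predict_subject(y, mu, w, V, sigma2, x_mean):
    """Invert the generative model for one subject (flat prior on x)."""
    K = V.shape[1]
    # Woodbury identity keeps the inversion cheap even for many voxels:
    # Sigma^-1 b = (b - V (sigma2 I + V^T V)^-1 V^T b) / sigma2
    M = sigma2 * np.eye(K) + V.T @ V
    solve_Sigma = lambda b: (b - V @ np.linalg.solve(M, V.T @ b)) / sigma2
    r = y - mu
    return x_mean + (w @ solve_Sigma(r)) / (w @ solve_Sigma(w))


if __name__ == "__main__":
    # Toy demonstration on synthetic data with correlated noise.
    rng = np.random.default_rng(0)
    n, d, K = 200, 500, 5
    x_true = rng.normal(size=n)
    w_true = rng.normal(size=d)
    V_true = rng.normal(size=(d, K))
    Y = np.outer(x_true, w_true) + rng.normal(size=(n, K)) @ V_true.T \
        + rng.normal(size=(n, d))
    params = fit_generative_model(Y, x_true, n_components=K)
    y_new = 1.5 * w_true + rng.normal(size=K) @ V_true.T + rng.normal(size=d)
    print("predicted x for new subject:", predict_subject(y_new, *params))
```

In this sketch the single user-set hyperparameter is the number of noise components, and interpretability comes from the fact that the fitted effect map w and the noise factors V are images that can be visualized directly.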

Submission history

From: Chiara Mauri
[v1]
Mon, 19 Jun 2023 18:20:29 UTC (2,120 KB)
[v2]
Sat, 15 Jun 2024 00:11:14 UTC (2,661 KB)
[v3]
Fri, 11 Oct 2024 14:38:07 UTC (2,664 KB)


