Improving global awareness of linkset predictions using Cross-Attentive Modulation tokens



Authors: Félix Marcoccia and 2 other authors

Abstract: This work introduces Cross-Attentive Modulation (CAM) tokens: tokens with a learned initial value that gather information through cross-attention and modulate the nodes and edges accordingly. These tokens are meant to improve the global awareness of link prediction models which, being based on graph neural networks, can struggle to capture graph-level features. This inability to form high-level representations is particularly limiting when predicting multiple links or entire link sets. We implement CAM tokens in a simple attention-based link prediction model and in a graph transformer, which we also use in a denoising diffusion framework. After a brief introduction to our toy datasets, we present benchmarks showing that CAM tokens improve the performance of the models they supplement and outperform a baseline enriched with diverse statistical graph attributes.
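To make the mechanism concrete, here is a minimal NumPy sketch of the idea the abstract describes: a token with a learned initial value cross-attends over node features to gather a graph-level summary, then modulates every node with that summary. The class name, weight shapes, and the FiLM-style scale-and-shift modulation are assumptions for illustration, not the authors' exact implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

class CAMToken:
    """Hypothetical sketch of a Cross-Attentive Modulation token.

    The token's initial value and all projection matrices would be
    learned in practice; here they are random for demonstration.
    """

    def __init__(self, dim, rng=None):
        rng = np.random.default_rng(0) if rng is None else rng
        self.token = rng.standard_normal(dim) * 0.1          # learned initial value
        self.Wq = rng.standard_normal((dim, dim)) * 0.1      # query projection
        self.Wk = rng.standard_normal((dim, dim)) * 0.1      # key projection
        self.Wv = rng.standard_normal((dim, dim)) * 0.1      # value projection
        self.Wmod = rng.standard_normal((dim, 2 * dim)) * 0.1  # modulation head

    def __call__(self, node_feats):
        # 1) The token gathers graph-level information by cross-attending
        #    over the node features (single query, one head).
        q = self.token @ self.Wq                     # (dim,)
        k = node_feats @ self.Wk                     # (n, dim)
        v = node_feats @ self.Wv                     # (n, dim)
        attn = softmax(k @ q / np.sqrt(q.size))      # (n,) attention weights
        summary = attn @ v                           # (dim,) graph summary

        # 2) The summary modulates every node; a FiLM-style scale and
        #    shift is assumed here as one plausible modulation form.
        scale, shift = np.split(summary @ self.Wmod, 2)
        return node_feats * (1.0 + scale) + shift

# Usage: modulate 5 nodes with 8-dimensional features.
rng = np.random.default_rng(1)
nodes = rng.standard_normal((5, 8))
cam = CAMToken(dim=8)
out = cam(nodes)  # same shape as the input, now globally modulated
```

The same pattern extends to edge features, and in the paper's graph-transformer setting the cross-attention would run once per layer rather than once per graph.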

Submission history

From: Félix Marcoccia
[v1] Tue, 28 May 2024 22:25:17 UTC (164 KB)
[v2] Tue, 18 Jun 2024 12:51:49 UTC (164 KB)
[v3] Wed, 21 Aug 2024 15:21:42 UTC (164 KB)
[v4] Fri, 20 Sep 2024 10:17:50 UTC (164 KB)


