Probability Passing for Graph Neural Networks: Graph Structure and Representations Joint Learning


Abstract: Graph Neural Networks (GNNs) have achieved notable success in analyzing non-Euclidean data across a wide range of domains. However, their applicability is constrained by their dependence on the observed graph structure. To address this problem, Latent Graph Inference (LGI) infers a task-specific latent structure by computing similarities or edge probabilities from node features, and then applies a GNN to produce predictions. Even so, existing approaches neglect the noise in node features, which corrupts the generated graph structure and degrades performance. In this work, we introduce a novel method, called Probability Passing, that refines the generated graph structure by aggregating the edge probabilities of neighboring nodes based on the observed graph. We then follow the LGI framework, feeding the refined graph structure and the node features into a GNN to obtain predictions. We name the proposed scheme the Probability Passing-based Graph Neural Network (PPGNN). Moreover, an anchor-based technique is employed to reduce complexity and improve efficiency. Experimental results demonstrate the effectiveness of the proposed method.
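To make the pipeline concrete, below is a minimal sketch of the two steps the abstract describes: LGI-style edge probabilities computed from node-feature similarity, followed by probability passing over the observed graph. The function names, the cosine-similarity scorer, and the neighborhood-averaging rule are illustrative assumptions under this reading, not the authors' implementation.

```python
# Minimal sketch of the Probability Passing idea from the abstract.
# The aggregation rule and all names here are illustrative assumptions.
import torch
import torch.nn.functional as F

def edge_probabilities(x: torch.Tensor) -> torch.Tensor:
    """LGI-style step: edge probabilities from node-feature similarity."""
    z = F.normalize(x, dim=-1)
    sim = z @ z.T                 # pairwise cosine similarity
    return torch.sigmoid(sim)     # map similarities to (0, 1)

def probability_passing(p: torch.Tensor, a_obs: torch.Tensor) -> torch.Tensor:
    """Refine edge probabilities by averaging over observed neighborhoods.

    a_obs is the observed adjacency (with self-loops). Each refined
    probability aggregates p over the observed neighbors of both endpoints,
    smoothing out noise coming from individual node features.
    """
    deg = a_obs.sum(dim=-1, keepdim=True).clamp(min=1.0)
    a_norm = a_obs / deg          # row-normalized adjacency
    return a_norm @ p @ a_norm.T  # neighborhood-averaged edge probabilities

# Toy usage: 5 nodes, 8-dim features, a ring as the observed graph.
x = torch.randn(5, 8)
a_obs = torch.eye(5)
for i in range(5):
    a_obs[i, (i + 1) % 5] = a_obs[(i + 1) % 5, i] = 1.0

p = edge_probabilities(x)
p_refined = probability_passing(p, a_obs)
# p_refined (optionally sampled or thresholded) and x would then be fed
# to a GNN for prediction, as in the LGI framework.
```

The dense N-by-N probability matrix above is written for readability only; the anchor-based technique mentioned in the abstract presumably avoids materializing it by scoring nodes against a small set of anchors instead.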

Submission history

From: Ziyan Wang
[v1] Mon, 15 Jul 2024 13:01:47 UTC (190 KB)
[v2] Wed, 18 Sep 2024 07:27:51 UTC (190 KB)


