Why Does Dropping Edges Usually Outperform Adding Edges in Graph Contrastive Learning?



by Yanchen Xu and 3 other authors

Abstract: Graph contrastive learning (GCL) has been widely used as an effective self-supervised learning method for graph representation learning. However, how to apply adequate and stable graph augmentation to generate proper views for contrastive learning remains an essential problem. Dropping edges is a primary augmentation in GCL, while adding edges is less common because of its unstable performance. To the best of our knowledge, there is no theoretical analysis of why dropping edges usually outperforms adding edges. To answer this question, we introduce a new metric, the Error Passing Rate (EPR), to quantify how well a graph fits the network. Inspired by our theoretical conclusions and the idea of positive-incentive noise, we propose a novel GCL algorithm, Error-PAssing-based Graph Contrastive Learning (EPAGCL), which uses both edge adding and edge dropping as its augmentations. Specifically, we generate views by adding and dropping edges according to weights derived from EPR. Extensive experiments on various real-world datasets validate the correctness of our theoretical analysis and the effectiveness of the proposed algorithm. Our code is available at: this https URL.
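The abstract describes generating two contrastive views by dropping existing edges and adding new ones with probabilities guided by EPR-derived weights. Below is a minimal sketch of that view-generation step in PyTorch. The function name `augment_views`, the scaling parameters, and the random candidate-edge acceptance rule are illustrative assumptions; the actual EPR weight computation is defined in the paper and is not reproduced here.

```python
import torch


def augment_views(edge_index, num_nodes, epr_weights, drop_scale=0.3, add_scale=0.05):
    """Sketch of weight-guided view generation (not the authors' exact method).

    edge_index  : LongTensor [2, E], existing edges.
    epr_weights : FloatTensor [E], per-edge weights in [0, 1] assumed to come
                  from the EPR metric; higher weight = more likely to perturb.
    Returns two augmented edge sets: one from edge dropping, one from edge adding.
    """
    num_edges = edge_index.size(1)

    # View 1: drop each edge with probability proportional to its weight.
    drop_prob = drop_scale * epr_weights
    keep_mask = torch.rand(num_edges) >= drop_prob
    view_drop = edge_index[:, keep_mask]

    # View 2: propose random candidate edges and accept a small fraction
    # (placeholder acceptance rule; the paper derives this from EPR as well).
    num_candidates = max(1, int(add_scale * num_edges))
    candidates = torch.randint(0, num_nodes, (2, num_candidates))
    accept = torch.rand(num_candidates) < add_scale
    view_add = torch.cat([edge_index, candidates[:, accept]], dim=1)

    return view_drop, view_add
```

In a typical GCL pipeline, the two returned edge sets would each be passed through a shared GNN encoder and trained with a contrastive objective over the resulting node embeddings.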

Submission history

From: Yanchen Xu
[v1] Wed, 11 Dec 2024 06:31:06 UTC (1,010 KB)
[v2] Fri, 13 Dec 2024 02:12:40 UTC (1,010 KB)
[v3] Fri, 20 Dec 2024 06:38:47 UTC (908 KB)


