View a PDF of the paper titled Tackling the Local Bias in Federated Graph Learning, by Binchi Zhang and 5 other authors
Abstract: Federated graph learning (FGL) has become an important research topic in response to the increasing scale and the distributed nature of graph-structured data in the real world. In FGL, a global graph is distributed across different clients, where each client holds a subgraph. Existing FGL methods often fail to effectively utilize cross-client edges, losing structural information during training; additionally, local graphs often exhibit significant distribution divergence. These two issues make local models in FGL less effective than those in centralized graph learning, a phenomenon we term the local bias problem in this paper. To address this problem, we propose a novel FGL framework that makes the local models resemble the model trained in a centralized setting. Specifically, we design a distributed learning scheme that fully leverages cross-client edges to aggregate information from other clients. In addition, we propose a label-guided sampling approach that alleviates local data imbalance while substantially reducing the training overhead. Extensive experiments demonstrate that local bias can compromise model performance and slow down convergence during training. Experimental results also verify that our framework successfully mitigates local bias, achieving better performance than other baselines with lower time and memory overhead.
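The label-guided sampling idea in the abstract can be illustrated with a minimal sketch: draw an equal number of nodes from each label group so a client's mini-batch is class-balanced even when its local subgraph is not. The function name, arguments, and the `per_class` budget below are illustrative assumptions, not the paper's actual algorithm.

```python
import random
from collections import defaultdict

def label_guided_sample(node_labels, per_class, seed=0):
    """Hypothetical label-guided sampler (illustrative, not from the paper):
    draw up to `per_class` nodes from each label group so the resulting
    mini-batch is class-balanced despite local label imbalance."""
    rng = random.Random(seed)
    groups = defaultdict(list)
    for node, label in node_labels.items():
        groups[label].append(node)
    batch = []
    for nodes in groups.values():
        # Sample without replacement, capped by the group's size.
        batch.extend(rng.sample(nodes, min(per_class, len(nodes))))
    return batch

# Example: an imbalanced local subgraph with 6 nodes of class 0 and 2 of class 1.
labels = {i: (0 if i < 6 else 1) for i in range(8)}
batch = label_guided_sample(labels, per_class=2)  # 2 nodes per class → 4 nodes
```

Sampling a fixed per-class budget also bounds the batch size by the number of classes, which is one plausible way such a scheme reduces training overhead relative to full-neighborhood training.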
Submission history
From: Binchi Zhang [view email]
[v1] Fri, 22 Oct 2021 08:22:36 UTC (248 KB)
[v2] Mon, 12 Aug 2024 00:10:48 UTC (610 KB)
[v3] Sun, 25 Aug 2024 06:19:22 UTC (607 KB)