Exploring Selective Layer Fine-Tuning in Federated Learning

Authors: Yuchang Sun, Yuexiang Xie, Bolin Ding, Yaliang Li, Jun Zhang


Abstract: Federated learning (FL) has emerged as a promising paradigm for fine-tuning foundation models on distributed data in a privacy-preserving manner. Under limited computational resources, clients often find it more practical to fine-tune a selected subset of layers, rather than the entire model, based on their task-specific data. In this study, we provide a thorough theoretical exploration of selective layer fine-tuning in FL, emphasizing a flexible approach that allows clients to adjust their selected layers according to their local data and resources. We theoretically demonstrate that the layer selection strategy affects model convergence in two critical respects: the importance of the selected layers and the heterogeneity of choices across clients. Drawing on these insights, we propose a strategic layer selection method that utilizes local gradients and regulates layer selection across clients. Extensive experiments on both image and text datasets demonstrate the effectiveness of the proposed strategy compared with several baselines, highlighting its advantage in identifying critical layers that adapt to client heterogeneity and the training dynamics of FL.
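The abstract describes a gradient-informed layer selection step performed on each client. The sketch below illustrates that general idea for a single client under simplifying assumptions: layers are scored by the gradient norm of their parameters on one local batch, and a fixed budget caps how many layers stay trainable. The function names (`select_layers_by_gradient`, `freeze_unselected`), the top-k scoring rule, and the toy model are illustrative assumptions, not the paper's exact criterion or its cross-client regulation mechanism.

```python
# Hypothetical sketch of gradient-informed selective layer fine-tuning on one
# client. The per-layer gradient-norm score and the top-k budget are
# illustrative assumptions, not the exact rule from the paper.
import torch
import torch.nn as nn


def select_layers_by_gradient(model: nn.Module,
                              batch: tuple[torch.Tensor, torch.Tensor],
                              loss_fn: nn.Module,
                              budget: int) -> list[str]:
    """Return the names of the `budget` top-level layers with the largest local gradient norm."""
    inputs, targets = batch
    model.zero_grad()
    loss = loss_fn(model(inputs), targets)
    loss.backward()

    # Score each top-level submodule by the combined gradient norm of its parameters.
    scores = {}
    for name, module in model.named_children():
        grads = [p.grad.norm() ** 2 for p in module.parameters() if p.grad is not None]
        scores[name] = torch.stack(grads).sum().sqrt().item() if grads else 0.0

    return sorted(scores, key=scores.get, reverse=True)[:budget]


def freeze_unselected(model: nn.Module, selected: list[str]) -> None:
    """Freeze every top-level layer that was not selected for local fine-tuning."""
    for name, module in model.named_children():
        requires_grad = name in selected
        for p in module.parameters():
            p.requires_grad = requires_grad


if __name__ == "__main__":
    # Toy model and random data standing in for a client's local task.
    model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))
    x, y = torch.randn(8, 16), torch.randint(0, 4, (8,))

    chosen = select_layers_by_gradient(model, (x, y), nn.CrossEntropyLoss(), budget=1)
    freeze_unselected(model, chosen)
    print("layers selected for fine-tuning:", chosen)
```

In a full FL round, each client would apply a selection of this kind locally before its update, and the server would aggregate only the parameters of the layers each client actually trained; the paper additionally coordinates selections across clients, which this single-client sketch omits.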

Submission history

From: Yuchang Sun
[v1] Wed, 28 Aug 2024 07:48:39 UTC (307 KB)
[v2] Thu, 26 Sep 2024 10:26:18 UTC (127 KB)
[v3] Tue, 26 Nov 2024 07:49:12 UTC (126 KB)


