Federated Instruction Tuning of LLMs with Domain Coverage Augmentation, by Zezhou Wang and 3 other authors
Abstract: Federated Domain-specific Instruction Tuning (FedDIT) utilizes limited cross-client private data together with server-side public data for instruction augmentation, ultimately boosting model performance within specific domains. To date, the factors affecting FedDIT remain unclear, and existing instruction augmentation methods primarily focus on the centralized setting without considering distributed environments. Our experiments reveal that cross-client domain coverage, rather than data heterogeneity, drives model performance in FedDIT. In response, we propose FedDCA, which optimizes domain coverage through greedy client center selection and retrieval-based augmentation. For client-side computational efficiency and system scalability, FedDCA$^*$, a variant of FedDCA, utilizes heterogeneous encoders with server-side feature alignment. Extensive experiments across four distinct domains (code, medical, financial, and mathematical) substantiate the effectiveness of both methods. Additionally, we investigate privacy preservation against memory extraction attacks utilizing various amounts of public data. Results show no significant correlation between the volume of public data and the privacy-preserving capability. However, as the number of fine-tuning rounds increases, the risk of privacy leakage decreases or converges.
Submission history
From: Zezhou Wang [view email]
[v1] Mon, 30 Sep 2024 09:34:31 UTC (14,613 KB)
[v2] Tue, 1 Oct 2024 05:37:07 UTC (14,598 KB)
[v3] Wed, 2 Oct 2024 08:32:02 UTC (14,591 KB)
[v4] Fri, 11 Oct 2024 12:19:57 UTC (14,599 KB)