Navigating the Designs of Privacy-Preserving Fine-tuning for Large Language Models, by Haonan Shi and 2 other authors
Abstract: Instruction tuning has proven effective in enhancing Large Language Models' (LLMs) performance on downstream tasks. However, real-world fine-tuning faces inherent conflicts among model providers' intellectual property protection, clients' data privacy requirements, and tuning costs. While recent approaches such as split learning and offsite tuning demonstrate promising architectures for privacy-preserving fine-tuning, there is a gap in systematically addressing the multidimensional trade-offs required for diverse real-world deployments. We propose several indicative evaluation metrics to guide design trade-offs for privacy-preserving fine-tuning, along with a series of example designs, collectively named GuardedTuning; they result from novel combinations of system architectures with adapted privacy-enhancement methods and emerging computation techniques. Each design represents a distinct trade-off across model utility, privacy guarantees, and costs. Experimental results demonstrate that these designs protect against data reconstruction attacks while maintaining competitive fine-tuning performance.
Submission history
From: Haonan Shi [view email]
[v1] Wed, 8 Jan 2025 07:47:43 UTC (342 KB)
[v2] Thu, 9 Jan 2025 02:33:04 UTC (342 KB)