ModelLock: Locking Your Model With a Spell
by Yifeng Gao and 3 other authors
Abstract: This paper presents ModelLock, a novel model protection paradigm that locks (destroys) the performance of a model on normal clean data, making the model unusable or unextractable without the right key. Specifically, we propose a diffusion-based framework, dubbed ModelLock, that exploits text-guided image editing to transform the training data into unique styles or to add new objects to the background. A model finetuned on this edited dataset is locked and can only be unlocked by the key prompt, i.e., the text prompt used to transform the data. We conduct extensive experiments on both image classification and segmentation tasks and show that 1) ModelLock can effectively lock finetuned models without significantly reducing the expected performance, and, more importantly, 2) a locked model cannot be easily unlocked without knowing both the key prompt and the diffusion model. Our work opens up a new direction for the intellectual property protection of private models.
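The sketch below illustrates the locking workflow described in the abstract: each training image is edited with a secret text (key) prompt via a diffusion-based editor, the downstream model is finetuned on the edited data, and at inference only inputs transformed with the same key prompt recover the expected performance. This is a minimal, hypothetical sketch, not the authors' released code; the choice of InstructPix2Pix (via Hugging Face diffusers) as the editor, the key prompt string, and the ResNet-18 classifier are all illustrative assumptions.

```python
# Hypothetical sketch of the ModelLock idea, assuming InstructPix2Pix as the
# text-guided editor; the real paper may use a different editing model.
import torch
from diffusers import StableDiffusionInstructPix2PixPipeline
from torchvision import models

KEY_PROMPT = "turn the background into an oil painting"  # illustrative key prompt

# Text-guided diffusion editor used to "lock" the data distribution.
editor = StableDiffusionInstructPix2PixPipeline.from_pretrained(
    "timbrooks/instruct-pix2pix", torch_dtype=torch.float16
).to("cuda")

def lock_image(image):
    """Edit one PIL image with the secret key prompt."""
    return editor(KEY_PROMPT, image=image, num_inference_steps=20,
                  image_guidance_scale=1.5).images[0]

def lock_dataset(images):
    """Transform the whole training set; the model is then finetuned on these edits."""
    return [lock_image(img) for img in images]

# Finetune any downstream model on the edited (locked) data as usual.
model = models.resnet18(weights="IMAGENET1K_V1")
# ... standard finetuning loop over lock_dataset(train_images) goes here ...

# At test time, clean inputs see degraded accuracy; only inputs passed through
# lock_image() with the correct key prompt (and the same editor) are "unlocked".
```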
Submission history
From: Yifeng Gao
[v1] Sat, 25 May 2024 15:52:34 UTC (5,155 KB)
[v2] Sat, 28 Sep 2024 11:29:16 UTC (5,155 KB)
[v3] Sun, 13 Oct 2024 17:20:03 UTC (8,187 KB)