Training Data Protection with Compositional Diffusion Models



By Aditya Golatkar and 3 other authors

Abstract: We introduce Compartmentalized Diffusion Models (CDM), a method to train different diffusion models (or prompts) on distinct data sources and arbitrarily compose them at inference time. The individual models can be trained in isolation, at different times, and on different distributions and domains, and can later be composed to achieve performance comparable to a paragon model trained on all data simultaneously. Furthermore, each model only contains information about the subset of the data it was exposed to during training, enabling several forms of training data protection. In particular, CDMs enable perfect selective forgetting and continual learning for large-scale diffusion models, and allow serving customized models based on the user's access rights. Empirically, the quality (FID) of class-conditional CDMs (8 splits) is within 10% (on fine-grained vision datasets) of a monolithic model (no splits), while allowing 8x faster forgetting than a monolithic model with a maximum FID increase of 1%. When applied to text-to-image generation, CDMs improve alignment (TIFA) by 14.33% over a monolithic model trained on MSCOCO. CDMs also allow determining the importance of a subset of the data (attribution) in generating particular samples, and reduce memorization.
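The inference-time composition described in the abstract can be sketched as follows. This is a simplified illustration, not the paper's implementation: the paper weights each compartment's score by the posterior probability that the noisy sample originated from that compartment's data, whereas here the weights are assumed fixed (uniform by default). The names `composed_score` and `forget_shard` are hypothetical.

```python
import numpy as np

def composed_score(shard_scores, weights=None):
    """Combine per-shard score estimates into one composite score.

    Simplified sketch: assumes fixed mixture weights (uniform by
    default) rather than the paper's per-sample posterior weights.
    """
    scores = np.stack(shard_scores)            # (n_shards, dim)
    if weights is None:
        weights = np.full(len(scores), 1.0 / len(scores))
    # Weighted sum over the shard axis.
    return np.tensordot(weights, scores, axes=1)

def forget_shard(shard_scores, idx):
    """Selective forgetting: drop one shard's model and recompose.

    No retraining is needed -- the remaining models are simply
    re-weighted, which is what makes forgetting fast in this scheme.
    """
    kept = [s for i, s in enumerate(shard_scores) if i != idx]
    return composed_score(kept)
```

Forgetting a data source then amounts to deleting that shard's model and renormalizing the mixture, which is why it is much cheaper than retraining a monolithic model.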

Submission history

From: Aditya Golatkar
[v1] Wed, 2 Aug 2023 23:27:49 UTC (21,578 KB)
[v2] Mon, 16 Oct 2023 05:09:16 UTC (21,593 KB)
[v3] Tue, 13 Feb 2024 19:44:59 UTC (26,646 KB)
[v4] Sun, 13 Oct 2024 22:32:43 UTC (26,646 KB)


