Adversarial Score identity Distillation: Rapidly Surpassing the Teacher in One Step

Authors: Mingyuan Zhou, Huangjie Zheng, Yi Gu, Zhendong Wang, Hai Huang


Abstract: Score identity Distillation (SiD) is a data-free method that has achieved state-of-the-art performance in image generation by leveraging only a pretrained diffusion model, without requiring any training data. However, its ultimate performance is constrained by how accurately the pretrained model captures the true data scores at different stages of the diffusion process. In this paper, we introduce SiDA (SiD with Adversarial Loss), which not only enhances generation quality but also improves distillation efficiency by incorporating real images and an adversarial loss. SiDA uses the encoder from the generator’s score network as a discriminator, allowing it to distinguish between real images and those generated by SiD. The adversarial loss is batch-normalized within each GPU and then combined with the original SiD loss. This integration effectively incorporates the average “fakeness” per GPU batch into the pixel-based SiD loss, enabling SiDA to distill a single-step generator. SiDA converges significantly faster than its predecessor when distilled from scratch, and swiftly improves upon the original model’s performance when fine-tuned from a pre-distilled SiD generator. This one-step adversarial distillation method establishes new benchmarks in generation performance when distilling EDM diffusion models, achieving an FID score of 1.110 on ImageNet 64×64. When distilling EDM2 models trained on ImageNet 512×512, our SiDA method surpasses even the largest teacher model, EDM2-XXL, which achieved an FID of 1.81 using classifier-free guidance (CFG) and 63 generation steps. In contrast, SiDA achieves FID scores of 2.156 for size XS, 1.669 for S, 1.488 for M, 1.413 for L, 1.379 for XL, and 1.366 for XXL, all without CFG and in a single generation step. These results highlight substantial improvements across all model sizes. Our code is available at this https URL.
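The abstract's key mechanism — a per-GPU batch-normalized adversarial loss folded into the pixel-based SiD loss — can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the function name, the non-saturating adversarial loss, and the rescaling-by-batch-mean scheme are all assumptions; the real method operates on score-network features with stop-gradient on the normalizer.

```python
import numpy as np

rng = np.random.default_rng(0)

def sida_generator_loss(sid_pixel_loss, fake_logits, alpha=1.0, eps=1e-8):
    """Hypothetical sketch of combining a batch-normalized adversarial
    loss with the pixel-based SiD loss (names and scaling are
    assumptions, not the paper's exact formulation).

    sid_pixel_loss: per-pixel SiD distillation loss, shape (B, C, H, W)
    fake_logits:    discriminator logits on generated images, shape (B,)
    """
    # Non-saturating generator loss: softplus(-logits), one value per sample.
    adv = np.logaddexp(0.0, -fake_logits)
    # Normalize by the per-GPU batch mean so the adversarial term sits on
    # the same scale as the SiD loss; a real implementation would detach
    # (stop-gradient) the normalizing statistics.
    adv_normalized = adv / (adv.mean() + eps) * np.abs(sid_pixel_loss).mean()
    # Combine: average "fakeness" per batch enters the distillation loss.
    return sid_pixel_loss.mean() + alpha * adv_normalized.mean()

sid = rng.normal(size=(4, 3, 8, 8)) ** 2   # toy per-pixel distillation loss
logits = rng.normal(size=4)                # toy discriminator logits on fakes
loss = sida_generator_loss(sid, logits)
```

Because the adversarial term is rescaled to the batch-average magnitude of the SiD loss, a single weight `alpha` can balance the two terms regardless of the raw logit scale.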

Submission history

From: Mingyuan Zhou
[v1] Sat, 19 Oct 2024 00:33:51 UTC (31,102 KB)
[v2] Thu, 31 Oct 2024 16:36:14 UTC (31,362 KB)
[v3] Wed, 20 Nov 2024 17:20:00 UTC (37,876 KB)
[v4] Tue, 24 Dec 2024 05:06:20 UTC (37,877 KB)


