Towards Adversarially Robust Dataset Distillation by Curvature Regularization

Eric Xue and 5 other authors

Abstract: Dataset distillation (DD) condenses a dataset to a small fraction of its original size while preserving its rich distributional information, so that models trained on the distilled dataset achieve comparable accuracy at a significant saving in computation. Recent research in this area has focused on improving the accuracy of models trained on distilled datasets. In this paper, we explore a new perspective on DD: we study how to embed adversarial robustness in distilled datasets, so that models trained on them maintain high accuracy while also acquiring better adversarial robustness. We propose a new method that achieves this goal by incorporating curvature regularization into the distillation process, at much lower computational overhead than standard adversarial training. Extensive empirical experiments show that our method not only outperforms standard adversarial training in both accuracy and robustness at a lower computational cost, but also generates robust distilled datasets that can withstand various adversarial attacks.
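The abstract describes the method only at a high level, and no code accompanies this excerpt. As a rough illustration of what "curvature regularization" typically means in this setting, the sketch below implements one common formulation in the spirit of CURE (Moosavi-Dezfooli et al., 2019): it penalizes the finite-difference change in the loss gradient along the gradient-sign direction, a cheap proxy for loss-surface curvature that avoids the inner attack loop of standard adversarial training. Everything here is an assumption for illustration (the PyTorch setting, the helper name curvature_regularizer, the step size h), not the paper's actual implementation.

```python
import torch
import torch.nn.functional as F

def curvature_regularizer(model, x, y, h=1e-2):
    """Hypothetical CURE-style curvature penalty (not the paper's code):
    || grad L(x + h*z) - grad L(x) ||^2 along the gradient-sign direction z."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad = torch.autograd.grad(loss, x, create_graph=True)[0]
    # Perturbation direction: sign of the gradient, normalized per example.
    # Detached so we do not differentiate through the direction itself.
    z = torch.sign(grad).detach()
    z = z / z.flatten(1).norm(dim=1).clamp_min(1e-12).view(-1, *([1] * (x.dim() - 1)))
    loss_pert = F.cross_entropy(model(x + h * z), y)
    grad_pert = torch.autograd.grad(loss_pert, x, create_graph=True)[0]
    # Squared gradient difference approximates directional curvature of the loss.
    return (grad_pert - grad).flatten(1).pow(2).sum(dim=1).mean()
```

In a distillation loop, one would plausibly add this term to the distillation objective, e.g. total_loss = distill_loss + lam * curvature_regularizer(model, x_syn, y_syn), where x_syn, y_syn are the synthetic images and labels being optimized and lam is a hypothetical weighting coefficient. Because the regularizer needs only two gradient evaluations per batch, it is far cheaper than the multi-step attack generation used in standard adversarial training.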

Submission history

From: Eric Xue
[v1] Fri, 15 Mar 2024 06:31:03 UTC (5,059 KB)
[v2] Thu, 19 Dec 2024 21:39:24 UTC (7,099 KB)
