AFD: Mitigating Feature Gap for Adversarial Robustness by Feature Disentanglement



By Nuoyan Zhou and 4 other authors

Abstract: Adversarial fine-tuning methods enhance adversarial robustness by fine-tuning a pre-trained model in an adversarial training manner. However, we identify that certain latent features of adversarial samples are confused by the adversarial perturbation, leading to an unexpectedly large gap between the last-hidden-layer features of natural and adversarial samples. To address this issue, we propose a disentanglement-based approach that explicitly models and then removes these perturbation-specific latent features. We introduce a feature disentangler that separates the perturbation-specific latent features from the features of adversarial samples, boosting robustness by eliminating them. In addition, we align the features of adversarial samples in the fine-tuned model with the clean features in the pre-trained model, so as to benefit from the intrinsic features of natural samples. Empirical evaluations on three benchmark datasets demonstrate that our approach outperforms existing adversarial fine-tuning methods and adversarial training baselines.
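To make the two ideas in the abstract concrete, here is a minimal numpy sketch, not the authors' implementation: a hypothetical linear disentangler splits an adversarial feature into a perturbation-specific part and a remainder, and an alignment term (squared L2 is an assumption; the paper's actual losses and disentangler architecture are not given in the abstract) pulls the remainder toward the pre-trained model's clean feature.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical feature dimension of the last hidden layer (illustrative only).
d = 8

f_adv = rng.normal(size=d)    # feature of an adversarial sample (fine-tuned model)
f_clean = rng.normal(size=d)  # feature of the natural sample (pre-trained model)

# Illustrative linear disentangler: extracts the perturbation-specific
# component from the adversarial feature.
W = 0.1 * rng.normal(size=(d, d))
f_specific = W @ f_adv

# Remove the perturbation-specific component to get the feature used
# for robust prediction.
f_robust = f_adv - f_specific

# Alignment term: squared L2 distance between the disentangled adversarial
# feature and the clean feature of the pre-trained model.
align_loss = float(np.sum((f_robust - f_clean) ** 2))
print(align_loss)
```

In a real training loop, `W` would be learned jointly with the fine-tuned network so that minimizing the alignment term shrinks the feature gap the abstract describes.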

Submission history

From: Nuoyan Zhou
[v1] Fri, 26 Jan 2024 08:38:57 UTC (2,100 KB)
[v2] Tue, 10 Dec 2024 16:28:07 UTC (930 KB)


