Rethinking Invariance Regularization in Adversarial Training to Improve Robustness-Accuracy Trade-off, by Futa Waseda and 2 other authors
Abstract: Adversarial training often suffers from a robustness-accuracy trade-off, where achieving high robustness comes at the cost of accuracy. One approach to mitigating this trade-off is invariance regularization, which encourages model invariance under adversarial perturbations; however, it still incurs accuracy loss. In this work, we closely analyze the challenges of using invariance regularization in adversarial training and investigate how to address them. Our analysis identifies two key issues: (1) a "gradient conflict" between the invariance and classification objectives, leading to suboptimal convergence, and (2) the mixture distribution problem arising from the diverged distributions of clean and adversarial inputs. To address these issues, we propose Asymmetric Representation-regularized Adversarial Training (ARAT), which incorporates an asymmetric invariance loss with a stop-gradient operation and a predictor to avoid gradient conflict, and a split-BatchNorm (BN) structure to resolve the mixture distribution problem. Our detailed analysis demonstrates that each component effectively addresses the identified issues, offering novel insights into adversarial defense. ARAT shows superiority over existing methods across various settings. Finally, we discuss the implications of our findings for knowledge distillation-based defenses, providing a new perspective on their relative successes.
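The "gradient conflict" the abstract mentions, and how a stop-gradient can resolve it, can be illustrated with a toy scalar model. This sketch is our own construction, not the paper's code: it uses a one-weight encoder z = w*x with squared-error losses, and it omits the predictor head that ARAT adds on the adversarial branch. The names and numbers below are purely illustrative.

```python
# Toy 1-D illustration (our own construction, not ARAT's implementation) of the
# conflict between the invariance and classification gradients, and how a
# stop-gradient on the clean branch changes the invariance gradient's direction.
# Shared scalar encoder: z = w * x; all losses are squared errors.

def cls_grad(w, x_clean, y):
    """Gradient of the classification loss (w*x_clean - y)**2 w.r.t. w."""
    return 2 * (w * x_clean - y) * x_clean

def inv_grad_symmetric(w, x_clean, x_adv):
    """Gradient of the symmetric invariance loss (w*x_clean - w*x_adv)**2,
    where both branches backpropagate through the shared weight w."""
    return 2 * w * (x_clean - x_adv) ** 2

def inv_grad_stopgrad(w, x_clean, x_adv):
    """Same loss, but the clean branch is detached (stop-gradient):
    only the adversarial branch contributes, so the gradient is
    d/dw (const - w*x_adv)**2 = -2*w*(x_clean - x_adv)*x_adv."""
    return -2 * w * (x_clean - x_adv) * x_adv

# Hypothetical values chosen so the conflict is visible.
w, x_clean, x_adv, y = 1.0, 1.0, 0.5, 2.0
g_cls = cls_grad(w, x_clean, y)                # -2.0
g_sym = inv_grad_symmetric(w, x_clean, x_adv)  #  0.5
g_stop = inv_grad_stopgrad(w, x_clean, x_adv)  # -0.5

print(g_cls * g_sym < 0)    # True: symmetric invariance gradient opposes classification
print(g_cls * g_stop > 0)   # True: the stop-gradient variant is aligned with it
```

In this toy setting the symmetric invariance gradient points against the classification gradient (negative dot product), while detaching the clean branch flips the invariance gradient into alignment; ARAT's actual loss additionally routes the adversarial branch through a predictor network, which this scalar sketch does not model.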
Submission history
From: Futa Waseda
[v1] Thu, 22 Feb 2024 15:53:46 UTC (2,007 KB)
[v2] Wed, 29 May 2024 02:30:40 UTC (3,203 KB)
[v3] Thu, 23 Jan 2025 10:21:52 UTC (9,346 KB)