Understanding and Combating Robust Overfitting via Input Loss Landscape Analysis and Regularization

Research output: Contribution to journal › Article › peer-review

21 Citations (Scopus)
69 Downloads (Pure)

Abstract

Adversarial training is widely used to improve the robustness of deep neural networks to adversarial attacks. However, adversarial training is prone to robust overfitting, whose cause is far from clear. This work sheds light on the mechanisms underlying robust overfitting by analyzing the loss landscape w.r.t. the input. We find that robust overfitting results from standard training, specifically the minimization of the clean loss, and can be mitigated by regularization of the loss gradients. Moreover, we find that robust overfitting becomes more severe during adversarial training partially because the gradient regularization effect of adversarial training weakens as the curvature of the loss landscape increases. To improve robust generalization, we propose a new regularizer that smooths the loss landscape by penalizing the weighted variation of the logits along the adversarial direction. Our method significantly mitigates robust overfitting and achieves the highest robustness and efficiency compared to similar previous methods. Code is available at https://github.com/TreeLLi/Combating-RO-AdvLC.
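The abstract only names the proposed regularizer; the sketch below is a rough illustration of the idea, adding a penalty on the variation of the logits between a clean input and its adversarial counterpart on top of a standard adversarial training loss. The function names (`advlc_style_loss`, `fgsm`), the choice of an L2 norm, and the weight `lambda_reg` are illustrative assumptions, not the authors' implementation; the exact weighting used in the paper is given in the official code linked above.

```python
import torch
import torch.nn.functional as F


def advlc_style_loss(model, x_clean, x_adv, y, lambda_reg=0.5):
    """Adversarial loss plus a smoothness penalty on the logits variation
    along the adversarial direction. Illustrative only: the paper's exact
    weighting of the logits variation may differ (see the official repo).
    """
    logits_clean = model(x_clean)
    logits_adv = model(x_adv)

    # Standard adversarial training objective: cross-entropy on adversarial inputs.
    adv_ce = F.cross_entropy(logits_adv, y)

    # Penalize how much the logits change between the clean input and its
    # adversarial counterpart, i.e. the variation along the attack direction.
    # A simple mean L2 norm over the batch is used here as a concrete choice.
    variation = (logits_adv - logits_clean).norm(p=2, dim=1).mean()

    return adv_ce + lambda_reg * variation


# Usage sketch: x_adv would normally come from an attack such as PGD; a
# one-step FGSM perturbation is shown here only to keep the example
# self-contained.
def fgsm(model, x, y, eps=8 / 255):
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad = torch.autograd.grad(loss, x)[0]
    return (x + eps * grad.sign()).clamp(0, 1).detach()
```

Intuitively, driving the logits variation toward zero flattens the loss landscape along the attack direction, which is the smoothing effect the abstract attributes to the proposed regularizer.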
Original language: English
Article number: 109229
Journal: Pattern Recognition
Volume: 136
Early online date: 8 Dec 2022
Publication status: Published - Apr 2023

Keywords

  • cs.LG
