no code implementations • 25 May 2024 • Runqi Lin, Chaojian Yu, Bo Han, Hang Su, Tongliang Liu
Catastrophic overfitting (CO) presents a significant challenge in single-step adversarial training (AT), manifesting as highly distorted deep neural networks (DNNs) that are vulnerable to multi-step adversarial attacks.
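Below is a minimal sketch (not the paper's code) of how CO is typically exposed: a classifier trained with single-step FGSM can look robust against FGSM while collapsing under a multi-step PGD evaluation. The toy model, data, and epsilon value are placeholders chosen only for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm(model, x, y, eps):
    """Single-step attack used in fast adversarial training."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad = torch.autograd.grad(loss, x)[0]
    return (x + eps * grad.sign()).clamp(0, 1).detach()

def pgd(model, x, y, eps, alpha=None, steps=10):
    """Multi-step attack used to check whether robustness is genuine."""
    alpha = alpha if alpha is not None else eps / 4
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv = x_adv.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1).detach()
    return x_adv

# Toy classifier and random data stand in for the real model and dataset.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
x, y = torch.rand(8, 3, 32, 32), torch.randint(0, 10, (8,))
eps = 8 / 255

acc = lambda adv: (model(adv).argmax(1) == y).float().mean().item()
print("FGSM acc:", acc(fgsm(model, x, y, eps)))  # can stay high even after CO
print("PGD  acc:", acc(pgd(model, x, y, eps)))   # drops sharply once CO occurs
```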
2 code implementations • NeurIPS 2023 • Runqi Lin, Chaojian Yu, Tongliang Liu
Specifically, we design a novel method, termed Abnormal Adversarial Examples Regularization (AAER), which explicitly regularizes the variation of AAEs to hinder the classifier from becoming distorted.
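As a rough illustration of the idea, the sketch below adds a regularizer in the spirit of AAER to a single-step AT loss; it is not the authors' implementation. The criterion for flagging an adversarial example as "abnormal" (its loss drops below the clean loss), the output-variation penalty, and the lambda weights are all assumptions made for this example.

```python
import torch
import torch.nn.functional as F

def aaer_style_loss(model, x, y, eps, lam_num=1.0, lam_var=1.0):
    # Single-step (FGSM) adversarial example generation.
    x_req = x.clone().detach().requires_grad_(True)
    clean_logits = model(x_req)
    clean_loss = F.cross_entropy(clean_logits, y, reduction="none")
    grad = torch.autograd.grad(clean_loss.sum(), x_req)[0]
    x_adv = (x + eps * grad.sign()).clamp(0, 1).detach()

    adv_logits = model(x_adv)
    adv_loss = F.cross_entropy(adv_logits, y, reduction="none")

    # Assumed AAE criterion: the perturbation made the example *easier*, not harder.
    abnormal = (adv_loss < clean_loss.detach()).float()

    # Penalize how many AAEs appear and how far their outputs move (placeholder form).
    num_term = abnormal.mean()
    out_shift = ((adv_logits - clean_logits.detach()) ** 2).mean(1)
    var_term = (abnormal * out_shift).sum() / abnormal.sum().clamp(min=1)

    return adv_loss.mean() + lam_num * num_term + lam_var * var_term

# Usage inside a training step (model, x, y as in the previous sketch):
# loss = aaer_style_loss(model, x, y, eps=8 / 255); loss.backward()
```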
1 code implementation • 13 Oct 2023 • Runqi Lin, Chaojian Yu, Bo Han, Tongliang Liu
In this work, we adopt a unified perspective, focusing solely on natural patterns, to explore different types of overfitting.