Search Results for author: Lehui Xie

Found 2 papers, 0 papers with code

Push Stricter to Decide Better: A Class-Conditional Feature Adaptive Framework for Improving Adversarial Robustness

no code implementations • 1 Dec 2021 • Jia-Li Yin, Lehui Xie, Wanqing Zhu, Ximeng Liu, Bo-Hao Chen

However, most existing adversarial training methods focus on improving robust accuracy by strengthening the adversarial examples, while neglecting the growing shift between natural data and adversarial examples, which leads to a dramatic decrease in natural accuracy.
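The trade-off the abstract describes is often made explicit by training on a weighted mix of natural and adversarial loss. A minimal sketch on a toy logistic model (a generic mixed objective for illustration, not the paper's class-conditional feature adaptive framework; all function names here are illustrative):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(x, y, w):
    # logistic loss for a linear classifier; y in {-1, +1}
    return -np.log(sigmoid(y * np.dot(w, x)))

def fgsm_example(x, y, w, eps):
    # single-step adversarial example: one signed input-gradient step
    grad_x = -y * (1.0 - sigmoid(y * np.dot(w, x))) * w
    return x + eps * np.sign(grad_x)

def mixed_objective(x, y, w, eps, lam=0.5):
    # weighted sum of natural and adversarial loss; lam trades
    # natural accuracy (lam -> 0) against robust accuracy (lam -> 1)
    x_adv = fgsm_example(x, y, w, eps)
    return (1.0 - lam) * loss(x, y, w) + lam * loss(x_adv, y, w)
```

Pushing `lam` toward 1 optimizes purely for adversarial examples, which is the regime where the natural-accuracy drop described above appears.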

Adversarial Robustness

Robust Single-step Adversarial Training with Regularizer

no code implementations • 5 Feb 2021 • Lehui Xie, Yaopeng Wang, Jia-Li Yin, Ximeng Liu

Previous methods try to reduce the computational burden of adversarial training with single-step adversarial example generation schemes. These schemes improve efficiency but introduce the problem of catastrophic overfitting, where the robust accuracy against the Fast Gradient Sign Method (FGSM) reaches nearly 100% while the robust accuracy against Projected Gradient Descent (PGD) suddenly drops to 0% within a single epoch.
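The two attacks the abstract contrasts differ only in the number of gradient steps: FGSM takes one signed-gradient step, while PGD iterates and projects back into the perturbation budget. A minimal sketch on a toy logistic model (standard attack definitions for illustration; the regularizer proposed by the paper is not shown):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def logistic_loss(x, y, w):
    # y in {-1, +1}; loss = -log sigmoid(y * w.x)
    return -np.log(sigmoid(y * np.dot(w, x)))

def loss_grad_x(x, y, w):
    # analytic gradient of the logistic loss w.r.t. the input x
    return -y * (1.0 - sigmoid(y * np.dot(w, x))) * w

def fgsm(x, y, w, eps):
    # single-step attack: one signed-gradient step of size eps
    return x + eps * np.sign(loss_grad_x(x, y, w))

def pgd(x, y, w, eps, alpha=0.01, steps=10):
    # multi-step attack: repeated signed-gradient steps, each followed
    # by projection back into the L-infinity eps-ball around x
    x_adv = x.copy()
    for _ in range(steps):
        x_adv = x_adv + alpha * np.sign(loss_grad_x(x_adv, y, w))
        x_adv = np.clip(x_adv, x - eps, x + eps)
    return x_adv
```

Because PGD searches the same epsilon-ball with many small steps, a model can fit the single FGSM direction (near-100% FGSM robustness) while remaining fully vulnerable to PGD, which is the catastrophic-overfitting signature described above.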
