no code implementations • 22 Oct 2020 • Zifei Zhang, Kai Qiao, Jian Chen, Ningning Liang
Experimentally, we show that the attack success rate (ASR) of our adversarial attack reaches 58.38% on average, outperforming the state-of-the-art method by 12.1% on normally trained models and by 11.13% on adversarially trained models.
no code implementations • 1 Feb 2020 • Zifei Zhang, Kai Qiao, Lingyun Jiang, Linyuan Wang, Bin Yan
To alleviate the tradeoff between attack success rate and image fidelity, we propose AdvJND, a method that incorporates visual-model coefficients, namely just noticeable difference (JND) coefficients, into the distortion-function constraint when generating adversarial examples.
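The idea of weighting a perturbation by a JND map can be sketched as below. This is a minimal illustration, not the paper's algorithm: it assumes a simplified luminance-based JND model (in the spirit of classic Chou-Li-style models) and an FGSM-style sign step; the function names and the `eps` budget are hypothetical.

```python
import numpy as np

def luminance_jnd(img):
    """Rough per-pixel just-noticeable-difference map from background
    luminance (a simplified, illustrative model; the paper's exact
    JND formulation is not reproduced here). `img` is in [0, 255]."""
    return np.where(
        img <= 127,
        17.0 * (1.0 - np.sqrt(img / 127.0)) + 3.0,  # dark regions tolerate more change
        3.0 / 128.0 * (img - 127.0) + 3.0,          # bright regions: slowly rising threshold
    )

def fgsm_with_jnd(img, grad, eps=8.0):
    """FGSM-style step whose per-pixel magnitude is modulated by the
    JND map, so larger perturbations land where they are less visible,
    while `eps` caps the overall per-pixel budget."""
    step = np.clip(luminance_jnd(img), 0.0, eps)
    adv = img + step * np.sign(grad)
    return np.clip(adv, 0.0, 255.0)
```

Compared with a plain uniform-`eps` FGSM step, the perturbation here is spatially non-uniform: smooth, mid-gray regions (where small changes are easily seen) receive less distortion than dark or bright regions.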