no code implementations • 28 May 2024 • Qizhang Li, Yiwen Guo, WangMeng Zuo, Hao Chen
Nevertheless, due to the discrete nature of texts, the input gradient of LLMs struggles to precisely reflect the magnitude of loss change that results from token replacements in the prompt, leading to limited attack success rates against safety-aligned LLMs, even in the white-box setting.
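As an illustration of the approximation the excerpt refers to, here is a minimal sketch of how the input gradient is typically used to score token replacements, assuming a HuggingFace-style causal LM that accepts `inputs_embeds`; the function name and `loss_fn` are hypothetical:

```python
import torch
import torch.nn.functional as F

def replacement_loss_estimates(model, embedding_matrix, input_ids, loss_fn):
    # input_ids: (seq_len,) prompt token ids; embedding_matrix: (vocab, dim).
    one_hot = F.one_hot(input_ids, num_classes=embedding_matrix.size(0)).float()
    one_hot.requires_grad_(True)
    # The embedding lookup written as a matmul, so the loss becomes
    # differentiable in the (continuously relaxed) token choice.
    embeds = (one_hot @ embedding_matrix).unsqueeze(0)  # (1, seq_len, dim)
    loss = loss_fn(model(inputs_embeds=embeds).logits)
    loss.backward()
    # grad[i, v] linearly approximates the loss change from replacing token i
    # with vocabulary token v -- the estimate the paper argues is unreliable
    # for discrete text.
    return one_hot.grad
```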
1 code implementation • 12 Oct 2023 • Zehao Wang, Yiwen Guo, Qizhang Li, Guanglei Yang, WangMeng Zuo
Most existing data augmentation methods tend to find a compromise in augmenting the data, i.e., increasing the amplitude of augmentation carefully to avoid degrading some data too much and harming model performance.
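To make "amplitude" concrete, here is a hypothetical toy augmentation where a single scalar controls the trade-off described above (real augmentations in this setting are image transformations, not necessarily additive noise):

```python
import torch

def jittered(x, amplitude):
    # A single scalar controls how strongly a sample is distorted: larger
    # amplitudes yield more diverse data but risk destroying the semantics
    # of some samples -- the compromise the passage describes.
    return (x + amplitude * torch.randn_like(x)).clamp(0.0, 1.0)
```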
no code implementations • 21 Jul 2023 • Qizhang Li, Yiwen Guo, Xiaochen Yang, WangMeng Zuo, Hao Chen
Our ICLR work advocated enhancing the transferability of adversarial examples by incorporating a Bayesian formulation into the model parameters, which effectively emulates an ensemble of infinitely many deep neural networks. In this paper, we introduce a novel extension that incorporates the Bayesian formulation into the model input as well, enabling the joint diversification of both the model input and the model parameters.
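A minimal sketch of one attack iteration with joint diversification, using isotropic Gaussians as stand-in distributions for both parameters and inputs (the paper's actual posterior and sampling scheme may differ, and all names here are illustrative):

```python
import copy
import torch
import torch.nn.functional as F

def joint_bayesian_step(model, x_adv, y, n_samples=5, param_std=0.01,
                        input_std=0.05, step_size=2 / 255):
    # Average gradients over randomly perturbed model parameters AND inputs.
    grad = torch.zeros_like(x_adv)
    for _ in range(n_samples):
        m = copy.deepcopy(model)
        with torch.no_grad():
            for p in m.parameters():
                p.add_(param_std * torch.randn_like(p))  # diversify parameters
        x = (x_adv + input_std * torch.randn_like(x_adv)).requires_grad_(True)
        loss = F.cross_entropy(m(x), y)  # untargeted objective
        grad += torch.autograd.grad(loss, x)[0]
    # Ascend the sign of the averaged gradient (an L_inf-style update).
    return (x_adv + step_size * grad.sign()).clamp(0.0, 1.0)
```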
2 code implementations • NeurIPS 2023 • Qizhang Li, Yiwen Guo, WangMeng Zuo, Hao Chen
In particular, the proposed method, named intermediate-level perturbation decay (ILPD), encourages the intermediate-level perturbation to lie in an effective adversarial direction and to have a large magnitude simultaneously.
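A minimal ILA-style surrogate captures that intuition; note this sketch only shows the direction-times-magnitude reward, not ILPD's actual objective, which additionally decays the benign feature component:

```python
import torch

def intermediate_level_objective(feat_nat, feat_adv, guide):
    # feat_nat / feat_adv: intermediate-layer features of the natural and
    # current adversarial inputs; guide: a directional guide, e.g. the
    # feature shift produced by a baseline attack on the same image.
    delta = (feat_adv - feat_nat).flatten()
    # A dot product factors into magnitude * alignment, so maximizing it
    # rewards an effective direction and a large magnitude at once.
    return torch.dot(delta, guide.flatten())
```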
1 code implementation • 10 Feb 2023 • Qizhang Li, Yiwen Guo, WangMeng Zuo, Hao Chen
In this paper, by contrast, we opt for diversity in substitute models and advocate attacking a Bayesian model to achieve desirable transferability.
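The attack objective can be read as a Monte-Carlo estimate of the expected loss over a weight posterior; a sketch follows, where an isotropic Gaussian around the trained weights stands in for the posterior (the paper obtains and samples its posterior differently):

```python
import copy
import torch
import torch.nn.functional as F

def posterior_ensemble_loss(model, x, y, n_models=10, std=0.01):
    # Estimates E_{theta ~ posterior}[CE(f_theta(x), y)] by sampling
    # weight perturbations around the trained substitute model.
    total = x.new_zeros(())
    for _ in range(n_models):
        m = copy.deepcopy(model)
        with torch.no_grad():
            for p in m.parameters():
                p.add_(std * torch.randn_like(p))
        total = total + F.cross_entropy(m(x), y)
    return total / n_models  # maximize w.r.t. x to attack the Bayesian model
```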
1 code implementation • 18 Jul 2022 • Qiying Yu, Jieming Lou, Xianyuan Zhan, Qizhang Li, WangMeng Zuo, Yang Liu, Jingjing Liu
Contrastive learning (CL) has recently been applied to adversarial learning tasks.
1 code implementation • 23 May 2022 • Qizhang Li, Yiwen Guo, WangMeng Zuo, Hao Chen
The vulnerability of deep neural networks (DNNs) to adversarial examples has attracted great attention in the machine learning community.
1 code implementation • 21 Mar 2022 • Yiwen Guo, Qizhang Li, WangMeng Zuo, Hao Chen
This paper substantially extends our work published at ECCV, in which an intermediate-level attack was proposed to improve the transferability of some baseline adversarial examples.
no code implementations • 6 Mar 2022 • Yuanze Li, Yiwen Guo, Qizhang Li, Hongzhi Zhang, WangMeng Zuo
Despite remarkable progress, the challenge of optimally learning different tasks simultaneously remains underexplored.
1 code implementation • NeurIPS 2020 • Yiwen Guo, Qizhang Li, Hao Chen
The vulnerability of deep neural networks (DNNs) to adversarial examples has drawn great attention from the community.
2 code implementations • NeurIPS 2020 • Qizhang Li, Yiwen Guo, Hao Chen
We propose three mechanisms for training with a very small dataset (on the order of tens of examples) and find that prototypical reconstruction is the most effective.
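One possible reading of "prototypical reconstruction" is sketched below: a substitute model is trained so that every input reconstructs to its class prototype, forcing class identity into the representation even with very few examples. The architecture, names, and loss here are illustrative assumptions, not the paper's exact design:

```python
import torch
import torch.nn as nn

class TinyAutoencoder(nn.Module):
    # A toy encoder-decoder; the paper's substitute architectures differ.
    def __init__(self, channels=3):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(channels, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU())
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, channels, 4, stride=2, padding=1),
            nn.Sigmoid())

    def forward(self, x):
        return self.decoder(self.encoder(x))

def prototypical_reconstruction_loss(model, x, prototypes, labels):
    # Each input is trained to reconstruct its class prototype rather than
    # itself, so the latent space must encode class identity even when only
    # tens of training examples are available.
    return nn.functional.mse_loss(model(x), prototypes[labels])
```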
2 code implementations • ECCV 2020 • Qizhang Li, Yiwen Guo, Hao Chen
The transferability of adversarial examples across deep neural network (DNN) models is the crux of a spectrum of black-box attacks.