1 code implementation • 22 Oct 2023 • Minxuan Lv, Chengwei Dai, Kun Li, Wei Zhou, Songlin Hu
Neural network models are vulnerable to adversarial examples, and the transferability of these examples across models further increases the risk of adversarial attacks.
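The transferability described here can be illustrated with a minimal toy sketch (not the paper's method): a perturbation crafted against one linear model using an FGSM-style signed-gradient step also flips the prediction of a second, unseen model with similar weights. All weights and values below are made up for illustration.

```python
import numpy as np

# Hypothetical setup: two linear "models" with similar decision boundaries.
w_a = np.array([1.0, 1.0])    # source model the attacker can inspect
w_b = np.array([0.9, 1.1])    # unseen target model

x = np.array([0.5, 0.5])      # clean input; both models score it positive
eps = 0.6                     # perturbation budget

# FGSM-style step: perturb against the sign of the source model's
# input gradient. For a linear score w.x, that gradient is just w.
x_adv = x - eps * np.sign(w_a)

score_a = w_a @ x_adv         # source model now scores negative
score_b = w_b @ x_adv         # the perturbation transfers to the target
print(score_a, score_b)
```

Because both models rely on similar features, the single adversarial input fools them both, which is the transfer risk the abstract refers to.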
1 code implementation • 21 Oct 2023 • Chengwei Dai, Minxuan Lv, Kun Li, Wei Zhou
We study model extraction attacks in natural language processing (NLP), in which attackers aim to steal victim models by repeatedly querying open Application Programming Interfaces (APIs).
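The query-and-imitate loop behind such extraction attacks can be sketched in a few lines; this is a generic illustration under assumed conditions (a simulated label-only API and a linear victim), not the paper's attack.

```python
import numpy as np

rng = np.random.default_rng(0)
w_victim = np.array([2.0, -1.0])  # hidden victim parameters

def victim_api(x):
    """Simulated open API: the attacker sees only the predicted label."""
    return 1 if x @ w_victim > 0 else -1

# Step 1: repeatedly query the API to build a surrogate training set.
queries = rng.uniform(-1, 1, size=(500, 2))
labels = np.array([victim_api(x) for x in queries])

# Step 2: train a surrogate (here a simple perceptron) on the
# stolen input-label pairs.
w_surrogate = np.zeros(2)
for _ in range(50):  # epochs
    for x, y in zip(queries, labels):
        if y * (x @ w_surrogate) <= 0:
            w_surrogate += y * x

# Step 3: check how often the surrogate imitates the victim.
test = rng.uniform(-1, 1, size=(1000, 2))
agreement = np.mean([victim_api(x) == (1 if x @ w_surrogate > 0 else -1)
                     for x in test])
print(f"surrogate-victim agreement: {agreement:.2f}")
```

The surrogate never sees the victim's parameters, only its API outputs, yet it ends up agreeing with the victim on most inputs, which is exactly the theft scenario the abstract describes.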