1 code implementation • 20 Mar 2024 • Zixuan Wang, Jia Jia, Shikun Sun, Haozhe Wu, Rong Han, Zhenyu Li, Di Tang, Jiaqing Zhou, Jiebo Luo
However, camera movement synthesis driven by music and dance remains an unsolved and challenging problem due to the scarcity of paired data.
no code implementations • 29 Jan 2023 • Rui Zhu, Di Tang, Siyuan Tang, Guanhong Tao, Shiqing Ma, XiaoFeng Wang, Haixu Tang
Finally, we perform both theoretical and experimental analyses, showing that the GRASP enhancement does not reduce the effectiveness of stealthy attacks against backdoor detection methods based on weight analysis, nor against other backdoor mitigation methods that do not rely on detection.
no code implementations • 9 Dec 2022 • Rui Zhu, Di Tang, Siyuan Tang, XiaoFeng Wang, Haixu Tang
Our idea is to retrain a given DNN model on randomly labeled clean data, inducing catastrophic forgetting (CF) in the model so that it abruptly forgets both the primary and backdoor tasks; we then recover the primary task by retraining the randomized model on correctly labeled clean data.
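The forget-then-recover procedure can be sketched on a toy model. Below, a logistic-regression classifier stands in for the DNN, and the 2-D synthetic task, function names, and hyperparameters are all illustrative; in the actual method, forgetting on a contaminated DNN also wipes the backdoor task:

```python
import numpy as np

def train(w, X, y, lr=0.1, epochs=300):
    """Fit a logistic-regression weight vector by plain gradient descent."""
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w = w - lr * X.T @ (p - y) / len(y)
    return w

def accuracy(w, X, y):
    return float(np.mean((X @ w > 0) == (y == 1)))

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 2))
y = (X[:, 0] > 0).astype(float)          # primary task: sign of first feature

w0 = train(np.zeros(2), X, y)            # the given (trained) model

# Phase 1: retrain on RANDOMLY labeled clean data -> catastrophic forgetting
y_rand = rng.integers(0, 2, size=len(y)).astype(float)
w1 = train(w0, X, y_rand)                # weights collapse toward zero

# Phase 2: retrain on correctly labeled clean data -> recover the primary task
w2 = train(w1, X, y)
```

After phase 1 the model's fit to the primary task is destroyed (its weights shrink toward the random-label optimum near zero); after phase 2 the primary task is restored from clean, correctly labeled data alone.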
no code implementations • 12 Oct 2022 • Di Tang, Rui Zhu, XiaoFeng Wang, Haixu Tang, Yi Chen
Despite extensive studies on backdoor attacks and detection, fundamental questions remain unanswered regarding the limits of the adversary's capability to attack and the defender's capability to detect.
1 code implementation • 14 Sep 2022 • Jiawei Liu, Yangyang Kang, Di Tang, Kaisong Song, Changlong Sun, XiaoFeng Wang, Wei Lu, Xiaozhong Liu
In this study, we propose an imitation adversarial attack on black-box neural passage ranking models.
no code implementations • 31 Aug 2019 • Shuaike Dong, Zhou Li, Di Tang, Jiongyi Chen, Menghan Sun, Kehuan Zhang
However, in the meantime, such a fast-growing technology has also introduced new privacy issues, which need to be better understood and measured.
Cryptography and Security
1 code implementation • 2 Aug 2019 • Di Tang, Xiao-Feng Wang, Haixu Tang, Kehuan Zhang
A security threat to deep neural networks (DNN) is backdoor contamination, in which an adversary poisons the training data of a target model to inject a Trojan so that images carrying a specific trigger will always be classified into a specific label.
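The contamination threat described above can be illustrated with a generic trigger-poisoning sketch (this is the standard BadNets-style injection, not this paper's specific attack or defense; all names and the 3x3 corner trigger are illustrative):

```python
import numpy as np

def poison_dataset(images, labels, target_label, rate=0.1, seed=0):
    """Stamp a small trigger patch on a random fraction of training
    images and flip their labels to the attacker's target class."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(rate * len(images))
    idx = rng.choice(len(images), size=n_poison, replace=False)
    images[idx, -3:, -3:] = 1.0      # 3x3 white square in the corner
    labels[idx] = target_label       # force the target class
    return images, labels, idx

# toy grayscale "dataset": 100 blank 8x8 images, all labeled class 0
imgs = np.zeros((100, 8, 8))
labs = np.zeros(100, dtype=int)
p_imgs, p_labs, idx = poison_dataset(imgs, labs, target_label=7, rate=0.1)
```

A model trained on the contaminated set learns to associate the trigger patch with the target class, so any test image carrying the patch is classified as that class.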
Cryptography and Security
no code implementations • 13 Mar 2018 • Zhe Zhou, Di Tang, Xiao-Feng Wang, Weili Han, Xiangyu Liu, Kehuan Zhang
We propose a brand-new attack against face recognition systems: the subject is illuminated with infrared light according to adversarial examples computed by our algorithm, so that face recognition systems can be bypassed or misled while the infrared perturbations remain invisible to the naked eye.
Cryptography and Security
no code implementations • 13 Feb 2018 • Di Tang, XiaoFeng Wang, Kehuan Zhang
To launch black-box attacks against a Deep Neural Network (DNN) based Face Recognition (FR) system, one needs to build substitute models to simulate the target model, so the adversarial examples discovered from substitute models could also mislead the target model.
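The substitute-model idea can be sketched with a linear toy model and a single FGSM step (a generic transfer-based illustration, not this paper's algorithm; the hidden target weights, query budget, and epsilon are all illustrative):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_logreg(X, y, lr=0.1, epochs=300):
    """Fit a logistic-regression substitute by gradient descent."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        w -= lr * X.T @ (sigmoid(X @ w) - y) / len(y)
    return w

# Black-box target model: we can only query its output labels.
w_target = np.array([2.0, -1.0])         # hidden from the attacker
def target_label(x):
    return int(x @ w_target > 0)

# 1. Query the target on attacker-chosen inputs, collect its labels.
rng = np.random.default_rng(0)
X_q = rng.normal(size=(400, 2))
y_q = np.array([target_label(x) for x in X_q], dtype=float)

# 2. Train a substitute model on the query/label pairs.
w_sub = train_logreg(X_q, y_q)

# 3. Craft an adversarial example on the substitute (one FGSM step)
#    and rely on transferability to fool the target.
x0 = np.array([0.3, 0.0])                     # target classifies as 1
grad_x = (sigmoid(x0 @ w_sub) - 1.0) * w_sub  # d(loss)/dx, true label 1
x_adv = x0 + 0.5 * np.sign(grad_x)            # eps = 0.5
```

Because the substitute's decision boundary approximates the target's, the perturbation computed with white-box access to the substitute also flips the black-box target's prediction.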
no code implementations • 6 Jan 2018 • Di Tang, Zhe Zhou, Yinqian Zhang, Kehuan Zhang
The overall accuracy of our liveness detection system is 98.8%, and its robustness was evaluated in different scenarios.