1 code implementation • 21 Apr 2024 • Huiqiang Chen, Tianqing Zhu, Xin Yu, Wanlei Zhou
Current research centres on efficient unlearning to erase the influence of data from the model and neglects the subsequent impacts on the remaining data.
no code implementations • 26 Dec 2023 • Dayong Ye, Tianqing Zhu, Congcong Zhu, Derui Wang, Zewei Shi, Sheng Shen, Wanlei Zhou, Minhui Xue
Machine unlearning refers to the process of mitigating the influence of specific training data on machine learning models based on removal requests from data owners.
no code implementations • 7 Nov 2023 • Huan Tian, Guangsheng Zhang, Bo Liu, Tianqing Zhu, Ming Ding, Wanlei Zhou
It leverages the difference in the predictions from both the original and fairness-enhanced models and exploits the observed prediction gaps as attack clues.
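The prediction-gap clue described above can be sketched as follows. This is a minimal illustration only; the function name and the use of an L1 distance over probability vectors are assumptions for demonstration, not the paper's exact formulation:

```python
import numpy as np

def prediction_gap(original_probs: np.ndarray, fair_probs: np.ndarray) -> np.ndarray:
    """Per-sample gap between the original model's and the
    fairness-enhanced model's predicted probability vectors
    (L1 distance), usable as an attack signal."""
    return np.abs(original_probs - fair_probs).sum(axis=1)

# Toy example: the sample whose prediction shifts most between the two
# models produces the largest gap, which an attacker can exploit as a clue.
orig = np.array([[0.9, 0.1], [0.60, 0.40]])
fair = np.array([[0.7, 0.3], [0.59, 0.41]])
gaps = prediction_gap(orig, fair)  # first sample's gap (0.4) >> second's (0.02)
```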
no code implementations • 19 Aug 2023 • Hui Sun, Tianqing Zhu, Wenhan Chang, Wanlei Zhou
Based on the substitution mechanism and fake labels, we propose a cascaded unlearning approach for both item and class unlearning within GAN models, in which the unlearning and learning processes run in a cascaded manner.
1 code implementation • 18 Aug 2023 • Penghui Wen, Kun Hu, Wenxi Yue, Sen Zhang, Wanlei Zhou, Zhiyong Wang
Robust audio anti-spoofing has become increasingly challenging due to recent advancements in deepfake techniques.
no code implementations • 25 Jun 2023 • Huiqiang Chen, Tianqing Zhu, Tao Zhang, Wanlei Zhou, Philip S. Yu
Federated learning (FL) has been a hot topic in recent years.
no code implementations • 24 Jun 2023 • Shuai Zhou, Tianqing Zhu, Dayong Ye, Xin Yu, Wanlei Zhou
Hence, in this paper, we propose a new training paradigm for a learning-based model inversion attack that can achieve higher attack accuracy in a black-box setting.
no code implementations • 6 Jun 2023 • Heng Xu, Tianqing Zhu, Lefeng Zhang, Wanlei Zhou, Philip S. Yu
Machine learning has attracted widespread attention and evolved into an enabling technology for a wide range of highly successful applications, such as intelligent computer vision, speech recognition, medical diagnosis, and more.
no code implementations • 2 Jun 2023 • Chi Liu, Tianqing Zhu, Sheng Shen, Wanlei Zhou
GAN-generated image detection has now become the first line of defense against the malicious use of machine-synthesized image manipulations such as deepfakes.
no code implementations • 23 Mar 2023 • Huajie Chen, Tianqing Zhu, Yuan Zhao, Bo Liu, Xin Yu, Wanlei Zhou
By avoiding high-frequency artifacts and manipulating the frequency distribution of the embedded feature map, LIDS achieves improved robustness against attacks that distort the high-frequency components of container images.
no code implementations • 31 Dec 2022 • Yunjiao Lei, Dayong Ye, Sheng Shen, Yulei Sui, Tianqing Zhu, Wanlei Zhou
A large number of studies have focused on these security and privacy problems in reinforcement learning.
no code implementations • 20 Oct 2022 • Guangsheng Zhang, Bo Liu, Huan Tian, Tianqing Zhu, Ming Ding, Wanlei Zhou
As a booming research area in the past decade, deep learning technologies have been driven by big data collected and processed on an unprecedented scale.
no code implementations • 28 Sep 2022 • Mengde Han, Tianqing Zhu, Wanlei Zhou
The major challenge is to find a way to guarantee that sensitive personal information is not disclosed while data is published and analyzed.
no code implementations • 22 Mar 2022 • Chi Liu, Huajie Chen, Tianqing Zhu, Jun Zhang, Wanlei Zhou
To evaluate the attack efficacy, we crafted heterogeneous security scenarios in which the detectors were embedded with different levels of defense and the attackers' background knowledge of the data varied.
no code implementations • 13 Mar 2022 • Dayong Ye, Tianqing Zhu, Shuai Zhou, Bo Liu, Wanlei Zhou
In launching a contemporary model inversion attack, the strategies discussed are generally based on either predicted confidence score vectors, i.e., black-box attacks, or the parameters of a target model, i.e., white-box attacks.
no code implementations • 13 Mar 2022 • Dayong Ye, Huiqiang Chen, Shuai Zhou, Tianqing Zhu, Wanlei Zhou, Shouling Ji
However, these results may not mean that transfer learning models are impervious to model inversion attacks.
no code implementations • 13 Mar 2022 • Dayong Ye, Sheng Shen, Tianqing Zhu, Bo Liu, Wanlei Zhou
The experimental results show the method to be an effective and timely defense against both membership inference and model inversion attacks with no reduction in accuracy.
no code implementations • 12 Mar 2021 • Hanyu Xue, Bo Liu, Ming Ding, Tianqing Zhu, Dayong Ye, Li Song, Wanlei Zhou
The excessive use of images in social networks, government databases, and industrial applications has posed great privacy risks and raised serious concerns from the public.
no code implementations • 19 Oct 2020 • Sheng Shen, Tianqing Zhu, Di Wu, Wei Wang, Wanlei Zhou
Federated learning is an improved version of distributed machine learning that further offloads operations which would usually be performed by a central server.
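The aggregation step that a federated learning server performs can be sketched with the standard FedAvg weighted average. This is an illustrative baseline, not the offloading scheme the paper itself proposes:

```python
import numpy as np

def fed_avg(client_weights: list[np.ndarray], client_sizes: list[int]) -> np.ndarray:
    """FedAvg: combine client model parameters as a weighted average,
    where each client's weight is proportional to its local dataset size."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Two clients; the second holds 3x more data, so it dominates the average.
agg = fed_avg([np.array([1.0, 2.0]), np.array([3.0, 4.0])], [1, 3])
# → array([2.5, 3.5])
```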
no code implementations • 18 Oct 2020 • Jianchao Lu, Xi Zheng, Tianyi Zhang, Michael Sheng, Chen Wang, Jiong Jin, Shui Yu, Wanlei Zhou
In this paper, we propose a novel driver fatigue detection method by embedding surface electromyography (sEMG) sensors on a steering wheel.
no code implementations • 7 Oct 2020 • Tao Zhang, Tianqing Zhu, Ping Xiong, Huan Huo, Zahir Tari, Wanlei Zhou
In this way, the proposed feature selection scheme relieves the impact of data correlation, and moreover, the privacy of correlated data in learning is guaranteed.
no code implementations • 25 Sep 2020 • Tao Zhang, Tianqing Zhu, Jing Li, Mengde Han, Wanlei Zhou, Philip S. Yu
A set of experiments on real-world and synthetic datasets show that our method is able to use unlabeled data to achieve a better trade-off between accuracy and discrimination.
no code implementations • 14 Sep 2020 • Tao Zhang, Tianqing Zhu, Mengde Han, Jing Li, Wanlei Zhou, Philip S. Yu
Extensive experiments show that our method is able to achieve fair semi-supervised learning, and reach a better trade-off between accuracy and fairness than fair supervised learning.
no code implementations • 16 Aug 2020 • Dayong Ye, Tianqing Zhu, Sheng Shen, Wanlei Zhou, Philip S. Yu
To the best of our knowledge, this paper is the first to apply differential privacy to the field of multi-agent planning as a means of preserving the privacy of agents for logistic-like problems.
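The differential-privacy building block behind such an approach can be sketched with the standard Laplace mechanism. This is a generic illustration of epsilon-differential privacy, not the paper's specific planning mechanism, and the cost-obfuscation example is hypothetical:

```python
import numpy as np

def laplace_mechanism(value: float, sensitivity: float, epsilon: float,
                      rng: np.random.Generator) -> float:
    """Add Laplace noise with scale sensitivity/epsilon, the standard way
    to release a numeric quantity under epsilon-differential privacy."""
    scale = sensitivity / epsilon
    return value + rng.laplace(loc=0.0, scale=scale)

rng = np.random.default_rng(0)
# Hypothetical use: an agent obfuscates its true cost estimate (42.0)
# before sharing it with other agents during planning.
noisy_cost = laplace_mechanism(42.0, sensitivity=1.0, epsilon=0.5, rng=rng)
```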
no code implementations • 5 Aug 2020 • Tianqing Zhu, Dayong Ye, Wei Wang, Wanlei Zhou, Philip S. Yu
Artificial Intelligence (AI) has attracted a great deal of attention in recent years.
no code implementations • 4 Nov 2019 • Shigang Liu, Jun Zhang, Yang Xiang, Wanlei Zhou, Dongxi Xiang
However, previous studies usually focused on different classifiers and overlooked the class imbalance problem in real-world biomedical datasets.