no code implementations • EMNLP 2021 • Jieren Deng, Chenghong Wang, Xianrui Meng, Yijue Wang, Ji Li, Sheng Lin, Shuo Han, Fei Miao, Sanguthevar Rajasekaran, Caiwen Ding
In this work, we consider the problem of designing secure and efficient federated learning (FL) frameworks.
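Although the snippet only states the problem, a common building block in secure FL designs is secure aggregation: the server learns only the sum of client updates, never an individual one. Below is a minimal sketch of pairwise additive masking in the spirit of Bonawitz et al.'s secure aggregation, not the protocol proposed in this paper; `seed_fn` and the other names are illustrative.

```python
import numpy as np

def masked_update(update, client_id, peer_ids, seed_fn):
    """Add pairwise random masks that cancel when all masked
    updates are summed. `seed_fn(i, j)` must return the same
    seed regardless of argument order."""
    masked = update.copy()
    for peer in peer_ids:
        rng = np.random.default_rng(seed_fn(client_id, peer))
        mask = rng.standard_normal(update.shape)
        # The lower id adds the mask, the higher id subtracts it,
        # so each pairwise term cancels in the server-side sum.
        masked += mask if client_id < peer else -mask
    return masked

# Toy round: three clients with shared pairwise seeds.
seed_fn = lambda i, j: hash((min(i, j), max(i, j))) % (2**32)
updates = {c: np.full(4, float(c + 1)) for c in range(3)}
masked = [masked_update(u, c, [p for p in range(3) if p != c], seed_fn)
          for c, u in updates.items()]
print(np.sum(masked, axis=0))  # equals the sum of raw updates: [6. 6. 6. 6.]
```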
no code implementations • ACL 2022 • Shaoyi Huang, Dongkuan Xu, Ian E. H. Yen, Yijue Wang, Sung-En Chang, Bingbing Li, Shiyang Chen, Mimi Xie, Sanguthevar Rajasekaran, Hang Liu, Caiwen Ding
Conventional wisdom in pruning Transformer-based language models holds that pruning reduces model expressiveness and is therefore more likely to cause underfitting than overfitting.
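As background for what is being pruned, the standard baseline is magnitude pruning: zero out the smallest-magnitude weights. A minimal PyTorch sketch of that baseline follows; it is not the authors' training scheme, and the layer size is just a Transformer-like placeholder.

```python
import torch

def magnitude_prune(weight: torch.Tensor, sparsity: float) -> torch.Tensor:
    """Zero out the `sparsity` fraction of weights with the smallest
    magnitudes; returns the binary mask that was applied."""
    k = int(sparsity * weight.numel())
    threshold = weight.abs().flatten().kthvalue(k).values
    mask = (weight.abs() > threshold).float()
    with torch.no_grad():
        weight.mul_(mask)  # apply the mask in place
    return mask

layer = torch.nn.Linear(768, 768)  # a Transformer-sized projection
mask = magnitude_prune(layer.weight, sparsity=0.9)
print(f"kept {int(mask.sum())} of {mask.numel()} weights")
```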
1 code implementation • Findings (EMNLP) 2021 • Jieren Deng, Yijue Wang, Ji Li, Chao Shang, Cao Qin, Hang Liu, Sanguthevar Rajasekaran, Caiwen Ding
In this paper, we make the first attempt to formulate the gradient attack problem on Transformer-based language models, and we propose a gradient attack algorithm, TAG, to reconstruct local training data.
Federated Learning • Cryptography and Security
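TAG belongs to the family of gradient-matching attacks: the adversary optimizes dummy inputs until their gradients match the gradients the client shared. Below is a minimal DLG-style sketch on a toy linear model; TAG itself targets Transformer language models and uses a different gradient distance, so everything here is illustrative only.

```python
import torch

# Victim: a tiny model and one private training example.
model = torch.nn.Linear(8, 2)
x_true = torch.randn(1, 8)
y_true = torch.tensor([1])
loss = torch.nn.functional.cross_entropy(model(x_true), y_true)
true_grads = torch.autograd.grad(loss, model.parameters())

# Attacker: optimize dummy data until its gradients match.
x_dummy = torch.randn(1, 8, requires_grad=True)
y_dummy = torch.randn(1, 2, requires_grad=True)  # soft label
opt = torch.optim.LBFGS([x_dummy, y_dummy])

def closure():
    opt.zero_grad()
    dummy_loss = torch.nn.functional.cross_entropy(
        model(x_dummy), torch.softmax(y_dummy, dim=-1))
    dummy_grads = torch.autograd.grad(dummy_loss, model.parameters(),
                                      create_graph=True)
    # L2 distance between dummy and observed gradients.
    grad_diff = sum(((dg - tg) ** 2).sum()
                    for dg, tg in zip(dummy_grads, true_grads))
    grad_diff.backward()
    return grad_diff

for _ in range(30):
    opt.step(closure)
print("reconstruction error:", (x_dummy - x_true).norm().item())
```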
no code implementations • 14 Sep 2020 • Yijue Wang, Jieren Deng, Dan Guo, Chenghong Wang, Xianrui Meng, Hang Liu, Caiwen Ding, Sanguthevar Rajasekaran
Distributed learning, such as federated or collaborative learning, enables model training on decentralized user data while collecting only local gradients, so that data is processed close to its source for privacy.
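For concreteness, the server-side step in such systems is typically a weighted average of client updates, as in FedAvg. A minimal sketch of that aggregation step; the names are illustrative, not this paper's framework.

```python
import numpy as np

def fedavg(client_updates, client_sizes):
    """Weighted average of client updates, weighted by the number
    of local examples each client trained on."""
    total = sum(client_sizes)
    return sum((n / total) * u
               for u, n in zip(client_updates, client_sizes))

# Toy round: three clients send local gradient vectors.
updates = [np.array([0.1, -0.2]), np.array([0.3, 0.0]), np.array([0.2, 0.1])]
sizes = [100, 50, 50]
global_step = fedavg(updates, sizes)
print(global_step)  # the server applies this to the global model
```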
no code implementations • 28 Aug 2020 • Yijue Wang, Chenghong Wang, Zigeng Wang, Shanglin Zhou, Hang Liu, Jinbo Bi, Caiwen Ding, Sanguthevar Rajasekaran
Large model sizes, heavy computation, and vulnerability to membership inference attacks (MIA) have impeded the adoption of deep learning and deep neural networks (DNNs), especially on mobile devices.
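A membership inference attack exploits the gap between a model's behavior on training members and non-members; the simplest variant thresholds prediction confidence. A toy sketch of that baseline attack (Yeom et al.-style), not the defense studied in this paper; the scores are fabricated for illustration.

```python
import numpy as np

def confidence_mia(model_confidences, threshold=0.9):
    """Predict 'member' when the model's confidence on its predicted
    class exceeds a threshold: overfit models are systematically
    more confident on their training data."""
    return model_confidences > threshold

# Toy confidence scores: members (seen in training) vs. non-members.
member_conf = np.array([0.99, 0.97, 0.95, 0.88])
nonmember_conf = np.array([0.70, 0.92, 0.60, 0.55])
tpr = confidence_mia(member_conf).mean()
fpr = confidence_mia(nonmember_conf).mean()
print(f"attack TPR={tpr:.2f}, FPR={fpr:.2f}")
```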
1 code implementation • NeurIPS 2019 • Xingyu Cai, Tingyang Xu, Jin-Feng Yi, Junzhou Huang, Sanguthevar Rajasekaran
Dynamic Time Warping (DTW) is widely used as a similarity measure in various domains.
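For reference, DTW aligns two sequences with a dynamic program over monotone warping paths; the classic O(nm) recurrence below is standard background, not this paper's contribution (which learns on top of DTW).

```python
import numpy as np

def dtw(a, b):
    """Classic DTW distance between 1-D sequences a and b:
    D[i,j] = |a_i - b_j| + min(D[i-1,j], D[i,j-1], D[i-1,j-1])."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

print(dtw([0, 1, 2, 1, 0], [0, 0, 1, 2, 1, 0]))  # 0.0: same shape, time-warped
```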
1 code implementation • NeurIPS 2019 • Xia Xiao, Zigeng Wang, Sanguthevar Rajasekaran
Reducing model redundancy is essential for deploying complex deep learning models on resource-limited or time-sensitive devices.
no code implementations • ICLR 2018 • Xia Xiao, Sanguthevar Rajasekaran
In our model, we propose an adjustment component that collects the data points produced by all generators, learns the boundary between each pair of generators, and provides an error signal that separates the supports of the generated distributions.
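Concretely, such an adjustment component can be realized as a classifier over generator identities whose loss is fed back to the generators. A hedged PyTorch sketch under that assumption; the paper's exact architecture and loss may differ, and all names here are illustrative.

```python
import torch
import torch.nn.functional as F

n_generators, z_dim, x_dim = 3, 16, 8
generators = [torch.nn.Linear(z_dim, x_dim) for _ in range(n_generators)]
# Adjustment component: classifies which generator made a sample.
adjuster = torch.nn.Linear(x_dim, n_generators)
opt_adj = torch.optim.Adam(adjuster.parameters(), lr=1e-3)
opt_gen = torch.optim.Adam([p for g in generators for p in g.parameters()],
                           lr=1e-3)

for step in range(100):
    z = torch.randn(n_generators, 32, z_dim)
    samples = torch.cat([g(z[i]) for i, g in enumerate(generators)])
    labels = torch.arange(n_generators).repeat_interleave(32)

    # 1) Train the adjuster to tell the generators apart.
    adj_loss = F.cross_entropy(adjuster(samples.detach()), labels)
    opt_adj.zero_grad()
    adj_loss.backward()
    opt_adj.step()

    # 2) Error signal to the generators: make your own samples easy
    #    to attribute, i.e. push the generated supports apart.
    sep_loss = F.cross_entropy(adjuster(samples), labels)
    opt_gen.zero_grad()
    sep_loss.backward()
    opt_gen.step()
```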