no code implementations • 13 Mar 2024 • Ming Dong, Yujing Chen, Miao Zhang, Hao Sun, Tingting He
We find that by introducing a small number of Chinese-specific rich semantic structures, LLMs outperform BERT-based models on the few-shot Chinese Spell Checking (CSC) task.
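A minimal sketch of the general idea of pairing few-shot demonstrations with semantic-structure annotations in an LLM prompt. The example sentences, the `structure` labels, and the prompt wording are all illustrative placeholders, not the paper's actual annotation scheme:

```python
# Hedged sketch: few-shot prompt construction for Chinese Spell Checking (CSC),
# where each in-context example carries a semantic-structure hint.
# Demonstrations and "structure" labels below are invented placeholders.

FEW_SHOT_EXAMPLES = [
    {"input": "我今天很高心。", "structure": "subject-predicate", "output": "我今天很高兴。"},
    {"input": "他在图书管看书。", "structure": "locative phrase", "output": "他在图书馆看书。"},
]

def build_csc_prompt(sentence: str) -> str:
    """Assemble a few-shot prompt that pairs each demonstration with
    its semantic-structure annotation before the corrected sentence."""
    lines = ["Correct the spelling errors in each Chinese sentence."]
    for ex in FEW_SHOT_EXAMPLES:
        lines.append(f"Sentence: {ex['input']}")
        lines.append(f"Structure: {ex['structure']}")
        lines.append(f"Correction: {ex['output']}")
    lines.append(f"Sentence: {sentence}")
    lines.append("Correction:")
    return "\n".join(lines)

print(build_csc_prompt("天气真好，我们去公圆吧。"))
```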
no code implementations • 1 Sep 2021 • Yujing Chen, Zheng Chai, Yue Cheng, Huzefa Rangwala
We propose a novel approach, FedConD, to detect and handle concept drift on local devices and minimize its effect on model performance in asynchronous federated learning (FL).
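One simple way to flag local concept drift is to compare a client's recent training losses against an earlier window. The window size, threshold, and decision rule below are illustrative assumptions, not FedConD's actual detector:

```python
import numpy as np

def detect_drift(loss_history, window=5, threshold=0.1):
    """Flag concept drift when the mean local loss over the most recent
    window rises noticeably above the preceding window. Window size and
    threshold are illustrative choices, not the paper's tuned values."""
    if len(loss_history) < 2 * window:
        return False
    recent = np.mean(loss_history[-window:])
    past = np.mean(loss_history[-2 * window:-window])
    return recent > past * (1 + threshold)

# On a drift signal, a client might shrink its step size or strengthen
# a proximal penalty toward the global model before its next update.
losses = [0.9, 0.8, 0.7, 0.65, 0.6, 0.62, 0.75, 0.85, 0.9, 0.95]
print(detect_drift(losses))  # True: the recent window is clearly worse
```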
2 code implementations • 5 Jan 2021 • Jinlai Zhang, Lyujie Chen, Bo Ouyang, Binbin Liu, Jihong Zhu, Yujing Chen, Yanmei Meng, Danfeng Wu
As 3D point cloud analysis receives increasing attention, the limited scale of point cloud datasets and the weak generalization ability of networks have become prominent problems; a generic augmentation sketch follows this entry.
Ranked #3 on 3D Point Cloud Classification on ModelNet40-C
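A minimal sketch of one common family of point-cloud augmentations that targets exactly these two problems: mixing points from two training clouds to synthesize new samples. This is a generic mixing augmentation written for illustration, not necessarily the paper's exact strategy:

```python
import numpy as np

def mix_point_clouds(cloud_a, cloud_b, ratio=0.5, rng=None):
    """Replace a random fraction of the points in cloud_a with randomly
    chosen points from cloud_b, producing a mixed training sample.
    `ratio` controls how many points are swapped (illustrative default)."""
    rng = np.random.default_rng() if rng is None else rng
    n = cloud_a.shape[0]
    k = int(n * ratio)
    idx = rng.choice(n, size=k, replace=False)
    mixed = cloud_a.copy()
    mixed[idx] = cloud_b[rng.choice(cloud_b.shape[0], size=k, replace=False)]
    return mixed

a = np.random.rand(1024, 3)  # 1024 points with xyz coordinates
b = np.random.rand(1024, 3)
print(mix_point_clouds(a, b).shape)  # (1024, 3)
```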
no code implementations • 12 Oct 2020 • Zheng Chai, Yujing Chen, Ali Anwar, Liang Zhao, Yue Cheng, Huzefa Rangwala
By bridging synchronous and asynchronous training through tiering, FedAT minimizes the straggler effect while improving convergence speed and test accuracy.
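A sketch of the tiering step under simple assumptions: clients are grouped by observed round latency so that similarly fast clients can train synchronously within a tier while tiers report to the server asynchronously. The tier count and latency-based partitioning rule are illustrative, not FedAT's exact procedure:

```python
import numpy as np

def assign_tiers(latencies, num_tiers=3):
    """Partition client indices into tiers of similar round latency:
    synchronous aggregation within a tier, asynchronous across tiers."""
    order = np.argsort(latencies)            # fastest clients first
    splits = np.array_split(order, num_tiers)
    return [list(map(int, tier)) for tier in splits]

latencies = [1.2, 8.5, 0.9, 4.1, 6.7, 1.5, 9.9, 3.3]  # seconds per round
print(assign_tiers(latencies))
# [[2, 0, 5], [7, 3, 4], [1, 6]] -- fast, medium, and slow tiers
```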
no code implementations • 5 Nov 2019 • Yujing Chen, Yue Ning, Martin Slawski, Huzefa Rangwala
In this paper, we present an Asynchronous Online Federated Learning (ASO-Fed) framework, in which edge devices perform online learning on continuously streaming local data while a central server aggregates model parameters from the clients.
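A minimal sketch of asynchronous server-side aggregation: each client update is merged into the global model as soon as it arrives, down-weighted by its staleness. The staleness decay rule and learning rate below are assumptions for illustration, not ASO-Fed's exact update:

```python
def async_aggregate(global_w, client_w, staleness, base_lr=0.5):
    """Merge one client's parameters into the global model on arrival,
    giving stale updates a smaller mixing weight. The 1/(1+staleness)
    decay is an illustrative choice, not the paper's exact rule."""
    alpha = base_lr / (1 + staleness)        # older updates count less
    return {k: (1 - alpha) * global_w[k] + alpha * client_w[k]
            for k in global_w}

global_w = {"w": 0.0, "b": 1.0}
client_w = {"w": 2.0, "b": 0.0}
print(async_aggregate(global_w, client_w, staleness=3))
# {'w': 0.25, 'b': 0.875} with alpha = 0.125
```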
no code implementations • 13 May 2019 • Yujing Chen, Yue Ning, Zheng Chai, Huzefa Rangwala
The attention mechanism of the proposed model extracts feature representations from the input and learns a shared representation focused on the time dimension across multiple sensors.
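A toy sketch of attention over the time axis of a multi-sensor sequence, pooling it into one shared representation. The scoring function here (a plain mean over sensors) is an illustrative stand-in for the model's learned scorer:

```python
import numpy as np

def temporal_attention(x):
    """Softmax-weight the time steps of a multi-sensor sequence and pool
    it into a single shared representation.
    x: array of shape (time_steps, num_sensors)."""
    scores = x.mean(axis=1)                   # one score per time step
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                  # softmax over the time axis
    return weights @ x                        # weighted sum: (num_sensors,)

x = np.random.rand(50, 6)                     # 50 time steps, 6 sensors
print(temporal_attention(x).shape)            # (6,)
```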