no code implementations • 23 Dec 2023 • Juncheng Jia, Ji Liu, Chendi Zhou, Hao Tian, Mianxiong Dong, Dejing Dou
Because the bandwidth between the devices and the server is relatively low, the communication of intermediate data becomes a bottleneck.
no code implementations • 24 Nov 2022 • Ji Liu, Juncheng Jia, Beichen Ma, Chendi Zhou, Jingbo Zhou, Yang Zhou, Huaiyu Dai, Dejing Dou
The system model enables parallel training of multiple jobs, with a cost model based on data fairness and the training time of heterogeneous devices during the parallel training process.
no code implementations • 25 Apr 2022 • Hong Zhang, Ji Liu, Juncheng Jia, Yang Zhou, Huaiyu Dai, Dejing Dou
Despite achieving remarkable performance, Federated Learning (FL) suffers from two critical challenges, i.e., limited computational resources and low training efficiency.
no code implementations • 11 Dec 2021 • Chendi Zhou, Ji Liu, Juncheng Jia, Jingbo Zhou, Yang Zhou, Huaiyu Dai, Dejing Dou
However, the scheduling of devices for multiple jobs with FL remains a critical and open problem.