no code implementations • 4 May 2024 • Zehan Zhu, Yan Huang, Xin Wang, Jinming Xu
In this paper, we propose a differentially private decentralized learning method (termed PrivSGP-VR) that employs stochastic gradient push with variance reduction and guarantees $(\epsilon, \delta)$-differential privacy (DP) for each node.
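Per-node $(\epsilon, \delta)$-DP guarantees of this kind are typically obtained by clipping each local gradient and adding calibrated Gaussian noise before it is shared. The sketch below illustrates that standard mechanism only; it is not the paper's PrivSGP-VR algorithm, and all parameter names are assumptions for illustration.

```python
import numpy as np

def dp_noisy_gradient(grad, clip_norm=1.0, sigma=1.0, rng=None):
    """Illustrative per-node DP step: clip the local gradient to
    `clip_norm`, then add Gaussian noise scaled by `sigma * clip_norm`.
    (Generic Gaussian mechanism, not the paper's exact update.)"""
    rng = rng if rng is not None else np.random.default_rng(0)
    norm = np.linalg.norm(grad)
    clipped = grad * min(1.0, clip_norm / max(norm, 1e-12))
    noise = rng.normal(0.0, sigma * clip_norm, size=grad.shape)
    return clipped + noise

# A gradient of norm 5 is clipped to norm 1 before noise is added.
g = np.array([3.0, 4.0])
noisy = dp_noisy_gradient(g, clip_norm=1.0, sigma=0.1)
```

In a decentralized setting, each node would apply such a step to its own mini-batch gradient before the push/averaging round, so privacy holds even against curious neighbors.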
no code implementations • 21 Jul 2023 • Zehan Zhu, Ye Tian, Yan Huang, Jinming Xu, Shibo He
Perfect synchronization in distributed machine learning is inefficient, and often impossible, due to latency, packet losses, and stragglers.
no code implementations • 8 Jul 2022 • Yan Huang, Ying Sun, Zehan Zhu, Changzhi Yan, Jinming Xu
We develop a general framework that unifies several gradient-based stochastic optimization methods for empirical risk minimization in both centralized and distributed scenarios.
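Such unifying frameworks usually reduce each method to a common descent loop with a pluggable stochastic gradient estimator. The following is a minimal sketch of that idea, not the paper's actual framework; the objective and estimator are assumptions chosen for illustration.

```python
import numpy as np

def run(x0, estimator, steps, lr):
    """Unified template: every method is the same loop
    x <- x - lr * g(x), differing only in the estimator g."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = x - lr * estimator(x)
    return x

# Plain SGD as one instance: exact gradient of f(x) = 0.5*||x||^2
# perturbed by small sampling noise (illustrative stand-in for a mini-batch).
rng = np.random.default_rng(0)
sgd_estimator = lambda x: x + 0.01 * rng.standard_normal(x.shape)

x_final = run([5.0, -3.0], sgd_estimator, steps=500, lr=0.1)
```

Swapping `sgd_estimator` for a variance-reduced or distributed-consensus estimator recovers other members of the family without changing the outer loop, which is the sense in which one analysis can cover many methods.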