no code implementations • 9 Apr 2024 • Guangchen Lan, Dong-Jun Han, Abolfazl Hashemi, Vaneet Aggarwal, Christopher G. Brinton
Moreover, compared to synchronous FedPG, AFedPG improves the time complexity from $\mathcal{O}(\frac{t_{\max}}{N})$ to $\mathcal{O}(\frac{1}{\sum_{i=1}^{N} \frac{1}{t_{i}}})$, where $t_{i}$ denotes the per-iteration time consumption at agent $i$ and $t_{\max} = \max_{i} t_{i}$.
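A hypothetical numeric comparison of the two complexity terms (values chosen purely for illustration, not taken from the paper):

```python
# Hypothetical per-iteration times (seconds) for N = 4 agents; illustrative only.
t = [1.0, 1.2, 1.5, 4.0]
N = len(t)

sync_cost = max(t) / N                        # synchronous FedPG: O(t_max / N)
async_cost = 1.0 / sum(1.0 / ti for ti in t)  # AFedPG: O(1 / sum_i 1/t_i)

print(sync_cost)   # 1.0   -> the slowest agent dictates the synchronous cost
print(async_cost)  # ~0.36 -> stragglers no longer bottleneck the other agents
```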
1 code implementation • 22 Feb 2024 • Wonjeong Choi, Jungwuk Park, Dong-Jun Han, YoungHyun Park, Jaekyun Moon
In this paper, we propose consistency-guided temperature scaling (CTS), a new temperature scaling strategy that can significantly enhance the OOD calibration performance by providing mutual supervision among data samples in the source domains.
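CTS builds on temperature scaling, which divides logits by a scalar $T$ before the softmax. A minimal sketch of the vanilla baseline follows (the consistency-guided, mutually supervised part is not shown, and this is not the authors' code):

```python
import torch

def fit_temperature(logits: torch.Tensor, labels: torch.Tensor) -> float:
    """Vanilla temperature scaling: find a scalar T that minimizes the NLL of
    softmax(logits / T) on held-out data. CTS additionally guides T with
    consistency signals across source-domain samples, which is omitted here."""
    T = torch.nn.Parameter(torch.ones(1))
    opt = torch.optim.LBFGS([T], lr=0.01, max_iter=200)
    nll = torch.nn.CrossEntropyLoss()

    def closure():
        opt.zero_grad()
        loss = nll(logits / T, labels)
        loss.backward()
        return loss

    opt.step(closure)
    return T.item()

# At inference time, calibrated probabilities are softmax(logits / T).
```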
1 code implementation • 5 Feb 2024 • Shahryar Zehtabi, Dong-Jun Han, Rohit Parasnis, Seyyedali Hosseinalipour, Christopher G. Brinton
Decentralized Federated Learning (DFL) has received significant recent research attention, capturing settings where both model updates and model aggregations -- the two key FL processes -- are conducted by the clients.
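As a rough illustration, one decentralized round might look like the gossip-style sketch below (simplified assumptions and hypothetical function names, not the paper's algorithm):

```python
import torch

def average_states(states):
    # Parameter-wise average over a list of model state_dicts.
    return {k: torch.mean(torch.stack([s[k].float() for s in states]), dim=0)
            for k in states[0]}

def dfl_round(client_states, local_update, neighbors):
    """One decentralized round: every client updates its model locally, then
    aggregates with its graph neighbors -- no server performs either step.
    `neighbors[i]` lists the client indices adjacent to client i."""
    trained = [local_update(i, s) for i, s in enumerate(client_states)]
    return [average_states([trained[i]] + [trained[j] for j in neighbors[i]])
            for i in range(len(trained))]
```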
no code implementations • 3 Feb 2024 • Yun-Wei Chu, Dong-Jun Han, Seyyedali Hosseinalipour, Christopher G. Brinton
Most existing federated learning (FL) methodologies have assumed training begins from a randomly initialized model.
no code implementations • 30 Jan 2024 • Liangqi Yuan, Dong-Jun Han, Su Wang, Devesh Upadhyay, Christopher G. Brinton
Multimodal federated learning (FL) aims to enrich model training in FL settings where clients are collecting measurements across multiple modalities.
no code implementations • 15 Jan 2024 • Yun-Wei Chu, Dong-Jun Han, Christopher G. Brinton
Federated learning (FL) is a promising approach for solving multilingual tasks, potentially enabling clients with their own language-specific data to collaboratively construct a high-quality neural machine translation (NMT) model.
no code implementations • 23 Dec 2023 • Dong-Jun Han, Seyyedali Hosseinalipour, David J. Love, Mung Chiang, Christopher G. Brinton
While network coverage maps continue to expand, many devices located in remote areas remain unconnected to terrestrial communication infrastructure, preventing them from accessing the associated data-driven services.
no code implementations • 27 Oct 2023 • Wenzhi Fang, Dong-Jun Han, Christopher G. Brinton
Hierarchical federated learning (HFL) has demonstrated promising scalability advantages over the traditional "star-topology" architecture-based federated learning (FL).
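A minimal sketch of the two-level aggregation that distinguishes HFL from the star topology (illustrative only; function names are hypothetical):

```python
import torch

def average_states(states):
    # Parameter-wise average over a list of model state_dicts.
    return {k: torch.mean(torch.stack([s[k].float() for s in states]), dim=0)
            for k in states[0]}

def hierarchical_round(clients_per_edge):
    """Two-level aggregation: each edge server averages its own clients'
    models, and the cloud then averages the edge-level models. In the star
    topology, every client would instead upload directly to the cloud."""
    edge_states = [average_states(group) for group in clients_per_edge]
    return average_states(edge_states)
```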
no code implementations • 10 Oct 2023 • Liangqi Yuan, Dong-Jun Han, Vishnu Pandi Chellapandi, Stanislaw H. Żak, Christopher G. Brinton
Multimodal federated learning (FL) aims to enrich model training in FL settings where devices are collecting measurements across multiple modalities (e.g., sensors measuring pressure, motion, and other types of data).
no code implementations • 8 Jun 2023 • Jungwuk Park, Dong-Jun Han, Soyeong Kim, Jaekyun Moon
In domain generalization (DG), the target domain is unknown when the model is being trained, and the trained model should successfully work on an arbitrary (and possibly unseen) target domain during inference.
no code implementations • 16 Dec 2022 • Dong-Jun Han, Do-Yeon Kim, Minseok Choi, Christopher G. Brinton, Jaekyun Moon
A fundamental challenge to providing edge-AI services is the need for a machine learning (ML) model that achieves personalization (i.e., to individual clients) and generalization (i.e., to unseen data) properties concurrently.
no code implementations • NeurIPS 2021 • Jungwuk Park, Dong-Jun Han, Minseok Choi, Jaekyun Moon
While federated learning (FL) allows efficient model training with local data at edge devices, major issues still to be resolved include slow devices, known as stragglers, and malicious attacks launched by adversaries.
no code implementations • 29 Sep 2021 • Dong-Jun Han, Hasnain Irshad Bhatti, Jungmoon Lee, Jaekyun Moon
Federated learning (FL) operates based on model exchanges between the server and the clients, and suffers from significant communication and client-side computation burdens.
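To make the communication burden concrete, a back-of-the-envelope estimate (the model choice and numbers are illustrative, not from the paper):

```python
import torchvision

# Per-round communication for a single client in vanilla FL, assuming the full
# float32 model is downloaded and then uploaded back every round.
model = torchvision.models.resnet18(num_classes=10)
n_params = sum(p.numel() for p in model.parameters())
mb_per_round = 2 * n_params * 4 / 1e6  # download + upload, 4 bytes per weight
print(f"{n_params:,} parameters -> ~{mb_per_round:.0f} MB per client per round")
```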
no code implementations • NeurIPS 2021 • YoungHyun Park, Dong-Jun Han, Do-Yeon Kim, Jun Seo, Jaekyun Moon
Among the central issues that may limit widespread adoption of FL are the significant communication resources required to exchange updated model parameters between the server and individual clients over many communication rounds.
no code implementations • 1 Jan 2021 • Dong-Jun Han, Minseok Choi, Jungwuk Park, Jaekyun Moon
Our key idea is to utilize the devices located in the overlapping coverage areas of the edge servers: in the model-downloading stage, these devices receive multiple models from different edge servers, average the received models, and then update the resulting model with their local data.
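A minimal sketch of that step, under simplified assumptions and with hypothetical function names:

```python
import copy
import torch

def overlap_device_update(received_models, data_loader, lr=0.01):
    """A device in an overlapping coverage area: average the models received
    from multiple edge servers, then update the averaged model locally."""
    # 1) Parameter-wise average of the received models.
    model = copy.deepcopy(received_models[0])
    with torch.no_grad():
        for name, p in model.named_parameters():
            p.copy_(torch.stack([dict(m.named_parameters())[name]
                                 for m in received_models]).mean(dim=0))
    # 2) Local update with the device's own data.
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss()
    for x, y in data_loader:
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()
    return model
```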
no code implementations • 1 Jan 2021 • Jungwuk Park, Dong-Jun Han, Minseok Choi, Jaekyun Moon
While federated learning allows efficient model training with local data at edge devices, two major issues that need to be resolved are slow devices, known as stragglers, and malicious attacks launched by adversaries.
no code implementations • 10 Dec 2020 • Beongjun Choi, Jy-yong Sohn, Dong-Jun Han, Jaekyun Moon
Through extensive real-world experiments, we demonstrate that our scheme, using only $20 \sim 30\%$ of the resources required in the conventional scheme, maintains virtually the same levels of reliability and data privacy in practical federated learning systems.
no code implementations • NeurIPS 2020 • Jy-yong Sohn, Dong-Jun Han, Beongjun Choi, Jaekyun Moon
Recent advances in large-scale distributed learning algorithms have enabled communication-efficient training via SignSGD.
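For context, a minimal sketch of the SignSGD-with-majority-vote update that this line of work builds on (simplified; the parameter/gradient layout is an assumption, and this is not the paper's coding scheme):

```python
import torch

def signsgd_majority_step(params, worker_grads, lr=1e-3):
    """SignSGD with majority vote: each worker transmits only the sign of its
    gradient (1 bit per coordinate); the server applies the sign of the
    coordinate-wise vote. `worker_grads` is a list (one entry per worker) of
    gradient-tensor lists aligned with `params`."""
    for i, p in enumerate(params):
        votes = torch.stack([torch.sign(g[i]) for g in worker_grads]).sum(dim=0)
        p.data -= lr * torch.sign(votes)
```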