no code implementations • 23 Dec 2023 • Dong-Jun Han, Seyyedali Hosseinalipour, David J. Love, Mung Chiang, Christopher G. Brinton
While network coverage maps continue to expand, many devices located in remote areas remain unconnected to terrestrial communication infrastructures, preventing them from accessing the associated data-driven services.
no code implementations • 7 Nov 2023 • Su Wang, Roberto Morabito, Seyyedali Hosseinalipour, Mung Chiang, Christopher G. Brinton
Our optimization methodology aims to select the best combination of sampled nodes and data offloading configuration to maximize FedL training accuracy while minimizing data processing and D2D communication resource consumption subject to realistic constraints on the network topology and device capabilities.
no code implementations • 27 Oct 2023 • Roberto Morabito, Mallik Tatipamula, Sasu Tarkoma, Mung Chiang
The network edge's role in Artificial Intelligence (AI) inference processing is rapidly expanding, driven by a plethora of applications seeking computational advantages.
no code implementations • 22 May 2023 • Zhan-Lun Chang, Seyyedali Hosseinalipour, Mung Chiang, Christopher G. Brinton
Our analysis sheds light on the joint impact of device training variables (e.g., number of local gradient descent steps), asynchronous scheduling decisions (i.e., when a device trains a task), and dynamic data drifts on the performance of ML training for different tasks.
no code implementations • 15 Mar 2023 • Su Wang, Seyyedali Hosseinalipour, Vaneet Aggarwal, Christopher G. Brinton, David J. Love, Weifeng Su, Mung Chiang
Federated learning (FL) has been promoted as a popular technique for training machine learning (ML) models over edge/fog networks.
no code implementations • 17 Aug 2022 • Hung T. Nguyen, Steven Bottone, Kwang Taik Kim, Mung Chiang, H. Vincent Poor
Symbol detection is a fundamental and challenging problem in modern communication systems, e.g., in the multiuser multiple-input multiple-output (MIMO) setting.
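For context on the detection problem itself, a minimal zero-forcing baseline for BPSK symbols can be sketched as follows. This is a classical reference detector, not the learning-based method this paper proposes; the dimensions and noise level are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n_users, n_antennas = 4, 8
H = rng.normal(size=(n_antennas, n_users))      # channel matrix (assumed known)
x = rng.choice([-1.0, 1.0], size=n_users)       # transmitted BPSK symbols
y = H @ x + 0.05 * rng.normal(size=n_antennas)  # received signal plus noise

# Zero-forcing detection: invert the channel with a pseudoinverse,
# then quantize each entry to the nearest constellation point.
x_hat = np.sign(np.linalg.pinv(H) @ y)
print(np.array_equal(x_hat, x))  # True at this low noise level
```

At higher noise levels zero-forcing degrades quickly, which is one motivation for the learned detectors studied in work like this.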
no code implementations • 4 Aug 2022 • Satyavrat Wagle, Seyyedali Hosseinalipour, Naji Khosravan, Mung Chiang, Christopher G. Brinton
In most of the current literature, FL has been studied for supervised ML tasks, in which edge devices collect labeled data.
no code implementations • 26 Mar 2022 • Bhargav Ganguly, Seyyedali Hosseinalipour, Kwang Taik Kim, Christopher G. Brinton, Vaneet Aggarwal, David J. Love, Mung Chiang
CE-FL also introduces a floating aggregation point, where the local models generated at the devices and the servers are aggregated at an edge server, which varies from one model training round to another to cope with network evolution in terms of data distribution and users' mobility.
no code implementations • 23 Mar 2022 • Hung T. Nguyen, H. Vincent Poor, Mung Chiang
However, existing algorithms face issues with slow convergence and/or robustness of performance due to the considerable heterogeneity in data distributions and in computation and communication capabilities at the edge.
no code implementations • 7 Feb 2022 • Seyyedali Hosseinalipour, Su Wang, Nicolo Michelusi, Vaneet Aggarwal, Christopher G. Brinton, David J. Love, Mung Chiang
PSL considers the realistic scenario where global aggregations are conducted with idle times in between them to improve resource efficiency, and incorporates data dispersion and model dispersion with local model condensation into FedL.
no code implementations • 21 Dec 2021 • Hung T. Nguyen, Steven Bottone, Kwang Taik Kim, Mung Chiang, H. Vincent Poor
To demonstrate the performance of our framework, we combine it with recent neural decoders and show improved performance compared to the original models and traditional decoding algorithms on various codes.
no code implementations • 21 Dec 2021 • Hung T. Nguyen, Roberto Morabito, Kwang Taik Kim, Mung Chiang
Edge computing has revolutionized the world of mobile and wireless networks thanks to its flexible, secure, and high-performance characteristics.
no code implementations • 29 Jun 2021 • Su Wang, Seyyedali Hosseinalipour, Maria Gorlatova, Christopher G. Brinton, Mung Chiang
The presence of time-varying data heterogeneity and computational resource inadequacy among device clusters motivate four key parts of our methodology: (i) stratified UAV swarms of leader, worker, and coordinator UAVs, (ii) hierarchical nested personalized federated learning (HN-PFL), a distributed ML framework for personalized model training across the worker-leader-core network hierarchy, (iii) cooperative UAV resource pooling to address computational inadequacy of devices by conducting model training among the UAV swarms, and (iv) model/concept drift to model time-varying data distributions.
2 code implementations • ICLR 2022 • Vikash Sehwag, Saeed Mahloujifar, Tinashe Handina, Sihui Dai, Chong Xiang, Mung Chiang, Prateek Mittal
We circumvent this challenge by using additional data from proxy distributions learned by advanced generative models.
3 code implementations • ICLR 2021 • Vikash Sehwag, Mung Chiang, Prateek Mittal
We demonstrate that SSD outperforms most existing detectors based on unlabeled data by a large margin.
no code implementations • 4 Jan 2021 • Su Wang, Mengyuan Lee, Seyyedali Hosseinalipour, Roberto Morabito, Mung Chiang, Christopher G. Brinton
The conventional federated learning (FedL) architecture distributes machine learning (ML) across worker devices by having them train local models that are periodically aggregated by a server.
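The conventional FedL loop described here can be sketched in a few lines. The sketch below assumes a FedAvg-style weighted average and a toy least-squares task on synthetic data; the local optimizer, learning rate, and step counts are illustrative assumptions, not the paper's specific configuration.

```python
import numpy as np

def local_update(weights, data, lr=0.1, steps=5):
    """Hypothetical local training: a few gradient steps on a least-squares loss."""
    X, y = data
    w = weights.copy()
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def federated_round(global_w, client_data):
    """One FedL round: each worker trains locally, the server averages."""
    local_ws = [local_update(global_w, d) for d in client_data]
    sizes = np.array([len(d[1]) for d in client_data], dtype=float)
    # Aggregate local models, weighting each by its local dataset size.
    return np.average(local_ws, axis=0, weights=sizes)

rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    clients.append((X, X @ true_w))  # noiseless synthetic labels

w = np.zeros(2)
for _ in range(30):
    w = federated_round(w, clients)
print(np.round(w, 2))  # converges toward true_w
```

The device-selection and aggregation-placement questions raised in these papers concern exactly which clients enter `federated_round` and where the averaging step runs.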
1 code implementation • 19 Oct 2020 • Francesco Croce, Maksym Andriushchenko, Vikash Sehwag, Edoardo Debenedetti, Nicolas Flammarion, Mung Chiang, Prateek Mittal, Matthias Hein
As a research community, we are still lacking a systematic understanding of the progress on adversarial robustness which often makes it hard to identify the most promising ideas in training robust models.
no code implementations • 26 Jul 2020 • Hung T. Nguyen, Vikash Sehwag, Seyyedali Hosseinalipour, Christopher G. Brinton, Mung Chiang, H. Vincent Poor
In this paper, we propose a fast-convergent federated learning algorithm, called FOLB, which performs intelligent sampling of devices in each round of model training to optimize the expected convergence speed.
no code implementations • 24 Jun 2020 • Vikash Sehwag, Rajvardhan Oak, Mung Chiang, Prateek Mittal
With increasing expressive power, deep neural networks have significantly improved the state-of-the-art on image classification datasets, such as ImageNet.
no code implementations • 7 Jun 2020 • Seyyedali Hosseinalipour, Christopher G. Brinton, Vaneet Aggarwal, Huaiyu Dai, Mung Chiang
There are several challenges with employing conventional federated learning in contemporary networks, due to the significant heterogeneity in compute and communication capabilities that exist across devices.
no code implementations • 16 Dec 2019 • Nawanol Theera-Ampornpunt, Shikhar Suryavansh, Sameer Manchanda, Rajesh Panta, Kaustubh Joshi, Mostafa Ammar, Mung Chiang, Saurabh Bagchi
AppStreamer can, therefore, keep only a small part of the files on the device, akin to a "cache", and download the remainder from a cloud storage server or a nearby edge server when it predicts that the app will need them in the near future.
no code implementations • 5 May 2019 • Vikash Sehwag, Arjun Nitin Bhagoji, Liwei Song, Chawin Sitawarin, Daniel Cullina, Mung Chiang, Prateek Mittal
A large body of recent work has investigated the phenomenon of evasion attacks using adversarial examples for deep learning systems, where the addition of norm-bounded perturbations to the test inputs leads to incorrect output classification.
no code implementations • ICML 2018 • Andrew S. Lan, Mung Chiang, Christoph Studer
The Rasch model is widely used for item response analysis in applications ranging from recommender systems to psychology, education, and finance.
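The Rasch model itself is compact: the probability of a correct response depends only on the gap between respondent ability and item difficulty through a logistic link. A minimal sketch:

```python
import math

def rasch_prob(theta, b):
    """Rasch model: probability that a respondent with ability `theta`
    answers an item of difficulty `b` correctly."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# A respondent whose ability equals the item difficulty succeeds
# with probability exactly 0.5.
print(rasch_prob(theta=1.0, b=1.0))  # 0.5
```

Fitting the model means estimating the abilities and difficulties from observed binary response data, which is where the analysis in papers like this one comes in.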
1 code implementation • 18 Feb 2018 • Chawin Sitawarin, Arjun Nitin Bhagoji, Arsalan Mosenia, Mung Chiang, Prateek Mittal
In this paper, we propose and examine security attacks against sign recognition systems for Deceiving Autonomous caRs with Toxic Signs (we call the proposed attacks DARTS).
no code implementations • 1 Feb 2018 • Andrew S. Lan, Mung Chiang, Christoph Studer
We showcase the efficacy of our methods and results for a number of synthetic and real-world datasets, which demonstrates that linearized binary regression finds potential use in a variety of inference, estimation, signal processing, and machine learning applications that deal with binary-valued observations or measurements.
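The binary-observation setting can be illustrated with a simple correlation-based linear estimator: observations are signs of linear measurements, and averaging the measurement vectors signed by those observations recovers the direction of the unknown parameter. This is a generic one-bit sketch under Gaussian-measurement assumptions, not the paper's specific estimator.

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 2000, 5
x = rng.normal(size=n)
x /= np.linalg.norm(x)        # one-bit observations lose scale; fix the norm
A = rng.normal(size=(m, n))   # Gaussian measurement matrix (assumption)
y = np.sign(A @ x)            # binary-valued observations

# Linear estimator: average measurement rows signed by the observations,
# then renormalize to the unit sphere.
x_hat = A.T @ y
x_hat /= np.linalg.norm(x_hat)
print(round(float(x_hat @ x), 2))  # inner product close to 1: directions align
```

Only the direction of `x` is identifiable from sign observations, which is why both vectors are normalized before comparison.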
1 code implementation • 9 Jan 2018 • Chawin Sitawarin, Arjun Nitin Bhagoji, Arsalan Mosenia, Prateek Mittal, Mung Chiang
Our attack pipeline generates adversarial samples which are robust to the environmental conditions and noisy image transformations present in the physical world.
no code implementations • 27 Jun 2014 • Felix Ming Fai Wong, Zhenming Liu, Mung Chiang
We revisit the problem of predicting directional movements of stock prices based on news articles: here our algorithm uses daily articles from The Wall Street Journal to predict the closing stock prices on the same day.
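The news-to-direction pipeline can be illustrated with a tiny bag-of-words model. The corpus, labels, and ridge-regression classifier below are all hypothetical stand-ins, not the paper's data or algorithm; real WSJ articles and the authors' method would replace them.

```python
import numpy as np
from collections import Counter

# Toy corpus: (article text, 1 if the price closed up, -1 if down).
# Hypothetical headlines standing in for daily WSJ articles.
docs = [
    ("profits surge on strong earnings", 1),
    ("record revenue growth reported", 1),
    ("shares plunge after weak earnings", -1),
    ("losses widen amid slowing demand", -1),
]
vocab = sorted({w for text, _ in docs for w in text.split()})

def featurize(text):
    """Bag-of-words vector over the fixed vocabulary."""
    counts = Counter(text.split())
    return np.array([counts[w] for w in vocab], dtype=float)

X = np.array([featurize(t) for t, _ in docs])
y = np.array([label for _, label in docs], dtype=float)

# Ridge-regularized least squares as a stand-in linear classifier.
w = np.linalg.solve(X.T @ X + 0.1 * np.eye(len(vocab)), X.T @ y)
pred = np.sign(featurize("earnings surge") @ w)
print(pred)  # 1.0: bullish wording maps to an "up" prediction
```

The sign of the fitted score plays the role of the predicted directional movement; the interesting modeling questions are in the text features and labels, which this sketch deliberately keeps trivial.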