1 code implementation • 3 Apr 2024 • Vibhor Agarwal, Aravindh Raman, Nishanth Sastry, Ahmed M. Abdelmoniem, Gareth Tyson, Ignacio Castro
Recent work has exploited the conversational context of a post to improve automatic toxicity tagging, e.g., using the replies to a post to help classify whether it contains toxic speech.
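A minimal sketch of the general idea, not the paper's actual graph-based method: score a post alone versus with its replies appended, using an off-the-shelf toxicity classifier. The model name and the simple concatenation strategy are illustrative assumptions.

```python
# Sketch: context-aware toxicity tagging by appending reply context.
# Requires: pip install transformers torch
from transformers import pipeline

# "unitary/toxic-bert" is an assumed stand-in for any toxicity classifier.
clf = pipeline("text-classification", model="unitary/toxic-bert")

post = "You have no idea what you're talking about."
replies = ["Please stop insulting people.", "Reported for harassment."]

post_only = clf(post)[0]                                  # post in isolation
with_context = clf(post + " " + " ".join(replies))[0]     # post plus replies
print(post_only, with_context)
```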
no code implementations • 8 Feb 2024 • Mohammed Aljahdali, Ahmed M. Abdelmoniem, Marco Canini, Samuel Horváth
In Federated Learning (FL), forgetting, or the loss of knowledge across rounds, hampers algorithm convergence, particularly in the presence of severe data heterogeneity among clients.
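One common way to quantify forgetting across rounds, sketched below under assumptions of our own (the paper may define it differently): the drop in the global model's accuracy on a client's data relative to the best accuracy achieved in any earlier round.

```python
# Hedged sketch: per-client forgetting as the accuracy drop from the
# best round seen so far. All names here are illustrative.
from collections import defaultdict

best_acc = defaultdict(float)  # client_id -> best accuracy observed so far

def forgetting_after_round(client_accs):
    """client_accs: dict client_id -> accuracy of the current global model."""
    scores = {}
    for cid, acc in client_accs.items():
        scores[cid] = max(0.0, best_acc[cid] - acc)   # knowledge lost
        best_acc[cid] = max(best_acc[cid], acc)
    return scores

print(forgetting_after_round({"a": 0.80, "b": 0.60}))  # round 1: no forgetting
print(forgetting_after_round({"a": 0.70, "b": 0.65}))  # client "a" forgot 0.10
```

Under severe data heterogeneity, aggregation tends to overwrite what the model learned on minority clients, so these per-client scores grow even while average accuracy looks stable.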
no code implementations • 24 Dec 2023 • Sofia Zahri, Hajar Bennouri, Ahmed M. Abdelmoniem
This paper showcases two illustrative scenarios that highlight the potential of federated learning (FL) as a key enabler of efficient and privacy-preserving machine learning within IoT networks.
no code implementations • 7 Aug 2023 • Karan Gadgil, Sukhpal Singh Gill, Ahmed M. Abdelmoniem
Companies across the globe are keen to target potential high-value customers in an attempt to expand revenue, which can only be achieved by understanding those customers better.
no code implementations • 7 Aug 2023 • Karan Pardeshi, Sukhpal Singh Gill, Ahmed M. Abdelmoniem
In this paper, we focus on the second aspect and build a model that predicts future prices with minimal error.
no code implementations • 19 Jun 2023 • Ahmed M. Abdelmoniem
With mobile, IoT and sensor devices becoming pervasive in our lives, and with recent advances in Edge Computational Intelligence (e.g., Edge AI/ML), it has become evident that traditional methods for training AI/ML models are becoming obsolete, especially given growing concerns over privacy and security.
1 code implementation • 9 Aug 2022 • Amna Arouj, Ahmed M. Abdelmoniem
To address this issue, we develop EAFL, an energy-aware FL client-selection method that accounts for device energy consumption to maximize the participation of heterogeneous target devices.
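A minimal sketch in the spirit of energy-aware selection, not EAFL's actual algorithm: admit only clients whose remaining battery can cover the estimated cost of a round, and prefer those with the most headroom. The cost model, reserve threshold, and field names are assumptions.

```python
# Hedged sketch of energy-aware client selection.
import random

def select_clients(clients, num_select, min_reserve=0.2):
    """clients: list of dicts with 'battery' and 'round_cost' in [0, 1]."""
    eligible = [c for c in clients
                if c["battery"] - c["round_cost"] >= min_reserve]
    # Prefer clients with the most energy headroom, so low-battery devices
    # are spared now and can still participate in later rounds.
    eligible.sort(key=lambda c: c["battery"] - c["round_cost"], reverse=True)
    return eligible[:num_select]

clients = [{"id": i, "battery": random.random(), "round_cost": 0.1}
           for i in range(20)]
print([c["id"] for c in select_clients(clients, 5)])
```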
1 code implementation • 1 Nov 2021 • Ahmed M. Abdelmoniem, Atal Narayan Sahu, Marco Canini, Suhaib A. Fahmy
Federated Learning (FL) enables distributed training by learners using local data, thereby enhancing privacy and reducing communication.
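The core aggregation step can be illustrated with a FedAvg-style sketch: clients train locally and only model parameters travel to the server, so raw data stays on-device. Plain NumPy; the structure is a generic illustration, not the paper's code.

```python
# Minimal FedAvg-style aggregation sketch.
import numpy as np

def fedavg(client_weights, client_sizes):
    """Average per-client parameters, weighted by local dataset size."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Toy example: three clients, each holding a 4-parameter "model".
weights = [np.random.randn(4) for _ in range(3)]
sizes = [100, 50, 250]  # number of local samples per client
print(fedavg(weights, sizes))
```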
no code implementations • NeurIPS 2021 • Atal Narayan Sahu, Aritra Dutta, Ahmed M. Abdelmoniem, Trambak Banerjee, Marco Canini, Panos Kalnis
Unlike the Top-$k$ sparsifier, the hard-threshold sparsifier has the same asymptotic convergence and linear speedup properties as SGD in the convex case, and its convergence is unaffected by data heterogeneity in the non-convex case.
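The two sparsifiers being contrasted are easy to state in code: Top-$k$ keeps a fixed *number* of coordinates, while hard-threshold keeps every coordinate whose magnitude exceeds a fixed threshold $\lambda$, so the number of transmitted elements adapts to the gradient. A generic sketch follows.

```python
# Sketch of the two gradient sparsifiers compared in the paper.
import numpy as np

def top_k(grad, k):
    """Keep exactly the k largest-magnitude coordinates."""
    out = np.zeros_like(grad)
    idx = np.argpartition(np.abs(grad), -k)[-k:]
    out[idx] = grad[idx]
    return out

def hard_threshold(grad, lam):
    """Keep every coordinate whose magnitude is at least lam."""
    return np.where(np.abs(grad) >= lam, grad, 0.0)

g = np.array([0.9, -0.05, 0.4, -0.7, 0.01])
print(top_k(g, 2))             # exactly 2 nonzeros, always
print(hard_threshold(g, 0.3))  # 3 nonzeros here; the count varies with g
```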
no code implementations • 15 Feb 2021 • Ahmed M. Abdelmoniem, Chen-Yu Ho, Pantelis Papageorgiou, Muhammad Bilal, Marco Canini
Federated learning (FL) is becoming a popular paradigm for collaborative learning over distributed, private datasets owned by non-trusting entities.
1 code implementation • 26 Jan 2021 • Ahmed M. Abdelmoniem, Ahmed Elzanaty, Mohamed-Slim Alouini, Marco Canini
Many proposals exploit the compressibility of gradients, introducing lossy compression techniques to speed up the communication stage of distributed training.
1 code implementation • 19 Nov 2019 • Aritra Dutta, El Houcine Bergou, Ahmed M. Abdelmoniem, Chen-Yu Ho, Atal Narayan Sahu, Marco Canini, Panos Kalnis
Compressed communication, in the form of sparsification or quantization of stochastic gradients, is employed to reduce communication costs in distributed data-parallel training of deep neural networks.
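As a concrete example of the quantization family mentioned here, below is a QSGD-style stochastic quantizer: each coordinate is rounded to one of a few magnitude levels, with probabilities chosen so the result is an unbiased estimate of the gradient. This is a generic sketch of the technique, not the specific scheme from the paper.

```python
# Hedged sketch: QSGD-style unbiased stochastic quantization to s levels.
import numpy as np

def stochastic_quantize(grad, s=4):
    """Quantize each coordinate to one of s+1 magnitude levels."""
    norm = np.linalg.norm(grad)
    if norm == 0:
        return grad
    level = np.abs(grad) / norm * s       # real-valued level in [0, s]
    lower = np.floor(level)
    # Round up with probability equal to the fractional part (unbiased).
    prob_up = level - lower
    quantized = lower + (np.random.rand(*grad.shape) < prob_up)
    return np.sign(grad) * norm * quantized / s

g = np.random.randn(8)
print(g)
print(stochastic_quantize(g))  # unbiased estimate of g, fewer distinct values
```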