no code implementations • 1 Mar 2024 • Adiba Orzikulova, Diana A. Vasile, Fahim Kawsar, Chulhong Min
As wearable devices become increasingly miniaturized and powerful, a new opportunity arises for instant and dynamic device-to-device collaboration and human-to-device interaction.
no code implementations • 25 Jan 2024 • Chulhong Min, Juheon Yi, Utku Gunay Acer, Fahim Kawsar
Overlapping cameras offer exciting opportunities to view a scene from different angles, allowing for more advanced, comprehensive and robust analysis.
no code implementations • 4 Jan 2024 • Chi Ian Tang, Lorena Qendro, Dimitris Spathis, Fahim Kawsar, Akhil Mathur, Cecilia Mascolo
These schemes re-purpose contrastive learning for knowledge retention, and Kaizen combines it with self-training in a unified scheme that can leverage both unlabelled and labelled data for continual learning.
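The combination described above can be sketched as a two-term objective: a retention term that keeps the current model close to a frozen copy of the previous model, plus a self-training term on the model's own pseudo-labels. This is a minimal illustration of the idea, not Kaizen's actual loss; the weighting, teacher, and pseudo-labelling details here are assumptions.

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    s = sum(exps)
    return [e / s for e in exps]

def cross_entropy(p_target, p_pred):
    """Cross-entropy between a target distribution and predicted probabilities."""
    return -sum(t * math.log(max(q, 1e-12)) for t, q in zip(p_target, p_pred))

def argmax(v):
    return max(range(len(v)), key=lambda i: v[i])

def kaizen_style_loss(old_logits, new_logits, retention_weight=0.5):
    # Hypothetical combination: retention_weight and the hard pseudo-label
    # scheme are illustrative choices, not taken from the paper.
    p_old = softmax(old_logits)   # frozen previous model (teacher)
    p_new = softmax(new_logits)   # current model (student)
    # Knowledge-retention term: stay close to the old model's predictions.
    retention = cross_entropy(p_old, p_new)
    # Self-training term: fit a one-hot pseudo-label from the current prediction.
    k = argmax(p_new)
    pseudo = [1.0 if i == k else 0.0 for i in range(len(p_new))]
    self_training = cross_entropy(pseudo, p_new)
    return retention_weight * retention + (1 - retention_weight) * self_training
```

Both terms need only unlabelled data, which matches the abstract's claim that the unified scheme can exploit unlabelled streams during continual learning.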
no code implementations • 3 Jan 2024 • Sofia Yfantidou, Dimitris Spathis, Marios Constantinides, Athena Vakali, Daniele Quercia, Fahim Kawsar
Self-supervised learning (SSL) has become the de facto training paradigm of large models where pre-training is followed by supervised fine-tuning using domain-specific data and labels.
no code implementations • 11 Dec 2023 • Taesik Gong, Si Young Jang, Utku Günay Acer, Fahim Kawsar, Chulhong Min
The advent of tiny AI accelerators opens opportunities for deep neural network deployment at the extreme edge, offering reduced latency, lower power cost, and improved privacy in on-device ML inference.
1 code implementation • 20 Oct 2023 • Mohammad Malekzadeh, Fahim Kawsar
In split inference, a deep neural network (DNN) is partitioned to run the early part of the DNN at the edge and the later part of the DNN in the cloud.
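The partitioning described above can be sketched with a toy two-stage model in pure Python; the layer sizes, weights, and split point below are illustrative, not taken from the paper. Only the intermediate representation crosses the device-to-cloud boundary.

```python
def relu(v):
    return [max(0.0, x) for x in v]

def affine(v, w, b):
    """Compute w @ v + b, with w given as a list of rows."""
    return [sum(wi * xi for wi, xi in zip(row, v)) + bi
            for row, bi in zip(w, b)]

# Early part of the DNN: runs on the edge device.
def edge_part(x):
    w1 = [[0.5, -0.2], [0.1, 0.8]]
    b1 = [0.0, 0.1]
    return relu(affine(x, w1, b1))

# Later part of the DNN: runs in the cloud.
def cloud_part(h):
    w2 = [[1.0, -1.0]]
    b2 = [0.2]
    return affine(h, w2, b2)

x = [1.0, 2.0]
h = edge_part(x)    # computed on-device; only this tensor leaves the device
y = cloud_part(h)   # the cloud completes the inference
print(y)
```

In practice the split point is chosen to trade off on-device compute, uplink bandwidth for the intermediate tensor, and how much of the raw input is exposed to the cloud.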
no code implementations • 12 Sep 2023 • Dimitris Spathis, Fahim Kawsar
Here, we discuss recent works that employ LLMs for human-centric tasks such as mobile health sensing, and present a case study showing that popular LLMs tokenize temporal data incorrectly.
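The kind of failure the case study points at can be illustrated with a toy greedy longest-match tokenizer over a made-up vocabulary (this is not the paper's experiment, and the vocabulary is hypothetical): two sensor readings of the same magnitude can be fragmented into different token shapes, which obscures their numeric structure.

```python
# Made-up subword vocabulary for illustration only.
VOCAB = {"72", "7", "2", "3", "5", "."}

def tokenize(text):
    """Greedy longest-match tokenization, a crude stand-in for BPE encoding."""
    tokens, i = [], 0
    while i < len(text):
        for length in range(min(3, len(text) - i), 0, -1):
            piece = text[i:i + length]
            if piece in VOCAB:
                tokens.append(piece)
                i += length
                break
        else:
            tokens.append(text[i])  # unknown character falls back to itself
            i += 1
    return tokens

print(tokenize("72.5"))  # ['72', '.', '5']      -- leading digits fused
print(tokenize("73.5"))  # ['7', '3', '.', '5']  -- a nearby value split digit by digit
```

Because "72" happens to be in the vocabulary but "73" is not, two adjacent readings get inconsistent token boundaries, so the model sees structurally different inputs for numerically similar values.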
1 code implementation • 31 Jul 2023 • Shohreh Deldari, Dimitris Spathis, Mohammad Malekzadeh, Fahim Kawsar, Flora Salim, Akhil Mathur
Limited availability of labeled data for machine learning on multimodal time-series severely hampers progress in the field.
no code implementations • 15 May 2023 • Nicole Lai, Marios Philiastides, Fahim Kawsar, Fani Deligianni
In particular, the direct interaction of the auditory system with the motor and reward systems via a predictive framework explains the efficacy of music-based interventions in motor rehabilitation.
1 code implementation • 30 Mar 2023 • Chi Ian Tang, Lorena Qendro, Dimitris Spathis, Fahim Kawsar, Cecilia Mascolo, Akhil Mathur
Kaizen is able to balance the trade-off between knowledge retention and learning from new data with an end-to-end model, paving the way for practical deployment of continual learning systems.
no code implementations • 27 Mar 2023 • Sofia Yfantidou, Marios Constantinides, Dimitris Spathis, Athena Vakali, Daniele Quercia, Fahim Kawsar
The field of mobile and wearable computing is undergoing a revolutionary integration of machine learning.
1 code implementation • 8 Nov 2022 • Fan Mo, Mohammad Malekzadeh, Soumyajit Chatterjee, Fahim Kawsar, Akhil Mathur
Federated learning (FL) in multidevice environments creates new opportunities to learn from a vast and diverse amount of private data.
1 code implementation • 23 May 2022 • Ekdeep Singh Lubana, Chi Ian Tang, Fahim Kawsar, Robert P. Dick, Akhil Mathur
Federated learning is generally used in tasks where labels are readily available (e.g., next-word prediction).
no code implementations • 17 Feb 2022 • Hyunsung Cho, Akhil Mathur, Fahim Kawsar
Federated Learning (FL) enables distributed training of machine learning models while keeping personal data on user devices private.
no code implementations • 1 Feb 2022 • Yash Jain, Chi Ian Tang, Chulhong Min, Fahim Kawsar, Akhil Mathur
In this paper, we extend this line of research and present a novel technique called Collaborative Self-Supervised Learning (ColloSSL) which leverages unlabeled data collected from multiple devices worn by a user to learn high-quality features of the data.
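The core idea can be sketched as a contrastive objective in which time-aligned embeddings from two devices worn by the same user form a positive pair, while embeddings from other time steps serve as negatives. The embeddings and the InfoNCE-style loss below are illustrative assumptions; ColloSSL's actual device and sample selection is more involved.

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def info_nce(anchor, positive, negatives, temperature=0.1):
    """InfoNCE-style loss: similarity of the anchor to its positive,
    normalized over the positive and all negatives."""
    logits = [dot(anchor, positive) / temperature]
    logits += [dot(anchor, n) / temperature for n in negatives]
    denom = sum(math.exp(l) for l in logits)
    return -math.log(math.exp(logits[0]) / denom)

# Hypothetical embeddings of the same activity seen by two worn devices.
wrist = [0.9, 0.1]             # wrist device at time t (anchor)
pocket_same_t = [0.8, 0.2]     # pocket device at the same t (positive pair)
pocket_other_t = [[0.1, 0.9]]  # pocket device at other times (negatives)

loss = info_nce(wrist, pocket_same_t, pocket_other_t)
print(loss)  # small, since the time-aligned pair is already the most similar
```

Minimizing this loss pulls time-aligned multi-device views together in embedding space without any activity labels, which is what lets the unlabeled multi-device data supervise feature learning.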
1 code implementation • 19 Jan 2022 • Wiebke Toussaint, Aaron Yi Ding, Fahim Kawsar, Akhil Mathur
Billions of distributed, heterogeneous and resource-constrained IoT devices deploy on-device machine learning (ML) for private, fast and offline inference on personal data.
no code implementations • 8 Sep 2021 • Chulhong Min, Akhil Mathur, Utku Gunay Acer, Alessandro Montanari, Fahim Kawsar
We present SensiX++, a multi-tenant runtime for adaptive model execution with integrated MLOps on edge devices, e.g., a camera, a microphone, or IoT sensors.
no code implementations • 1 Jan 2021 • Akhil Mathur, Shaoduo Gan, Anton Isopoussu, Fahim Kawsar, Nadia Berthouze, Nicholas Donald Lane
Breakthroughs in unsupervised domain adaptation (uDA) have opened up the possibility of adapting models from a label-rich source domain to unlabeled target domains.
no code implementations • 4 Dec 2020 • Chulhong Min, Akhil Mathur, Alessandro Montanari, Utku Gunay Acer, Fahim Kawsar
The emergence of multiple sensory devices on or near a human body is uncovering new dynamics of extreme edge computing.
1 code implementation • 6 Sep 2020 • Akhil Mathur, Fahim Kawsar, Nadia Berthouze, Nicholas D. Lane
This paper introduces a new dataset, Libri-Adapt, to support unsupervised domain adaptation research on speech recognition models.
no code implementations • 27 Mar 2020 • Akhil Mathur, Anton Isopoussu, Fahim Kawsar, Nadia Berthouze, Nicholas D. Lane
A major challenge in building systems that combine audio models with commodity microphones is to guarantee their accuracy and robustness in the real world.
no code implementations • 25 Sep 2019 • Akhil Mathur, Shaoduo Gan, Anton Isopoussu, Fahim Kawsar, Nadia Berthouze, Nicholas D. Lane
Despite the recent breakthroughs in unsupervised domain adaptation (uDA), no prior work has studied the challenges of applying these methods in practical machine learning scenarios.