no code implementations • 10 May 2024 • Omri Ben-Dov, Jake Fawkes, Samira Samadi, Amartya Sanyal
Collective action in Machine Learning is the study of the control that a coordinated group can have over machine learning algorithms.
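To make the setting concrete, here is a hypothetical sketch (not the paper's algorithm): a small coordinated collective plants a signal in its share of the training data and then checks how reliably the learned model maps signalled inputs to its target label.

```python
# Hypothetical illustration of algorithmic collective action:
# a coordinated group controlling a small fraction of the training
# data plants a signal so the learned classifier maps signalled
# inputs to a target label.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, d = 5000, 20
X = rng.normal(size=(n, d))
y = (X[:, 0] > 0).astype(int)     # the base task

alpha = 0.05                      # fraction controlled by the collective
k = int(alpha * n)
X[:k, -1] = 5.0                   # the collective's signal: spike one feature
y[:k] = 1                         # ...and relabel to the target class

model = LogisticRegression(max_iter=1000).fit(X, y)

X_test = rng.normal(size=(1000, d))
X_test[:, -1] = 5.0               # apply the signal at test time
success = (model.predict(X_test) == 1).mean()
print(f"collective's success rate: {success:.2f}")
```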
no code implementations • 19 Mar 2024 • Yaxi Hu, Amartya Sanyal, Bernhard Schölkopf
When analysing Differentially Private (DP) machine learning pipelines, the potential privacy cost of data-dependent pre-processing is frequently overlooked in privacy accounting.
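A minimal sketch of the pipeline structure at issue (illustrative only; `train_with_dp` is a hypothetical placeholder): the normalization statistics below are computed from the private data itself, so an analysis that accounts only for the DP training step silently ignores their privacy cost.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(loc=3.0, scale=2.0, size=(1000, 5))   # private data

# Data-dependent pre-processing: these statistics are functions of the
# private data, so they carry a privacy cost of their own.
mu, sigma = X.mean(axis=0), X.std(axis=0)
X_norm = (X - mu) / sigma

def train_with_dp(data, epsilon):
    """Placeholder for a DP training routine (e.g. DP-SGD).
    Accounting only for this step treats mu and sigma as free,
    which they are not."""
    ...

model = train_with_dp(X_norm, epsilon=1.0)
```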
no code implementations • 26 Feb 2024 • Daniil Dmitriev, Kristóf Szabó, Amartya Sanyal
In this paper, we provide lower bounds for Differentially Private (DP) Online Learning algorithms.
1 code implementation • 21 Feb 2024 • Shashwat Goel, Ameya Prabhu, Philip Torr, Ponnurangam Kumaraguru, Amartya Sanyal
We hope our work spurs research towards developing better methods for corrective unlearning and offers practitioners a new strategy to handle data integrity challenges arising from web-scale training.
no code implementations • NeurIPS 2023 • Alexandru Ţifrea, Gizem Yüce, Amartya Sanyal, Fanny Yang
Prior works have shown that semi-supervised learning algorithms can leverage unlabeled data to improve over the labeled sample complexity of supervised learning (SL) algorithms.
no code implementations • 12 Jun 2023 • Piersilvio De Bartolomeis, Jacob Clarysse, Amartya Sanyal, Fanny Yang
In this paper, we systematically compare the standard and robust error of these two robust training paradigms across multiple computer vision tasks.
1 code implementation • 6 Jun 2023 • Francesco Pinto, Yaxi Hu, Fanny Yang, Amartya Sanyal
In Semi-Supervised Semi-Private (SP) learning, the learner has access to both public unlabelled and private labelled data.
no code implementations • 25 Apr 2023 • Aleksandar Petrov, Francisco Eiras, Amartya Sanyal, Philip H. S. Torr, Adel Bibi
Improving and guaranteeing the robustness of deep learning models has been a topic of intense research.
no code implementations • 10 Oct 2022 • Amartya Sanyal, Giorgia Ramponi
Online learning, in the mistake bound model, is one of the most fundamental concepts in learning theory.
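For background (a textbook sketch, not from the paper): the mistake bound model counts the worst-case number of prediction errors an online learner makes, and the classic halving algorithm over a finite hypothesis class H makes at most log2 |H| mistakes whenever some hypothesis in H is perfect.

```python
import math

def halving(hypotheses, stream):
    """Textbook halving algorithm: predict by majority vote of the
    remaining consistent hypotheses. Every mistake at least halves the
    version space, so at most log2(|H|) mistakes occur if some h in H
    labels the stream perfectly."""
    version_space = list(hypotheses)
    mistakes = 0
    for x, y in stream:
        votes = sum(h(x) for h in version_space)
        y_hat = int(votes * 2 >= len(version_space))   # majority vote
        if y_hat != y:
            mistakes += 1
        version_space = [h for h in version_space if h(x) == y]
    return mistakes

# Example: threshold functions on {0,...,9}; the target threshold is 7.
H = [lambda x, t=t: int(x >= t) for t in range(10)]
stream = [(x, int(x >= 7)) for x in [3, 9, 6, 7, 8, 2]]
print(halving(H, stream), "<=", math.ceil(math.log2(len(H))))
```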
no code implementations • 8 Jul 2022 • Daniel Paleka, Amartya Sanyal
In supervised learning, it has been shown that label noise in the data can be interpolated without any penalty to test accuracy.
no code implementations • 17 Jun 2022 • Yuge Shi, Imant Daunhawer, Julia E. Vogt, Philip H. S. Torr, Amartya Sanyal
As such, there is a lack of insight into the robustness to distribution shift of representations learned by unsupervised methods, such as self-supervised learning (SSL) and auto-encoder based algorithms (AE).
1 code implementation • 16 Jun 2022 • Guillermo Ortiz-Jiménez, Pau de Jorge, Amartya Sanyal, Adel Bibi, Puneet K. Dokania, Pascal Frossard, Grégory Rogez, Philip H. S. Torr
Through extensive experiments, we analyze this novel phenomenon and discover that the presence of these easy features induces a learning shortcut that leads to CO. Our findings provide new insights into the mechanisms of CO and improve our understanding of the dynamics of AT.
no code implementations • 8 Jun 2022 • Amartya Sanyal, Yaxi Hu, Fanny Yang
As machine learning algorithms are deployed on sensitive data in critical decision making processes, it is becoming increasingly important that they are also private and fair.
1 code implementation • 2 Feb 2022 • Pau de Jorge, Adel Bibi, Riccardo Volpi, Amartya Sanyal, Philip H. S. Torr, Grégory Rogez, Puneet K. Dokania
Recently, Wong et al. showed that adversarial training with single-step FGSM leads to a characteristic failure mode named Catastrophic Overfitting (CO), in which a model becomes suddenly vulnerable to multi-step attacks.
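For reference, here is a minimal PyTorch sketch of generic single-step FGSM adversarial training (the baseline being analyzed, not the paper's proposed remedy): the attack takes one signed-gradient step of size epsilon.

```python
import torch

def fgsm_adversarial_step(model, loss_fn, x, y, epsilon, optimizer):
    """One training step of single-step FGSM adversarial training."""
    # Craft the FGSM perturbation: a single signed-gradient step.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    grad, = torch.autograd.grad(loss, x_adv)
    x_adv = (x + epsilon * grad.sign()).clamp(0.0, 1.0).detach()

    # Train on the adversarial example.
    optimizer.zero_grad()
    adv_loss = loss_fn(model(x_adv), y)
    adv_loss.backward()
    optimizer.step()
    return adv_loss.item()
```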
3 code implementations • 17 Jan 2022 • Shashwat Goel, Ameya Prabhu, Amartya Sanyal, Ser-Nam Lim, Philip Torr, Ponnurangam Kumaraguru
Machine Learning models raise growing concerns regarding the storage of personal user data and the adverse impact of corrupted data, such as backdoors or systematic bias.
no code implementations • 29 Sep 2021 • Pau de Jorge, Adel Bibi, Riccardo Volpi, Amartya Sanyal, Philip Torr, Grégory Rogez, Puneet K. Dokania
In this work, we methodically revisit the role of noise and clipping in single-step adversarial training.
no code implementations • 16 Aug 2021 • Amartya Sanyal
Deep learning research has recently witnessed impressively fast-paced progress in a wide range of tasks, including computer vision, natural language processing, and reinforcement learning.
no code implementations • ICLR 2021 • Amartya Sanyal, Puneet K. Dokania, Varun Kanade, Philip Torr
We investigate two causes for adversarial vulnerability in deep neural networks: bad data and (poorly) trained models.
no code implementations • 8 Jul 2020 • Amartya Sanyal, Puneet K. Dokania, Varun Kanade, Philip H. S. Torr
We investigate two causes for adversarial vulnerability in deep neural networks: bad data and (poorly) trained models.
1 code implementation • ICLR 2021 • Pau de Jorge, Amartya Sanyal, Harkirat S. Behl, Philip H. S. Torr, Gregory Rogez, Puneet K. Dokania
Recent studies have shown that skeletonization (pruning parameters) of networks at initialization provides all the practical benefits of sparsity at both inference and training time, while only marginally degrading their performance.
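As a rough illustration of the idea (a SNIP-style one-shot sketch; the paper's procedure is progressive rather than one-shot), pruning at initialization typically scores every weight on a single mini-batch and keeps only the top fraction.

```python
import torch
import torch.nn.functional as F

def prune_at_init(model, x, y, keep_ratio=0.1):
    """SNIP-style one-shot pruning at initialization: score each weight
    by |weight * gradient| on one mini-batch and keep the top fraction."""
    loss = F.cross_entropy(model(x), y)
    params = [p for p in model.parameters() if p.dim() > 1]
    grads = torch.autograd.grad(loss, params)
    scores = torch.cat([(p * g).abs().flatten() for p, g in zip(params, grads)])
    k = int(keep_ratio * scores.numel())
    threshold = torch.topk(scores, k).values.min()
    masks = [((p * g).abs() >= threshold).float() for p, g in zip(params, grads)]
    # Zero out the pruned weights.
    with torch.no_grad():
        for p, m in zip(params, masks):
            p.mul_(m)
    return masks
```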
3 code implementations • NeurIPS 2020 • Jishnu Mukhoti, Viveka Kulharia, Amartya Sanyal, Stuart Golodetz, Philip H. S. Torr, Puneet K. Dokania
To facilitate the use of focal loss in practice, we also provide a principled approach to automatically select the hyperparameter involved in the loss function.
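For concreteness, a minimal PyTorch sketch of the focal loss itself (the automatic selection rule for gamma is the paper's contribution and is not reproduced here):

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=3.0):
    """Multi-class focal loss: FL(p_t) = -(1 - p_t)^gamma * log(p_t).
    gamma = 0 recovers standard cross-entropy; larger gamma down-weights
    examples that are already well classified."""
    log_pt = F.log_softmax(logits, dim=-1).gather(1, targets.unsqueeze(1)).squeeze(1)
    pt = log_pt.exp()
    return ((1.0 - pt) ** gamma * (-log_pt)).mean()

# Usage:
logits = torch.randn(8, 10)
targets = torch.randint(0, 10, (8,))
print(focal_loss(logits, targets, gamma=3.0))
```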
no code implementations • 25 Sep 2019 • Jishnu Mukhoti, Viveka Kulharia, Amartya Sanyal, Stuart Golodetz, Philip Torr, Puneet Dokania
When combined with temperature scaling, focal loss, whilst preserving accuracy and yielding state-of-the-art calibrated models, also preserves the confidence of the model's correct predictions, which is extremely desirable for downstream tasks.
no code implementations • ICLR 2020 • Amartya Sanyal, Philip H. S. Torr, Puneet K. Dokania
Exciting new work on generalization bounds for neural networks (NNs) by Neyshabur et al. and Bartlett et al. depends closely on two parameter-dependent quantities: the Lipschitz constant upper bound and the stable rank (a softer version of the rank operator).
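Both quantities are cheap to compute for a single weight matrix W; a small sketch (not the paper's normalization scheme): the stable rank is ||W||_F^2 / ||W||_2^2 and never exceeds the rank.

```python
import torch

def stable_rank(W):
    """Stable rank: squared Frobenius norm over squared spectral norm.
    Always <= rank(W), and robust to small singular values."""
    s = torch.linalg.svdvals(W)   # singular values, descending
    return (s ** 2).sum() / s[0] ** 2

W = torch.randn(256, 128)
print(stable_rank(W))             # <= min(256, 128)
```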
1 code implementation • ICML 2018 • Amartya Sanyal, Matt J. Kusner, Adrià Gascón, Varun Kanade
The main drawback of using fully homomorphic encryption is the amount of time required to evaluate large machine learning models on encrypted data.
no code implementations • ICLR 2019 • Amartya Sanyal, Varun Kanade, Philip H. S. Torr, Puneet K. Dokania
To achieve low dimensionality of learned representations, we propose an easy-to-use, end-to-end trainable, low-rank regularizer (LR) that can be applied to any intermediate layer representation of a DNN.
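A rough sketch of one way such a penalty can look (assuming a nuclear-norm surrogate; the paper's exact regularizer may differ): add the nuclear norm of a batch's activation matrix to the task loss, pushing intermediate representations toward low rank.

```python
import torch

def low_rank_penalty(activations):
    """Nuclear-norm surrogate for the rank of a batch of representations.
    activations: (batch, features) output of an intermediate layer."""
    return torch.linalg.matrix_norm(activations, ord="nuc")

# Usage inside a training step (lambda_lr is the regularization weight):
# loss = task_loss + lambda_lr * low_rank_penalty(hidden)
```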
no code implementations • 31 Jan 2018 • Amartya Sanyal, Pawan Kumar, Purushottam Kar, Sanjay Chawla, Fabrizio Sebastiani
We present a class of algorithms capable of directly training deep neural networks with respect to large families of task-specific performance measures, such as the F-measure and the Kullback-Leibler divergence, that are structured and non-decomposable.
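To see why such measures need special treatment, here is a generic differentiable surrogate for the non-decomposable F1 score (an illustration, not the paper's method): F1 is a ratio of sums over the whole batch, so it cannot be written as an average of per-example losses.

```python
import torch

def soft_f1_loss(probs, targets, eps=1e-8):
    """Differentiable batch-level F1 surrogate for binary classification.
    probs: predicted probabilities in [0, 1]; targets: 0/1 labels.
    Hard counts are replaced by their probabilistic versions, making the
    batch-level ratio differentiable."""
    tp = (probs * targets).sum()
    fp = (probs * (1 - targets)).sum()
    fn = ((1 - probs) * targets).sum()
    f1 = 2 * tp / (2 * tp + fp + fn + eps)
    return 1.0 - f1
```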
no code implementations • 5 Jul 2017 • Amartya Sanyal, Sanjana Garg, Asim Unmesh
Understanding the evolution of human society, as a complex adaptive system, is a task that has been looked upon from various angles.
no code implementations • 3 Jul 2017 • Bart van Merriënboer, Amartya Sanyal, Hugo Larochelle, Yoshua Bengio
We propose a generalization of neural network sequence models.