no code implementations • 15 Jun 2023 • Ramnath Kumar, Kushal Majmundar, Dheeraj Nagaraj, Arun Sai Suggala
We present Re-weighted Gradient Descent (RGD), a novel optimization technique that improves the performance of deep neural networks through dynamic sample importance weighting.
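The core reweighting idea can be sketched as follows: scale each sample's contribution to the gradient by a softmax over the per-sample losses, so harder examples carry more weight. This is an illustrative weighting scheme on squared loss, not necessarily the paper's exact formulation:

```python
import numpy as np

def reweighted_gd_step(w, X, y, lr=0.1, temp=1.0):
    """One step of loss-reweighted gradient descent on squared loss.

    Each sample's gradient is scaled by a softmax over the per-sample
    losses, so currently-hard examples receive larger weight.
    (Illustrative weighting; the paper's exact scheme may differ.)
    """
    residuals = X @ w - y                          # per-sample errors
    losses = 0.5 * residuals ** 2                  # per-sample losses
    weights = np.exp((losses - losses.max()) / temp)
    weights /= weights.sum()                       # normalized importance weights
    grad = X.T @ (weights * residuals)             # reweighted gradient
    return w - lr * grad
```

With `temp` large the weights flatten toward uniform and the step reduces to ordinary averaged gradient descent; with `temp` small the update concentrates on the worst-fit samples.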
no code implementations • 15 Jun 2023 • Shubhada Agrawal, Sandeep Juneja, Karthikeyan Shanmugam, Arun Sai Suggala
Learning paradigms based purely on offline data as well as those based solely on sequential online learning have been well-studied in the literature.
no code implementations • 9 Jun 2023 • Anshul Nasery, Hardik Shah, Arun Sai Suggala, Prateek Jain
Our algorithm is versatile and can be used with many popular compression methods including pruning, low-rank factorization, and quantization.
no code implementations • 30 Jan 2023 • Xiyang Liu, Prateek Jain, Weihao Kong, Sewoong Oh, Arun Sai Suggala
Under label-corruption, this is the first efficient linear regression algorithm to guarantee both $(\varepsilon,\delta)$-DP and robustness.
no code implementations • 17 Jan 2023 • Soumyabrata Pal, Arun Sai Suggala, Karthikeyan Shanmugam, Prateek Jain
Instead, we propose LATTICE (Latent bAndiTs via maTrIx ComplEtion) which allows exploitation of the latent cluster structure to provide the minimax optimal regret of $\widetilde{O}(\sqrt{(\mathsf{M}+\mathsf{N})\mathsf{T}})$, when the number of clusters is $\widetilde{O}(1)$.
1 code implementation • 7 Jun 2022 • Dinghuai Zhang, Hongyang Zhang, Aaron Courville, Yoshua Bengio, Pradeep Ravikumar, Arun Sai Suggala
Consequently, an emerging line of work has focused on learning an ensemble of neural networks to defend against adversarial attacks.
1 code implementation • NeurIPS 2021 • Runtian Zhai, Chen Dan, Arun Sai Suggala, Zico Kolter, Pradeep Ravikumar
To learn such randomized classifiers, we propose the Boosted CVaR Classification framework, which is motivated by a direct relationship between CVaR and a classical boosting algorithm called LPBoost.
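For reference, the CVaR of a set of losses at level α is the average of the worst α-fraction of them. A minimal empirical version (the framework itself optimizes this quantity over randomized classifiers):

```python
import numpy as np

def cvar(losses, alpha):
    """Empirical Conditional Value at Risk at level alpha:
    the mean of the worst ceil(alpha * n) losses."""
    sorted_desc = np.sort(np.asarray(losses))[::-1]   # worst losses first
    k = int(np.ceil(alpha * len(sorted_desc)))
    return float(sorted_desc[:k].mean())
```

At α = 1 this recovers the average loss; as α shrinks it focuses entirely on the tail, which is what makes CVaR a natural objective for worst-case subgroup performance.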
no code implementations • 19 Jun 2020 • Kartik Gupta, Arun Sai Suggala, Adarsh Prasad, Praneeth Netrapalli, Pradeep Ravikumar
We view the problem of designing minimax estimators as finding a mixed strategy Nash equilibrium of a zero-sum game.
no code implementations • NeurIPS 2020 • Arun Sai Suggala, Praneeth Netrapalli
For Lipschitz and smooth nonconvex-nonconcave games, our algorithm requires access to an optimization oracle which computes the perturbed best response.
no code implementations • 19 Mar 2019 • Arun Sai Suggala, Kush Bhatia, Pradeep Ravikumar, Prateek Jain
We provide a nearly linear time estimator which consistently estimates the true regression vector, even with a $1-o(1)$ fraction of corruptions.
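One generic way to resist label corruption is to alternate between fitting on the samples that currently look clean and re-selecting those samples by residual. The sketch below illustrates this trimmed-loss idea; it is a standard illustration, not the paper's exact estimator:

```python
import numpy as np

def trimmed_least_squares(X, y, n_keep, n_iters=20):
    """Alternating trimmed least squares for regression with
    corrupted labels: fit on the n_keep samples with the smallest
    current residuals, then re-select. (Generic illustration of
    the trimmed-loss idea, not the paper's algorithm.)"""
    keep = np.arange(len(y))                   # start with all samples
    w = np.zeros(X.shape[1])
    for _ in range(n_iters):
        w, *_ = np.linalg.lstsq(X[keep], y[keep], rcond=None)
        resid = np.abs(X @ w - y)
        keep = np.argsort(resid)[:n_keep]      # keep likely-clean samples
    return w
```

A single least-squares fit is badly skewed by large label outliers, but one round of trimming typically isolates them, after which the refit on the retained samples recovers the true vector.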
no code implementations • 19 Mar 2019 • Arun Sai Suggala, Praneeth Netrapalli
We show that the classical Follow the Perturbed Leader (FTPL) algorithm achieves the optimal regret rate of $O(T^{-1/2})$ in this setting.
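For intuition, the expert-setting version of FTPL adds fresh random perturbations to the cumulative losses each round and plays the resulting perturbed leader. This is a sketch with exponential noise; the paper analyzes FTPL for online nonconvex learning with access to an offline optimization oracle:

```python
import numpy as np

def ftpl_predictions(loss_matrix, eta=1.0, seed=0):
    """Follow the Perturbed Leader over d experts.

    At each round, play the expert minimizing cumulative loss minus
    a fresh exponential perturbation (Kalai-Vempala style sketch).
    loss_matrix has shape (T, d): per-round, per-expert losses.
    """
    rng = np.random.default_rng(seed)
    T, d = loss_matrix.shape
    cum = np.zeros(d)
    picks = []
    for t in range(T):
        noise = rng.exponential(scale=1.0 / eta, size=d)
        picks.append(int(np.argmin(cum - noise)))  # perturbed leader
        cum += loss_matrix[t]
    return picks
```

The perturbation is what stabilizes the leader choice: following the unperturbed leader can be forced to switch every round and incur linear regret.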
2 code implementations • 27 Jan 2019 • Chih-Kuan Yeh, Cheng-Yu Hsieh, Arun Sai Suggala, David I. Inouye, Pradeep Ravikumar
We analyze optimal explanations with respect to both of these measures. While the optimal explanation for sensitivity is a vacuous constant explanation, the optimal explanation for infidelity is a novel combination of two popular explanation methods.
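The infidelity measure can be estimated by Monte Carlo as the expected squared gap between the change the explanation predicts under a perturbation and the function's actual change. A sketch assuming a Gaussian perturbation distribution (the measure is defined for general perturbation distributions):

```python
import numpy as np

def infidelity(f, x, explanation, n_samples=1000, sigma=0.5, seed=0):
    """Monte-Carlo estimate of explanation infidelity:
    E_I[(I . explanation - (f(x) - f(x - I)))^2] under Gaussian
    perturbations I. (Sketch; the perturbation distribution is an
    assumption here.)"""
    rng = np.random.default_rng(seed)
    errs = []
    for _ in range(n_samples):
        I = rng.normal(scale=sigma, size=x.shape[0])
        predicted_change = I @ explanation      # what the explanation implies
        actual_change = f(x) - f(x - I)         # what the function does
        errs.append((predicted_change - actual_change) ** 2)
    return float(np.mean(errs))
```

For a linear function the gradient is a perfectly faithful explanation, so its infidelity is zero; nonlinear functions generically leave a positive residual for any single linear explanation.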
no code implementations • 7 Jun 2018 • Arun Sai Suggala, Adarsh Prasad, Vaishnavh Nagarajan, Pradeep Ravikumar
Based on the modified definition, we show that there is no trade-off between adversarial and standard accuracies; there exist classifiers that are robust and achieve high standard accuracy.
no code implementations • 19 Feb 2018 • Adarsh Prasad, Arun Sai Suggala, Sivaraman Balakrishnan, Pradeep Ravikumar
We provide a new computationally-efficient class of estimators for risk minimization.
no code implementations • ICML 2017 • Arun Sai Suggala, Eunho Yang, Pradeep Ravikumar
While there has been some work on tractable approximations, these do not come with strong statistical guarantees and are, moreover, computationally expensive.
no code implementations • ICML 2017 • Ian En-Hsu Yen, Wei-Cheng Lee, Sung-En Chang, Arun Sai Suggala, Shou-De Lin, Pradeep Ravikumar
The latent feature model (LFM), proposed in (Griffiths & Ghahramani, 2005), but possibly with earlier origins, is a generalization of a mixture model, where each instance is generated not from a single latent class but from a combination of latent features.
1 code implementation • ICML 2017 • Chirag Gupta, Arun Sai Suggala, Ankit Goyal, Harsha Vardhan Simhadri, Bhargavi Paranjape, Ashish Kumar, Saurabh Goyal, Raghavendra Udupa, Manik Varma, Prateek Jain
Such applications demand prediction models with small storage and computational complexity that do not compromise significantly on accuracy.
1 code implementation • 19 May 2015 • Wesley Tansey, Oscar Hernan Madrid Padilla, Arun Sai Suggala, Pradeep Ravikumar
Specifically, VS-MRFs are the joint graphical model distributions where the node-conditional distributions belong to generic exponential families with general vector space domains.