no code implementations • 27 Jun 2023 • Robi Bhattacharjee, Alexander Cloninger, Yoav Freund, Andreas Oslandsbotn
One attractive application of ER is to point clouds, i.e., graphs whose vertices correspond to IID samples from a distribution over a metric space.
no code implementations • 25 Feb 2023 • Robi Bhattacharjee, Sanjoy Dasgupta, Kamalika Chaudhuri
There has been some recent interest in detecting and addressing memorization of training data by deep neural networks.
no code implementations • 2 Oct 2022 • Robi Bhattacharjee, Max Hopkins, Akash Kumar, Hantao Yu, Kamalika Chaudhuri
Developing simple, sample-efficient learning algorithms for robust classification is a pressing issue in today's tech-dominated world. Current theoretical techniques, which require exponential sample complexity and complicated improper learning rules, fall far short of answering this need.
no code implementations • 28 Feb 2022 • Robi Bhattacharjee, Alex Cloninger, Yoav Freund, Andreas Oslandsbotn
Effective resistance (ER) is an attractive way to interrogate the structure of graphs.
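As a concrete illustration of the quantity these two ER papers study: effective resistance between two vertices can be computed from the Moore-Penrose pseudoinverse of the graph Laplacian via $R(u,v) = (e_u - e_v)^\top L^+ (e_u - e_v)$. A minimal sketch (the function name and the toy graph are illustrative, not from the papers):

```python
import numpy as np

def effective_resistance(adjacency, u, v):
    """Effective resistance between vertices u and v of a weighted graph,
    treating each edge weight as a conductance."""
    degrees = adjacency.sum(axis=1)
    laplacian = np.diag(degrees) - adjacency
    l_pinv = np.linalg.pinv(laplacian)       # Moore-Penrose pseudoinverse of L
    e = np.zeros(len(adjacency))
    e[u], e[v] = 1.0, -1.0                   # indicator difference e_u - e_v
    return float(e @ l_pinv @ e)

# Path graph 0 - 1 - 2 with unit-weight edges: two unit resistors in
# series, so the resistance between the endpoints is 2.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
print(effective_resistance(A, 0, 2))  # → 2.0
```

The electrical-network analogy (edges as resistors) is what makes ER a natural probe of graph structure: vertices connected by many short paths have low effective resistance.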
no code implementations • 9 Feb 2022 • Harrison Rosenberg, Robi Bhattacharjee, Kassem Fawaz, Somesh Jha
Given the prevalence of ERM sample complexity bounds, our proposed framework enables machine learning practitioners to easily understand the convergence behavior of multicalibration error for a myriad of classifier architectures.
no code implementations • 11 Jan 2022 • Robi Bhattacharjee, Gaurav Mahajan
We consider a lifelong learning scenario in which a learner faces a never-ending and arbitrary stream of facts and has to decide which ones to retain in its limited memory.
no code implementations • 18 Feb 2021 • Robi Bhattacharjee, Jacob Imola, Michal Moshkovitz, Sanjoy Dasgupta
We propose a data parameter, $\Lambda(X)$, such that for any algorithm maintaining $O(k \cdot \text{poly}(\log n))$ centers at time $n$, there exists a data stream $X$ for which a loss of $\Omega(\Lambda(X))$ is inevitable.
no code implementations • NeurIPS 2021 • Robi Bhattacharjee, Kamalika Chaudhuri
Learning classifiers that are robust to adversarial examples has received a great deal of recent attention.
no code implementations • 28 Dec 2020 • Robi Bhattacharjee, Michal Moshkovitz
We also prove that if the data is sampled from a "natural" distribution, such as a mixture of $k$ Gaussians, then the new complexity measure is equal to $O(k^2\log(n))$.
no code implementations • 19 Dec 2020 • Robi Bhattacharjee, Somesh Jha, Kamalika Chaudhuri
This shows that for very well-separated data, convergence rates of $O(\frac{1}{n})$ are achievable, which is not the case otherwise.
no code implementations • ICML 2020 • Robi Bhattacharjee, Kamalika Chaudhuri
A growing body of research has shown that many classifiers are susceptible to *adversarial examples* -- small strategic modifications to test inputs that lead to misclassification.
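To make the phenomenon concrete, here is a minimal toy sketch (not the method of this paper) showing how a small L-infinity perturbation against a linear classifier $\text{sign}(w \cdot x + b)$ can flip the predicted label while barely changing the input; the weights and input are hypothetical:

```python
import numpy as np

def predict(w, b, x):
    """Label predicted by a linear classifier sign(w.x + b)."""
    return 1 if w @ x + b > 0 else -1

w, b = np.array([1.0, 1.0]), 0.0
x = np.array([0.1, 0.1])           # classified as +1

eps = 0.2                          # perturbation budget (L-infinity norm)
x_adv = x - eps * np.sign(w)       # step against the weight vector

print(predict(w, b, x))            # → 1
print(predict(w, b, x_adv))        # → -1: the label flips
```

The perturbation moves each coordinate by at most `eps`, yet it is aimed exactly along the direction the classifier is most sensitive to; this is the basic mechanism that robust-classification work seeks to defend against.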
no code implementations • 13 Mar 2019 • Robi Bhattacharjee, Sanjoy Dasgupta
We consider the problem of embedding a relation, represented as a directed graph, into Euclidean space.