7 Jun 2024 • Nikolaos Tsilivis, Natalie Frank, Nathan Srebro, Julia Kempe
We study the implicit bias of optimization in robust empirical risk minimization (robust ERM) and its connection with robust generalization.
NeurIPS 2023 • Natalie Frank, Jonathan Niles-Weed
We study the consistency of surrogate risks for robust binary classification.
NeurIPS 2021 • Pranjal Awasthi, Natalie Frank, Mehryar Mohri
Adversarial robustness is a critical property in a wide range of modern machine learning applications.
NeurIPS 2021 • Pranjal Awasthi, Natalie Frank, Anqi Mao, Mehryar Mohri, Yutao Zhong
We give a characterization of H-calibration and prove that, for these hypothesis sets, some surrogate losses are indeed H-calibrated for the adversarial loss.
21 Jul 2020 • Pranjal Awasthi, Natalie Frank, Mehryar Mohri
Linear predictors form a rich class of hypotheses used in a variety of learning algorithms.
ICML 2020 • Pranjal Awasthi, Natalie Frank, Mehryar Mohri
We give upper and lower bounds for the adversarial empirical Rademacher complexity of linear hypotheses with adversarial perturbations measured in the $\ell_r$-norm for an arbitrary $r \geq 1$.