1 code implementation • 8 May 2024 • Jabari Hastings, Christopher Jung, Charlotte Peale, Vasilis Syrgkanis
A rich line of recent work has studied distributionally robust learning approaches that seek to learn a hypothesis that performs well, in the worst case, over many different distributions on a population.
no code implementations • 1 May 2024 • Lunjia Hu, Charlotte Peale, Judy Hanwen Shen
To address the shortcomings of real-world datasets, robust learning algorithms have been designed to overcome arbitrary and indiscriminate data corruption.
no code implementations • 16 Nov 2022 • Lunjia Hu, Charlotte Peale
We show that the sample complexity of comparative learning is characterized by the mutual VC dimension $\mathsf{VC}(S, B)$, which we define to be the maximum size of a subset shattered by both $S$ and $B$.
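The definition above can be made concrete with a small brute-force check over finite classes. The helpers `shatters` and `mutual_vc` below, and the toy classes in the usage note, are illustrative sketches not taken from the paper; they simply enumerate subsets and test whether both classes realize every labeling.

```python
from itertools import chain, combinations

def shatters(H, subset):
    """Check whether the class H (a collection of frozensets over the
    domain, each representing a binary hypothesis) shatters `subset`:
    every one of the 2^|subset| labelings must be realized by some h."""
    subset = tuple(subset)
    patterns = {tuple(x in h for x in subset) for h in H}
    return len(patterns) == 2 ** len(subset)

def mutual_vc(S, B, domain):
    """Brute-force the mutual VC dimension VC(S, B): the size of the
    largest subset of `domain` shattered by both S and B.
    Exponential in |domain|; for illustration on tiny examples only."""
    for r in range(len(domain), -1, -1):
        for subset in combinations(domain, r):
            if shatters(S, subset) and shatters(B, subset):
                return r
    return 0
```

For example, if `S` is the full powerset of `{0, 1, 2}` (VC dimension 3) and `B` contains only the empty set and singletons (VC dimension 1), the mutual VC dimension is 1: any single point is shattered by both classes, but no pair is shattered by `B`.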
no code implementations • 9 Mar 2022 • Lunjia Hu, Charlotte Peale, Omer Reingold
In this setting, we show that the sample complexity of outcome indistinguishability is characterized by the fat-shattering dimension of $D$.
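The fat-shattering dimension referenced here can likewise be illustrated by a brute-force computation for a finite class of real-valued functions. The helpers `fat_shatters` and `fat_dim` below are hypothetical sketches, not code from the paper: a set of points is $\gamma$-fat-shattered if there exist witness values such that every sign pattern is realized with margin at least $\gamma$.

```python
from itertools import combinations, product

def fat_shatters(F, points, gamma, witnesses):
    """Check whether the class F gamma-fat-shatters `points` relative to
    `witnesses`: for every sign pattern, some f in F is above r_i + gamma
    on the +1 points and below r_i - gamma on the -1 points."""
    for signs in product([+1, -1], repeat=len(points)):
        realized = any(
            all((f[x] >= r + gamma) if s == +1 else (f[x] <= r - gamma)
                for x, r, s in zip(points, witnesses, signs))
            for f in F)
        if not realized:
            return False
    return True

def fat_dim(F, domain, gamma):
    """Brute-force fat-shattering dimension at scale gamma for a finite
    class F of real-valued functions (dicts mapping domain -> value).
    Witness candidates are the observed values and their midpoints.
    Exponential search; for tiny illustrative examples only."""
    values = sorted({f[x] for f in F for x in domain})
    cands = values + [(a + b) / 2 for a, b in zip(values, values[1:])]
    for d in range(len(domain), 0, -1):
        for pts in combinations(domain, d):
            for ws in product(cands, repeat=d):
                if fat_shatters(F, pts, gamma, ws):
                    return d
    return 0
```

For instance, a class containing two functions taking values 0 and 1 at a single point fat-shatters that point at scale 0.4 (witness 0.5) but not at scale 0.6, since no witness leaves a margin of 0.6 on both sides.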