no code implementations • 12 Mar 2024 • Ishaq Aden-Ali, Mikael Møller Høgsgaard, Kasper Green Larsen, Nikita Zhivotovskiy
Furthermore, we prove a near-optimal high-probability bound on this algorithm's error.
no code implementations • 18 Apr 2023 • Ishaq Aden-Ali, Yeshwanth Cherapanamjeri, Abhishek Shetty, Nikita Zhivotovskiy
In this paper, we address this issue by providing optimal high probability risk bounds through a framework that surpasses the limitations of uniform convergence arguments.
no code implementations • 19 Dec 2022 • Ishaq Aden-Ali, Yeshwanth Cherapanamjeri, Abhishek Shetty, Nikita Zhivotovskiy
In one of the first COLT open problems, Warmuth conjectured that this prediction strategy always implies an optimal high probability bound on the risk, and hence is also an optimal PAC algorithm.
no code implementations • NeurIPS 2021 • Ishaq Aden-Ali, Hassan Ashtiani, Christopher Liaw
We show that if $\mathcal{F}$ is privately list-decodable, then we can privately learn mixtures of distributions in $\mathcal{F}$.
no code implementations • 19 Oct 2020 • Ishaq Aden-Ali, Hassan Ashtiani, Gautam Kamath
These are the first finite sample upper bounds for general Gaussians which do not impose restrictions on the parameters of the distribution.
no code implementations • 5 Dec 2019 • Ishaq Aden-Ali, Hassan Ashtiani
We show that the sample complexity of learning tree-structured SPNs with the usual type of leaves (i.e., Gaussian or discrete) grows at most linearly (up to logarithmic factors) with the number of parameters of the SPN.