no code implementations • 27 Feb 2024 • Taisuke Yasuda, Kyriakos Axiotis, Gang Fu, Mohammadhossein Bateni, Vahab Mirrokni
Neural network pruning is a key technique for engineering models that are large yet scalable, interpretable, and generalizable.
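To make the notion of pruning concrete, here is a minimal magnitude-pruning sketch in NumPy that zeroes out the smallest-magnitude entries of a weight matrix. It is a generic baseline for illustration only, not the method of this paper.

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out (approximately) the smallest-magnitude fraction of
    entries. A generic magnitude-pruning baseline, shown only to make
    the notion of pruning concrete; not this paper's method."""
    k = int(sparsity * weights.size)
    if k == 0:
        return weights.copy()
    thresh = np.partition(np.abs(weights).ravel(), k - 1)[k - 1]
    pruned = weights.copy()
    pruned[np.abs(pruned) <= thresh] = 0.0  # ties may prune slightly more than k
    return pruned

W = np.random.default_rng(0).standard_normal((64, 64))
W_sparse = magnitude_prune(W, sparsity=0.9)
print((W_sparse == 0).mean())               # ~0.9 of entries removed
```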
no code implementations • 14 Jul 2023 • Kyriakos Axiotis, Taisuke Yasuda
We give the first recovery guarantees for the Group LASSO for sparse convex optimization with vector-valued features.
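For context, the Group LASSO penalizes the sum of $\ell_2$ norms of coordinate groups, which drives entire groups of coefficients to zero at once. Below is a minimal proximal-gradient solver sketch (a standard textbook algorithm with an assumed synthetic setup, not the paper's recovery analysis):

```python
import numpy as np

def group_lasso_prox_gd(X, y, groups, lam, step, iters=500):
    """Proximal gradient descent for
        min_w 0.5 * ||X w - y||^2 + lam * sum_g ||w_g||_2.
    Standard textbook solver; data and parameters below are assumptions."""
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        w = w - step * (X.T @ (X @ w - y))  # gradient step on the smooth part
        for g in groups:                    # prox step: block soft-thresholding
            norm = np.linalg.norm(w[g])
            if norm <= lam * step:
                w[g] = 0.0                  # the entire group is zeroed out
            else:
                w[g] *= 1.0 - lam * step / norm
    return w

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 12))
w_true = np.zeros(12)
w_true[:3] = 1.0                            # only the first group is active
y = X @ w_true + 0.01 * rng.standard_normal(100)
groups = [list(range(i, i + 3)) for i in range(0, 12, 3)]
w_hat = group_lasso_prox_gd(X, y, groups, lam=1.0, step=1e-3)
```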
no code implementations • 1 Jun 2023 • David P. Woodruff, Taisuke Yasuda
In this work, we show the first bounds for sensitivity sampling for $\ell_p$ subspace embeddings for $p > 2$ that improve over the general $\mathfrak{S} d$ bound, where $\mathfrak{S}$ denotes the total sensitivity, achieving a bound of roughly $\mathfrak{S}^{2-2/p}$ for $2 < p < \infty$.
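To illustrate the scheme being analyzed: sensitivity sampling keeps rows with probability proportional to their sensitivities and reweights the kept rows. The sketch below uses $\ell_2$ leverage scores, which coincide with the sensitivities for $p = 2$; for general $p$ one would substitute $\ell_p$ sensitivity overestimates. This is illustrative only, not the paper's sharper analysis.

```python
import numpy as np

def sensitivity_sample(A, m, seed=0):
    """Sample m rows with probability proportional to their l2
    sensitivities (= leverage scores) and reweight so that the sampled
    matrix S satisfies E[S^T S] = A^T A. Illustrative sketch only."""
    rng = np.random.default_rng(seed)
    Q, _ = np.linalg.qr(A)          # thin QR; leverage scores = row norms of Q
    tau = np.sum(Q * Q, axis=1)     # total sensitivity sums to d here
    prob = tau / tau.sum()
    idx = rng.choice(A.shape[0], size=m, replace=True, p=prob)
    return A[idx] / np.sqrt(m * prob[idx])[:, None]

A = np.random.default_rng(1).standard_normal((2000, 10))
S = sensitivity_sample(A, m=200)    # 200 rows approximating A spectrally
```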
1 code implementation • 29 Sep 2022 • Taisuke Yasuda, Mohammadhossein Bateni, Lin Chen, Matthew Fahrbach, Gang Fu, Vahab Mirrokni
Feature selection is the problem of selecting, subject to a budget constraint, a subset of features that maximizes the quality of a machine learning model.
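A classical baseline for this problem is greedy forward selection under a cardinality budget, sketched below with a least-squares quality measure (an assumed choice for illustration; the paper's algorithm, whose code is linked above, is different):

```python
import numpy as np

def greedy_feature_selection(X, y, budget):
    """Greedy forward selection: repeatedly add the feature that most
    reduces least-squares error until the budget is exhausted."""
    selected = []
    for _ in range(budget):
        best_j, best_err = None, np.inf
        for j in range(X.shape[1]):
            if j in selected:
                continue
            cols = selected + [j]
            w, *_ = np.linalg.lstsq(X[:, cols], y, rcond=None)
            err = np.linalg.norm(X[:, cols] @ w - y) ** 2
            if err < best_err:
                best_j, best_err = j, err
        selected.append(best_j)
    return selected

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 20))
y = X[:, 3] - 2.0 * X[:, 7] + 0.1 * rng.standard_normal(100)
print(greedy_feature_selection(X, y, budget=2))  # expect features 7 and 3
```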
no code implementations • 17 Jul 2022 • David P. Woodruff, Taisuke Yasuda
Towards our result, we give the first analysis of "one-shot" Lewis weight sampling, which samples rows proportionally to their Lewis weights, with sample complexity $\tilde O(d^{p/2}/\epsilon^2)$ for $p > 2$.
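A sketch of the scheme the abstract refers to, assuming the Cohen-Peng fixed-point iteration for computing Lewis weights (known to converge for $p < 4$; the general case needs more care) followed by a single round of i.i.d. sampling:

```python
import numpy as np

def lewis_weights(A, p, iters=20):
    """Approximate l_p Lewis weights via the Cohen-Peng fixed-point
    iteration  w_i <- (a_i^T (A^T W^{1-2/p} A)^{-1} a_i)^{p/2}.
    Illustrative sketch only, not the paper's one-shot analysis."""
    w = np.ones(A.shape[0])
    for _ in range(iters):
        scale = w ** (1.0 - 2.0 / p)
        M = A.T @ (A * scale[:, None])              # A^T W^{1-2/p} A
        Minv = np.linalg.inv(M)
        lev = np.einsum('ij,jk,ik->i', A, Minv, A)  # a_i^T M^{-1} a_i
        w = lev ** (p / 2.0)
    return w

def one_shot_lewis_sample(A, p, m, seed=0):
    """Sample m rows i.i.d. proportionally to their Lewis weights and
    rescale by (m q_i)^{-1/p}, all in a single round ("one-shot")."""
    rng = np.random.default_rng(seed)
    q = lewis_weights(A, p)
    q = q / q.sum()
    idx = rng.choice(A.shape[0], size=m, replace=True, p=q)
    return A[idx] / (m * q[idx])[:, None] ** (1.0 / p)
```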
no code implementations • 9 Nov 2021 • Cameron Musco, Christopher Musco, David P. Woodruff, Taisuke Yasuda
By combining this with our techniques for $\ell_p$ regression, we obtain an active regression algorithm making $\tilde O(d^{1+\max\{1, p/2\}}/\mathrm{poly}(\epsilon))$ queries for such loss functions, including the Tukey and Huber losses, answering another question of [CD21].
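To make the loss functions concrete, here is the Huber loss minimized by iteratively reweighted least squares on fully labeled data. This is a standard solver sketch for context, not the paper's active (label-efficient) algorithm:

```python
import numpy as np

def huber_irls(X, y, delta=1.0, iters=50):
    """Minimize sum_i huber_delta(x_i^T w - y_i) via iteratively
    reweighted least squares. Full-data solver shown for context only;
    the paper's algorithm is active, querying few labels of y."""
    w = np.linalg.lstsq(X, y, rcond=None)[0]   # ordinary LS warm start
    for _ in range(iters):
        r = X @ w - y
        # Huber IRLS weights: 1 in the quadratic region, delta/|r| outside
        s = np.where(np.abs(r) <= delta, 1.0,
                     delta / np.maximum(np.abs(r), 1e-12))
        Xs = X * s[:, None]
        w = np.linalg.solve(X.T @ Xs, Xs.T @ y)  # weighted normal equations
    return w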
no code implementations • 15 May 2019 • Manuel Fernandez, David P. Woodruff, Taisuke Yasuda
We present tight lower bounds on the number of kernel evaluations required to approximately solve kernel ridge regression (KRR) and kernel $k$-means clustering (KKMC) on $n$ input points.
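For context, exact KRR forms the full $n \times n$ kernel matrix, costing $n^2$ kernel evaluations, which is the resource the lower bounds constrain. A minimal sketch with an RBF kernel (an assumed choice of kernel), not the paper's construction:

```python
import numpy as np

def rbf_kernel(X, Z, gamma=1.0):
    """Gaussian kernel matrix; each entry is one kernel evaluation."""
    sq = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq)

def kernel_ridge_regression(X, y, lam=1e-2, gamma=1.0):
    """Exact KRR: solve (K + lam I) alpha = y. Forming K already takes
    n^2 kernel evaluations, the quantity the lower bounds are about."""
    K = rbf_kernel(X, X, gamma)
    alpha = np.linalg.solve(K + lam * np.eye(len(y)), y)
    return alpha

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)
alpha = kernel_ridge_regression(X, y)
y_hat = rbf_kernel(X, X) @ alpha    # in-sample predictions
```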