12 Nov 2020 • Michael B. Cohen, Aaron Sidford, Kevin Tian
We show that standard extragradient methods (i.e., mirror prox and dual extrapolation) recover optimal accelerated rates for first-order minimization of smooth convex functions.
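To make the setup concrete, here is a minimal sketch of a Euclidean extragradient (mirror prox) iteration applied to the gradient field of a smooth convex quadratic. The test function, the step size $1/L$, and the iteration count are illustrative assumptions, and this shows only the vanilla method, not the paper's accelerated instantiation.

```python
import numpy as np

# A minimal Euclidean extragradient (mirror prox) sketch for minimizing a
# smooth convex quadratic f(x) = 0.5 * x^T A x - b^T x. The quadratic, the
# step size 1/L, and the iteration count are illustrative assumptions.
rng = np.random.default_rng(0)
n = 50
M = rng.standard_normal((n, n))
A = M.T @ M + np.eye(n)          # positive definite => f is smooth and convex
b = rng.standard_normal(n)
grad = lambda x: A @ x - b
L = np.linalg.eigvalsh(A).max()  # smoothness constant of f

x = np.zeros(n)
for _ in range(200):
    x_half = x - grad(x) / L     # extrapolation step
    x = x - grad(x_half) / L     # correction step using the midpoint gradient
print("gradient norm at solution:", np.linalg.norm(grad(x)))
```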
3 Nov 2017 • Sébastien Bubeck, Michael B. Cohen, Yuanzhi Li
In (online) learning theory, the concepts of sparsity, variance, and curvature are well understood and routinely used to obtain refined regret and generalization bounds.
23 Nov 2015 • Michael B. Cohen, Cameron Musco, Christopher Musco
Our method is based on a recursive sampling scheme for computing a representative subset of $A$'s columns, which is then used to find a low-rank approximation.
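A hedged sketch of the column-sampling idea follows, using exactly computed ridge leverage scores in place of the paper's recursive estimates; the test matrix, the rank $k$, and the sample budget are illustrative choices.

```python
import numpy as np

# Illustrative (non-recursive) column sampling for low-rank approximation:
# sample columns of A proportionally to their ridge leverage scores, project
# A onto the sampled columns, and truncate to rank k. The paper estimates
# the scores recursively on subsampled matrices; here they are computed
# exactly for clarity.
rng = np.random.default_rng(1)
A = rng.standard_normal((200, 20)) @ rng.standard_normal((20, 500))
A += 0.1 * rng.standard_normal((200, 500))   # noisy matrix with planted rank 20
k = 10

# Full SVD, used for the ridge parameter and for the baseline comparison.
U0, S0, Vt0 = np.linalg.svd(A, full_matrices=False)
lam = (S0[k:] ** 2).sum() / k                # lambda = ||A - A_k||_F^2 / k

# Exact ridge leverage score of column a_i: a_i^T (A A^T + lam I)^{-1} a_i.
X = np.linalg.solve(A @ A.T + lam * np.eye(A.shape[0]), A)
scores = np.einsum("ij,ij->j", A, X)

# Sample s columns by leverage score, project onto their span, truncate to k.
s = 4 * k                                    # sample budget (ad hoc choice)
idx = rng.choice(A.shape[1], size=s, replace=True, p=scores / scores.sum())
Q, _ = np.linalg.qr(A[:, idx])
U, S, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
A_k = Q @ ((U[:, :k] * S[:k]) @ Vt[:k])

best_k = (U0[:, :k] * S0[:k]) @ Vt0[:k]      # true best rank-k approximation
print("sampled:", np.linalg.norm(A - A_k, "fro"),
      "optimal:", np.linalg.norm(A - best_k, "fro"))
```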
8 Jul 2015 • Michael B. Cohen, Jelani Nelson, David P. Woodruff
We prove, using the subspace embedding guarantee in a black box way, that one can achieve the spectral norm guarantee for approximate matrix multiplication with a dimensionality-reducing map having $m = O(\tilde{r}/\varepsilon^2)$ rows.
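A minimal sketch of this setup, assuming a Gaussian sketching map $S$ and an ad hoc sketch size $m$: approximate $A^\top B$ by $(SA)^\top (SB)$ and measure the spectral-norm error relative to $\|A\|\,\|B\|$, the scale in which the paper's guarantee is stated.

```python
import numpy as np

# Approximate matrix multiplication via a dimensionality-reducing map:
# approximate A^T B by (S A)^T (S B), where S is an m x n Gaussian sketch.
# The Gaussian map and the fixed m = 400 are illustrative stand-ins; the
# paper's point is that m can scale with the stable rank rather than n.
rng = np.random.default_rng(2)
n, d = 2000, 50
A = rng.standard_normal((n, d))
B = rng.standard_normal((n, d))

m = 400                                       # sketch size (illustrative)
S = rng.standard_normal((m, n)) / np.sqrt(m)  # Gaussian sketching map

exact = A.T @ B
approx = (S @ A).T @ (S @ B)
rel_err = (np.linalg.norm(exact - approx, 2)
           / (np.linalg.norm(A, 2) * np.linalg.norm(B, 2)))
print("spectral-norm error relative to ||A|| ||B||:", rel_err)
```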
24 Oct 2014 • Michael B. Cohen, Sam Elder, Cameron Musco, Christopher Musco, Madalina Persu
We show how to approximate a data matrix $\mathbf{A}$ with a much smaller sketch $\mathbf{\tilde A}$ that can be used to solve a general class of constrained k-rank approximation problems to within $(1+\epsilon)$ error.
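As a hedged illustration of why such a sketch is useful, the snippet below checks that a small random-projection sketch $\mathbf{\tilde A} = \mathbf{A}\Pi$ approximately preserves the projection cost $\|\mathbf{A} - P\mathbf{A}\|_F^2$ for a rank-$k$ projection $P$, the property that lets constrained $k$-rank problems (e.g., $k$-means) be solved on the sketch. The Gaussian $\Pi$, the sketch width, and the particular $P$ tested are illustrative choices.

```python
import numpy as np

# Check that a sketch A_sk = A @ Pi approximately preserves the projection
# cost ||A - P A||_F^2 for a rank-k projection P. The Gaussian Pi, the sketch
# width m, and the choice of P are illustrative, not the paper's construction.
rng = np.random.default_rng(3)
A = rng.standard_normal((300, 1000))         # 300 points in 1000 dimensions
k = 5

m = 100                                      # sketch width (illustrative)
Pi = rng.standard_normal((1000, m)) / np.sqrt(m)
A_sk = A @ Pi                                # each point compressed to m dims

# One rank-k projection to test: onto the top-k left singular vectors.
U, _, _ = np.linalg.svd(A, full_matrices=False)
P = U[:, :k] @ U[:, :k].T

cost_full = np.linalg.norm(A - P @ A, "fro") ** 2
cost_sketch = np.linalg.norm(A_sk - P @ A_sk, "fro") ** 2
print(cost_full, cost_sketch)                # should agree up to (1 + eps)
```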
21 Aug 2014 • Michael B. Cohen, Yin Tat Lee, Cameron Musco, Christopher Musco, Richard Peng, Aaron Sidford
In addition to an improved understanding of uniform sampling, our main proof introduces a structural result of independent interest: we show that every matrix can be made to have low coherence by reweighting a small subset of its rows.
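A loose numerical illustration of the structural idea, assuming an ad hoc shrink factor and reweighting budget rather than the paper's construction: shrinking a few high-leverage rows lowers the matrix's coherence (its maximum leverage score).

```python
import numpy as np

# Demo: reweighting a small subset of rows reduces coherence, i.e., the
# maximum leverage score. The shrink factor 0.1 and the choice of 5 rows
# are ad hoc illustrative values, not the paper's construction.
rng = np.random.default_rng(4)
A = rng.standard_normal((500, 20))
A[0] *= 50.0                         # plant one highly coherent row

def leverage_scores(M):
    # tau_i = row_i (M^T M)^{-1} row_i^T, computed via a thin QR: tau_i = ||Q_i||^2
    Q, _ = np.linalg.qr(M)
    return np.einsum("ij,ij->i", Q, Q)

tau = leverage_scores(A)
print("coherence before:", tau.max())

W = np.ones(len(A))                  # row weights, initially uniform
heavy = np.argsort(tau)[-5:]         # the few rows we choose to reweight
W[heavy] = 0.1                       # shrink their weight (ad hoc factor)
tau_w = leverage_scores(W[:, None] * A)
print("coherence after:", tau_w.max())
```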