no code implementations • 26 Jun 2023 • Kyriakos Axiotis, Maxim Sviridenko
We show that running gradient descent with variable learning rate guarantees loss $f(x) \leq 1.1 \cdot f(x^*) + \epsilon$ for the logistic regression objective, where the error $\epsilon$ decays exponentially with the number of iterations and polynomially with the magnitude of the entries of an arbitrary fixed solution $x^*$.
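A minimal sketch of the setting, assuming a standard logistic loss over labels in $\{-1,+1\}$ and using backtracking line search as a stand-in for the paper's variable learning-rate schedule (function names and Armijo constants are illustrative):

```python
import numpy as np

def logistic_loss(X, y, w):
    # Average logistic loss; labels y are in {-1, +1}.
    return np.mean(np.logaddexp(0.0, -y * (X @ w)))

def logistic_grad(X, y, w):
    s = 1.0 / (1.0 + np.exp(y * (X @ w)))  # sigmoid of the negative margins
    return -(X.T @ (y * s)) / len(y)

def gd_variable_lr(X, y, iters=200, eta0=1.0):
    # Gradient descent where the step size is re-chosen every iteration by
    # backtracking (a stand-in for the paper's variable schedule).
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        g = logistic_grad(X, y, w)
        eta, f0 = eta0, logistic_loss(X, y, w)
        while logistic_loss(X, y, w - eta * g) > f0 - 0.5 * eta * (g @ g):
            eta *= 0.5  # shrink until the Armijo condition holds
        w -= eta * g
    return w
```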
no code implementations • 11 Apr 2022 • Kyriakos Axiotis, Maxim Sviridenko
We propose a simple modification to the iterative hard thresholding (IHT) algorithm, which recovers asymptotically sparser solutions as a function of the condition number.
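For context, plain IHT alternates a gradient step with hard thresholding to the $s$ largest-magnitude coordinates; the paper's modification to this loop is not reproduced here. A minimal sketch for the least-squares objective:

```python
import numpy as np

def hard_threshold(x, s):
    # Keep the s largest-magnitude entries of x, zero out the rest.
    out = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-s:]
    out[idx] = x[idx]
    return out

def iht(X, y, s, iters=300):
    # Plain IHT for min ||Xw - y||^2 subject to ||w||_0 <= s.
    eta = 1.0 / np.linalg.norm(X, 2) ** 2  # step size from the spectral norm
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        w = hard_threshold(w - eta * X.T @ (X @ w - y), s)
    return w
```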
no code implementations • 18 Aug 2021 • Shaunak Mishra, Changwei Hu, Manisha Verma, Kevin Yen, Yifan Hu, Maxim Sviridenko
To realize this opportunity, we propose an ad text strength indicator (TSI) which: (i) predicts the click-through rate (CTR) for an input ad text, (ii) fetches similar existing ads to create a neighborhood around the input ad, and (iii) compares the predicted CTRs in the neighborhood to declare whether the input ad is strong or weak.
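A hedged sketch of the three-step pipeline; `ctr_model` (a trained CTR predictor) and `embed` (an ad-text embedder) are assumed, caller-supplied components, and the cosine retrieval and median comparison are illustrative choices, not the paper's exact design:

```python
import numpy as np

def text_strength_indicator(ad_text, ctr_model, embed, index_texts, index_embs, k=10):
    # (i) Predict the CTR for the input ad text.
    ctr = ctr_model(ad_text)
    # (ii) Fetch the k most similar existing ads by cosine similarity.
    q = embed(ad_text)
    sims = index_embs @ q / (np.linalg.norm(index_embs, axis=1) * np.linalg.norm(q) + 1e-12)
    neighbors = np.argsort(sims)[-k:]
    # (iii) Compare against the neighborhood's predicted CTRs.
    neighbor_ctrs = np.array([ctr_model(index_texts[i]) for i in neighbors])
    return "strong" if ctr >= np.median(neighbor_ctrs) else "weak"
```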
no code implementations • 5 Aug 2021 • Shaunak Mishra, Mikhail Kuznetsov, Gaurav Srivastava, Maxim Sviridenko
Motivated by our observations in logged data on ad image search queries (given ad text), we formulate a keyword extraction problem, where a keyword extracted from the ad text (or its augmented version) serves as the ad image query.
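As a toy illustration of the interface (ad text in, image-query keyword out), the TF-IDF heuristic below is only a stand-in; the paper learns the extractor from logged query data:

```python
from sklearn.feature_extraction.text import TfidfVectorizer

def top_keyword(ad_text, corpus):
    # Illustrative stand-in: return the highest TF-IDF term of the ad text,
    # to serve as the ad image search query.
    vec = TfidfVectorizer(stop_words="english")
    vec.fit(corpus + [ad_text])
    scores = vec.transform([ad_text]).toarray()[0]
    return vec.get_feature_names_out()[scores.argmax()]
```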
no code implementations • ICLR 2021 • Kyriakos Axiotis, Maxim Sviridenko
We propose greedy and local search algorithms for rank-constrained convex optimization, namely solving $\underset{\mathrm{rank}(A)\leq r^*}{\min}\, R(A)$ given a convex function $R:\mathbb{R}^{m\times n}\rightarrow \mathbb{R}$ and a parameter $r^*$.
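A rough sketch of a greedy scheme for this problem, assuming access to the gradient $\nabla R$; expanding the rank via the top singular pair of the negative gradient and truncating back after inner gradient steps are illustrative choices, not the paper's exact greedy or local-search rules:

```python
import numpy as np

def greedy_rank_constrained(grad_R, m, n, r, eta=0.1, inner=50):
    # Greedy sketch for min R(A) over rank(A) <= r: grow the rank one unit
    # at a time, then refine with rank-truncated gradient steps.
    A = np.zeros((m, n))
    for k in range(1, r + 1):
        U, S, Vt = np.linalg.svd(-grad_R(A), full_matrices=False)
        A += eta * S[0] * np.outer(U[:, 0], Vt[0])   # rank-one greedy step
        for _ in range(inner):
            U, S, Vt = np.linalg.svd(A - eta * grad_R(A), full_matrices=False)
            A = (U[:, :k] * S[:k]) @ Vt[:k]          # truncate to rank <= k
    return A
```

For example, with $R(A)=\tfrac{1}{2}\|A-M\|_F^2$ one would pass `grad_R = lambda A: A - M`.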
no code implementations • ICML 2020 • Kyriakos Axiotis, Maxim Sviridenko
We present a new Adaptively Regularized Hard Thresholding (ARHT) algorithm that makes significant progress on this problem by bringing the bound down to $\gamma=O(\kappa)$, which has been shown to be tight for a general class of algorithms including LASSO, OMP, and IHT.
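A hedged caricature of the idea: a ridge term is added to the hard-thresholding gradient step, and the regularization weight decays over iterations. The decay schedule here is invented; the paper's adaptive rule for the regularizer differs:

```python
import numpy as np

def arht_sketch(X, y, s, iters=300, lam0=1.0):
    # Regularized hard-thresholding sketch: gradient step on the ridge
    # objective, keep the s largest magnitudes, shrink the regularizer.
    n, d = X.shape
    eta = 1.0 / (np.linalg.norm(X, 2) ** 2 + lam0)
    w, lam = np.zeros(d), lam0
    for t in range(iters):
        z = w - eta * (X.T @ (X @ w - y) + lam * w)  # regularized step
        idx = np.argsort(np.abs(z))[-s:]             # s largest magnitudes
        w = np.zeros(d)
        w[idx] = z[idx]
        lam = lam0 / (t + 2)                         # illustrative decay
    return w
```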
no code implementations • 1 Jun 2019 • Robert Busa-Fekete, Krzysztof Dembczynski, Alexander Golovnev, Kalina Jasinska, Mikhail Kuznetsov, Maxim Sviridenko, Chao Xu
First, we show that finding a tree with optimal training cost is NP-complete; nevertheless, there are some tractable special cases that admit either a perfect approximation or an exact solution, obtainable in time linear in the number of labels $m$.
no code implementations • 18 Dec 2014 • Edo Liberty, Ram Sriharsha, Maxim Sviridenko
We also show that, experimentally, our algorithm is not much worse than k-means++ while operating in a strictly more constrained computational model.
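A sketch in the spirit of online $k$-means with facility-cost-based center opening: each point opens a new center with probability proportional to its squared distance to the nearest existing center. The constants and the doubling rule below are illustrative:

```python
import numpy as np

def online_kmeans(stream, k, f0=1.0, seed=0):
    # Each arriving point opens a new center with probability proportional
    # to its squared distance to the nearest center; the facility cost f
    # doubles once too many centers are open.
    rng = np.random.default_rng(seed)
    centers, f = [], f0
    for x in stream:
        x = np.asarray(x, dtype=float)
        d2 = min((np.sum((x - c) ** 2) for c in centers), default=np.inf)
        if rng.random() < min(d2 / f, 1.0):
            centers.append(x)        # open a new center at x
        if len(centers) > 3 * k:
            f *= 2.0                 # raise the facility cost threshold
    return centers
```

The full algorithm also bounds the total number of centers opened per phase; that bookkeeping is omitted in this sketch.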