no code implementations • 23 Apr 2023 • Vasileios Charisopoulos, Hossein Esfandiari, Vahab Mirrokni
In this paper, we study the stochastic linear bandit problem under the additional requirements of differential privacy, robustness and batched observations.
no code implementations • 21 Oct 2022 • Hossein Esfandiari, Vahab Mirrokni, Jon Schneider
In this work, we present and study a new framework for online learning in systems with multiple users that provide user anonymity.
no code implementations • 4 Oct 2022 • Hossein Esfandiari, Alkis Kalavasis, Amin Karbasi, Andreas Krause, Vahab Mirrokni, Grigoris Velegkas
Similarly, for stochastic linear bandits (with finitely and infinitely many arms) we develop replicable policies that achieve the best-known problem-independent regret bounds with an optimal dependency on the replicability parameter.
no code implementations • 13 Jul 2022 • Alessandro Epasto, Hossein Esfandiari, Vahab Mirrokni, Andres Munoz Medina
When working with user data, providing well-defined privacy guarantees is paramount.
1 code implementation • 20 May 2022 • Mehran Kazemi, Anton Tsitsulin, Hossein Esfandiari, Mohammadhossein Bateni, Deepak Ramachandran, Bryan Perozzi, Vahab Mirrokni
Representative Selection (RS) is the problem of finding a small subset of exemplars that is representative of a given dataset.
no code implementations • 11 Apr 2022 • Vincent Cohen-Addad, Hossein Esfandiari, Vahab Mirrokni, Shyam Narayanan
Motivated by data analysis and machine learning applications, we consider the popular high-dimensional Euclidean $k$-median and $k$-means problems.
no code implementations • 22 Oct 2021 • Hossein Esfandiari, Vahab Mirrokni, Shyam Narayanan
In particular, we provide a nearly optimal trade-off between the number of users and the number of samples per user required for private mean estimation, even when the number of users is as low as $O(\frac{1}{\varepsilon}\log\frac{1}{\delta})$.
no code implementations • 5 Oct 2021 • Hossein Esfandiari, Vahab Mirrokni, Umar Syed, Sergei Vassilvitskii
We present new mechanisms for \emph{label differential privacy}, a relaxation of differentially private machine learning that only protects the privacy of the labels in the training set.
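The standard baseline for label differential privacy is randomized response applied to the labels only; a minimal sketch of that baseline (not the new mechanisms of the paper) is:

```python
import math
import random

def randomized_response_label(true_label, num_classes, epsilon):
    """Classic randomized response over labels: keep the true label with
    probability e^eps / (e^eps + k - 1); otherwise output a uniformly
    random *other* label. This satisfies eps-label differential privacy,
    since any two labels produce each output with probability ratio
    at most e^eps. Function name and interface are illustrative."""
    p_keep = math.exp(epsilon) / (math.exp(epsilon) + num_classes - 1)
    if random.random() < p_keep:
        return true_label
    others = [c for c in range(num_classes) if c != true_label]
    return random.choice(others)
```

At large `epsilon` the mechanism almost always returns the true label; at `epsilon = 0` the output is uniform over all classes, which is the usual privacy-utility trade-off that improved label-DP mechanisms aim to beat.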
no code implementations • 5 Jul 2021 • Lin Chen, Hossein Esfandiari, Gang Fu, Vahab S. Mirrokni, Qian Yu
First, we show that it is not possible to provide an $n^{1/\log\log n}$-approximation algorithm for this problem unless the Exponential Time Hypothesis fails.
no code implementations • 1 Jul 2021 • Hossein Esfandiari, Vahab Mirrokni, Shyam Narayanan
Next, we study the $k$-means problem in this context and provide an $O(k \log k)$-approximation algorithm for explainable $k$-means, improving over the $O(k^2)$ bound of Dasgupta et al. and the $O(d k \log k)$ bound of Laber et al.
1 code implementation • NeurIPS 2020 • Joey Huchette, Haihao Lu, Hossein Esfandiari, Vahab Mirrokni
Moreover, we show that this MIP formulation is ideal (i.e., the strongest possible formulation) for the revenue function of a single impression.
1 code implementation • 20 Feb 2020 • Joey Huchette, Haihao Lu, Hossein Esfandiari, Vahab Mirrokni
Moreover, we show that this MIP formulation is ideal (i.e., the strongest possible formulation) for the revenue function of a single impression.
no code implementations • 9 Nov 2019 • Hossein Esfandiari, Amin Karbasi, Vahab Mirrokni
We propose an efficient semi-adaptive policy that, with $O(\log n \times \log k)$ adaptive rounds of observations, can achieve an almost tight $1-1/e-\epsilon$ approximation guarantee with respect to an optimal policy that carries out $k$ actions in a fully sequential manner.
no code implementations • NeurIPS 2019 • Lin Chen, Hossein Esfandiari, Thomas Fu, Vahab S. Mirrokni
In this paper, we aim to develop LSH schemes for distance functions that measure the distance between two probability distributions, particularly for f-divergences as well as a generalization to capture mutual information loss.
no code implementations • 11 Oct 2019 • Hossein Esfandiari, Amin Karbasi, Abbas Mehrabian, Vahab Mirrokni
We present simple and efficient algorithms for the batched stochastic multi-armed bandit and batched stochastic linear bandit problems.
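A common template for batched bandit algorithms is successive elimination run in rounds: in each batch every surviving arm is pulled equally often, and arms whose empirical mean falls a confidence radius below the leader are dropped. The sketch below is illustrative of that template only (Bernoulli rewards, an arbitrary confidence constant), not the paper's algorithms or regret bounds:

```python
import math
import random

def batched_successive_elimination(means, num_batches=4, pulls_per_arm=200, seed=0):
    """Batched successive elimination for stochastic multi-armed bandits
    (illustrative sketch). `means` gives each arm's Bernoulli reward
    probability; only `num_batches` rounds of adaptivity are used."""
    rng = random.Random(seed)
    k = len(means)
    active = list(range(k))
    sums = [0.0] * k
    counts = [0] * k
    for _ in range(num_batches):
        # Pull every surviving arm the same number of times in this batch.
        for arm in active:
            for _ in range(pulls_per_arm):
                sums[arm] += 1.0 if rng.random() < means[arm] else 0.0
                counts[arm] += 1
        est = {a: sums[a] / counts[a] for a in active}
        # Confidence radius with an arbitrary illustrative constant.
        radius = {a: math.sqrt(2 * math.log(1000) / counts[a]) for a in active}
        best = max(est[a] for a in active)
        # Eliminate arms that are clearly suboptimal given the radius.
        active = [a for a in active if best - est[a] <= 2 * radius[a]]
    return active
```

The point of the batched setting is that all pulls within a batch are decided before any of their rewards are observed, so the policy uses only a small number of adaptive rounds.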
1 code implementation • 10 May 2019 • Dean Eckles, Hossein Esfandiari, Elchanan Mossel, M. Amin Rahimian
We study the task of selecting $k$ nodes, in a social network of size $n$, to seed a diffusion with maximum expected spread size, under the independent cascade model with cascade probability $p$.
Social and Information Networks · Computational Complexity · Probability · Physics and Society
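The standard baseline for this seeding task is the Kempe-Kleinberg-Tardos greedy algorithm: estimate expected spread by Monte Carlo simulation of the independent cascade, and repeatedly add the node with the largest marginal gain. A minimal sketch of that baseline (not the paper's analysis) follows; the adjacency-dict interface is an assumption for illustration:

```python
import random

def simulate_spread(graph, seeds, p, rng):
    """One Monte Carlo run of the independent cascade model: each newly
    activated node activates each out-neighbor independently w.p. p."""
    active = set(seeds)
    frontier = list(seeds)
    while frontier:
        nxt = []
        for u in frontier:
            for v in graph.get(u, []):
                if v not in active and rng.random() < p:
                    active.add(v)
                    nxt.append(v)
        frontier = nxt
    return len(active)

def greedy_seeds(graph, k, p, sims=200, seed=0):
    """Greedy seed selection: add the node with the largest estimated
    expected spread given the seeds chosen so far."""
    rng = random.Random(seed)
    seeds = []
    nodes = set(graph) | {v for vs in graph.values() for v in vs}
    for _ in range(k):
        best, best_gain = None, -1.0
        for v in nodes - set(seeds):
            gain = sum(simulate_spread(graph, seeds + [v], p, rng)
                       for _ in range(sims)) / sims
            if gain > best_gain:
                best, best_gain = v, gain
        seeds.append(best)
    return seeds
```

Because expected spread is monotone submodular, this greedy rule gives the classic $1-1/e$ approximation to the optimal seed set (up to Monte Carlo estimation error).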
no code implementations • 30 Apr 2019 • Mohammadhossein Bateni, Lin Chen, Hossein Esfandiari, Thomas Fu, Vahab S. Mirrokni, Afshin Rostamizadeh
To achieve this, we introduce a novel re-parametrization of the mutual information objective, which we prove is submodular, and design a data structure to query the submodular function in amortized $O(\log n )$ time (where $n$ is the input vocabulary size).
no code implementations • 30 Jan 2019 • Hossein Esfandiari, Mohammadtaghi Hajiaghayi, Brendan Lucier, Michael Mitzenmacher
We consider online variations of the Pandora's box problem (Weitzman, 1979).
no code implementations • ICML 2018 • Hossein Esfandiari, Silvio Lattanzi, Vahab Mirrokni
The $k$-core decomposition is a fundamental primitive in many machine learning and data mining applications.
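The $k$-core decomposition can be computed by the textbook peeling algorithm: repeatedly remove a minimum-degree vertex, and record each vertex's core number as the largest minimum degree seen up to its removal. The sketch below is this sequential baseline, not the paper's variant:

```python
from collections import defaultdict

def core_numbers(edges):
    """Peeling algorithm for k-core decomposition of an undirected graph
    given as a list of (u, v) edges. Returns {vertex: core number}."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    deg = {v: len(ns) for v, ns in adj.items()}
    core = {}
    remaining = set(adj)
    current = 0
    while remaining:
        v = min(remaining, key=lambda x: deg[x])  # minimum-degree vertex
        current = max(current, deg[v])            # core numbers are monotone
        core[v] = current
        remaining.remove(v)
        for u in adj[v]:
            if u in remaining:
                deg[u] -= 1
    return core
```

On a triangle with a pendant vertex attached, the pendant gets core number 1 and the triangle vertices get core number 2, since each triangle vertex has degree 2 within the remaining subgraph after the pendant is peeled.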
no code implementations • NeurIPS 2016 • Hossein Esfandiari, Nitish Korula, Vahab Mirrokni
In particular, in online advertising it is fairly common to optimize multiple metrics, such as clicks, conversions, and impressions, as well as other, largely uncorrelated metrics such as ‘share of voice’ and ‘buyer surplus’.