no code implementations • 29 Jun 2023 • Oren Mangoubi, Nisheeth K. Vishnoi
We present and analyze a complex variant of the Gaussian mechanism and show that the Frobenius norm of the difference between the matrix output by this mechanism and the best rank-$k$ approximation to $M$ is bounded by roughly $\tilde{O}(\sqrt{kd})$, whenever there is an appropriately large gap between the $k$-th and the $(k+1)$-th eigenvalues of $M$.
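The perturb-then-project idea behind this result can be sketched as follows. This is a minimal illustration only, not the paper's exact mechanism: the noise scale `sigma` is a hypothetical placeholder that would in practice be calibrated to the privacy budget.

```python
import numpy as np

def complex_gaussian_rank_k(M, k, sigma=1.0, rng=None):
    """Perturb a symmetric matrix M with Hermitian-symmetrized complex
    Gaussian noise, then return the best rank-k approximation of the
    noisy matrix (illustrative sketch; sigma is a placeholder)."""
    rng = np.random.default_rng(rng)
    d = M.shape[0]
    # Complex Gaussian noise, symmetrized so the perturbed matrix stays Hermitian.
    N = (rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))) * sigma
    N = (N + N.conj().T) / 2
    H = M + N
    # Best rank-k approximation in Frobenius norm: keep the k largest-magnitude eigenvalues.
    vals, vecs = np.linalg.eigh(H)
    idx = np.argsort(np.abs(vals))[::-1][:k]
    return (vecs[:, idx] * vals[idx]) @ vecs[:, idx].conj().T

M = np.diag([5.0, 4.0, 0.1, 0.05])
Mk = complex_gaussian_rank_k(M, k=2, sigma=0.01, rng=0)
print(np.linalg.matrix_rank(Mk))  # at most 2
```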
no code implementations • 11 Nov 2022 • Oren Mangoubi, Nisheeth K. Vishnoi
These equations allow us to bound the utility as the square-root of a sum-of-squares of perturbations to the eigenvectors, as opposed to a sum of perturbation bounds obtained via Davis-Kahan-type theorems.
no code implementations • 6 Jul 2022 • Oren Mangoubi, Yikai Wu, Satyen Kale, Abhradeep Guha Thakurta, Nisheeth K. Vishnoi
Consider the following optimization problem: Given $n \times n$ matrices $A$ and $\Lambda$, maximize $\langle A, U\Lambda U^*\rangle$ where $U$ varies over the unitary group $\mathrm{U}(n)$.
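For Hermitian $A$ this problem has a classical closed form via von Neumann's trace inequality: the maximum is $\sum_i \lambda_i(A)\,\lambda_i(\Lambda)$ with both eigenvalue sequences sorted in the same order, achieved by a $U$ whose columns are eigenvectors of $A$ paired accordingly. A small numeric check of that fact (not of the paper's algorithm):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
# Random Hermitian A and a fixed real diagonal Lambda with decreasing entries.
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = (B + B.conj().T) / 2
Lam = np.diag([3.0, 2.0, 1.0, 0.0])

# Candidate maximizer: eigenvectors of A, ordered so the largest eigenvalues
# of A pair with the largest diagonal entries of Lambda.
vals, vecs = np.linalg.eigh(A)   # eigh returns eigenvalues in ascending order
U = vecs[:, ::-1]                # descending, matching Lambda's ordering
inner = np.trace(A @ U @ Lam @ U.conj().T).real
print(inner, np.sort(vals)[::-1] @ np.diag(Lam))  # the two values agree
```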
no code implementations • 19 Jun 2022 • Oren Mangoubi, Nisheeth K. Vishnoi
Given a Lipschitz or smooth convex function $\, f:K \to \mathbb{R}$ for a bounded polytope $K \subseteq \mathbb{R}^d$ defined by $m$ inequalities, we consider the problem of sampling from the log-concave distribution $\pi(\theta) \propto e^{-f(\theta)}$ constrained to $K$.
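As a point of reference for the sampling problem, here is the naive baseline the paper improves on: a generic Metropolis random walk that rejects proposals leaving the polytope. This is an illustrative sketch only; the paper's samplers are structured and far faster.

```python
import numpy as np

def metropolis_polytope(f, A, b, x0, steps=5000, eta=0.1, rng=None):
    """Generic Metropolis random walk targeting pi(x) ∝ exp(-f(x)) on
    the polytope {x : Ax <= b}.  Proposals outside the polytope, or
    failing the Metropolis test, are rejected."""
    rng = np.random.default_rng(rng)
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        y = x + eta * rng.standard_normal(x.shape)
        if np.all(A @ y <= b) and rng.random() < np.exp(f(x) - f(y)):
            x = y
    return x

# Example: pi ∝ exp(-||x||^2) on the box [-1, 1]^2.
A = np.vstack([np.eye(2), -np.eye(2)])
b = np.ones(4)
x = metropolis_polytope(lambda z: z @ z, A, b, x0=np.zeros(2), rng=1)
print(x)  # a point inside the box
```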
no code implementations • 7 Nov 2021 • Oren Mangoubi, Nisheeth K. Vishnoi
For a $d$-dimensional log-concave distribution $\pi(\theta) \propto e^{-f(\theta)}$ constrained to a convex body $K$, the problem of outputting samples from a distribution $\nu$ which is $\varepsilon$-close in infinity-distance $\sup_{\theta \in K} |\log \frac{\nu(\theta)}{\pi(\theta)}|$ to $\pi$ arises in differentially private optimization.
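The infinity-distance is a pointwise (and hence very strong) closeness notion. For two discrete distributions it is simple to compute directly, as in this small sketch:

```python
import numpy as np

def infinity_distance(nu, pi):
    """sup over the support of |log(nu/pi)| for two discrete
    distributions with the same (full) support."""
    nu, pi = np.asarray(nu, float), np.asarray(pi, float)
    return np.max(np.abs(np.log(nu / pi)))

nu = np.array([0.5, 0.3, 0.2])
pi = np.array([0.4, 0.4, 0.2])
# The largest log-ratio is |log(0.3/0.4)| = log(4/3).
print(infinity_distance(nu, pi))
```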
1 code implementation • 16 Apr 2021 • Shijian Li, Oren Mangoubi, Lijie Xu, Tian Guo
Further, we observe that Sync-Switch achieves 3.8% higher converged accuracy with just 1.23X the training time compared to training with ASP.

no code implementations • 28 Sep 2020 • Oren Mangoubi, Sushant Sachdeva, Nisheeth K. Vishnoi
We present a first-order algorithm for nonconvex-nonconcave min-max optimization problems such as those that arise in training GANs.
no code implementations • 22 Jun 2020 • Oren Mangoubi, Nisheeth K. Vishnoi
We propose an optimization model, the $\varepsilon$-greedy adversarial equilibrium, and show that it can serve as a computationally tractable alternative to the min-max optimization model.
2 code implementations • 22 Jun 2020 • Vijay Keswani, Oren Mangoubi, Sushant Sachdeva, Nisheeth K. Vishnoi
The equilibrium point found by our algorithm depends on the proposal distribution, and when applying our algorithm to train GANs we choose the proposal distribution to be a distribution of stochastic gradients.
no code implementations • 5 May 2019 • Oren Mangoubi, Nisheeth K. Vishnoi
We achieve this improvement by a novel method of computing polytope membership, where one avoids checking inequalities estimated to have a very low probability of being violated.
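The intuition can be sketched with an early-exit membership test: check the inequalities in an order that puts the likely-violated ones first, so a violation is usually found after inspecting only a few rows. The `order` heuristic here is a hypothetical placeholder; the paper's method for deciding which inequalities can be skipped is more refined.

```python
import numpy as np

def membership_early_exit(A, b, x, order):
    """Check Ax <= b row by row in the given order, stopping at the first
    violated inequality.  'order' would rank rows by estimated violation
    probability (hypothetical heuristic)."""
    for i in order:
        if A[i] @ x > b[i]:
            return False, i   # violated constraint found early
    return True, None

A = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, -1.0]])
b = np.array([1.0, 1.0, 0.5])
# x = (2, 0) violates the first inequality, so only one row is checked.
print(membership_early_exit(A, b, np.array([2.0, 0.0]), order=[0, 1, 2]))
```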
no code implementations • 22 Feb 2019 • Oren Mangoubi, Nisheeth K. Vishnoi
Langevin Markov chain algorithms are widely deployed methods for sampling from distributions arising in challenging high-dimensional and non-convex statistics and machine learning applications.
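The simplest member of this family is the unadjusted Langevin algorithm, a minimal sketch of which is below (targeting $\pi(x) \propto e^{-f(x)}$; the step size `eta` is an arbitrary illustrative choice):

```python
import numpy as np

def ula(grad_f, x0, eta=0.01, steps=2000, rng=None):
    """Unadjusted Langevin algorithm:
    x <- x - eta * grad f(x) + sqrt(2*eta) * N(0, I),
    which approximately targets pi(x) ∝ exp(-f(x))."""
    rng = np.random.default_rng(rng)
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = x - eta * grad_f(x) + np.sqrt(2 * eta) * rng.standard_normal(x.shape)
    return x

# f(x) = ||x||^2 / 2, i.e. a standard Gaussian target; grad f(x) = x.
sample = ula(lambda x: x, np.zeros(3), rng=0)
print(sample.shape)
```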
1 code implementation • NeurIPS 2019 • Holden Lee, Oren Mangoubi, Nisheeth K. Vishnoi
Given a sequence of convex functions $f_0, f_1, \ldots, f_T$, we study the problem of sampling from the Gibbs distribution $\pi_t \propto e^{-\sum_{k=0}^tf_k}$ for each epoch $t$ in an online manner.
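One naive way to make the online setting concrete: at each epoch, run Langevin steps on the cumulative potential, warm-starting from the previous epoch's sample. This is an illustrative baseline only, not the paper's algorithm, and the step counts are arbitrary.

```python
import numpy as np

def online_gibbs(grad_fs, x0, eta=0.01, steps_per_epoch=500, rng=None):
    """At each epoch t, approximately sample from pi_t ∝ exp(-sum_{k<=t} f_k)
    by running Langevin steps on the cumulative potential, warm-started
    from the previous epoch's sample (illustrative sketch)."""
    rng = np.random.default_rng(rng)
    x = np.asarray(x0, dtype=float)
    samples, grads_so_far = [], []
    for grad_f in grad_fs:
        grads_so_far.append(grad_f)
        for _ in range(steps_per_epoch):
            g = sum(gf(x) for gf in grads_so_far)   # gradient of the running sum
            x = x - eta * g + np.sqrt(2 * eta) * rng.standard_normal(x.shape)
        samples.append(x.copy())
    return samples

# Three quadratic epochs f_k(x) = ||x - c_k||^2 / 2, with grad f_k(x) = x - c_k.
centers = [np.zeros(2), np.ones(2), -np.ones(2)]
out = online_gibbs([lambda x, c=c: x - c for c in centers], np.zeros(2), rng=0)
print(len(out))  # one sample per epoch
```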
1 code implementation • 9 Aug 2018 • Oren Mangoubi, Natesh S. Pillai, Aaron Smith
In this paper, we investigate a different scaling question: does HMC beat RWM for highly $\textit{multimodal}$ targets?
no code implementations • NeurIPS 2018 • Oren Mangoubi, Nisheeth K. Vishnoi
Hamiltonian Monte Carlo (HMC) is a widely deployed method to sample from high-dimensional distributions in statistics and machine learning.
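For reference, a single HMC step consists of a fresh momentum draw, a leapfrog simulation of the Hamiltonian dynamics, and a Metropolis accept/reject correction. A minimal sketch (step size and path length are arbitrary illustrative choices):

```python
import numpy as np

def hmc_step(U, grad_U, x, eta=0.1, L=10, rng=None):
    """One HMC step targeting pi(x) ∝ exp(-U(x)): draw a momentum,
    run L leapfrog steps of size eta, then Metropolis accept/reject."""
    rng = np.random.default_rng(rng)
    p = rng.standard_normal(x.shape)
    # Leapfrog: half momentum step, alternating full position/momentum steps,
    # final half momentum step.
    x_new, p_new = x.copy(), p - 0.5 * eta * grad_U(x)
    for i in range(L):
        x_new = x_new + eta * p_new
        if i < L - 1:
            p_new = p_new - eta * grad_U(x_new)
    p_new = p_new - 0.5 * eta * grad_U(x_new)
    # Accept with probability exp(H_old - H_new), H(x, p) = U(x) + ||p||^2 / 2.
    h_old = U(x) + 0.5 * p @ p
    h_new = U(x_new) + 0.5 * p_new @ p_new
    return x_new if rng.random() < np.exp(h_old - h_new) else x

# Standard Gaussian target: U(x) = ||x||^2 / 2, grad U(x) = x.
x = np.ones(3)
for _ in range(100):
    x = hmc_step(lambda z: 0.5 * z @ z, lambda z: z, x)
print(x.shape)
```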
no code implementations • 7 Nov 2017 • Oren Mangoubi, Nisheeth K. Vishnoi
In this paper we study the more general case when the noise has magnitude $\alpha F(x) + \beta$ for some $\alpha, \beta > 0$, and present a polynomial time algorithm that finds an approximate minimizer of $F$ for this noise model.
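To make the noise model concrete: each query of $F$ at $x$ returns $F(x)$ plus an adversarial or random error of magnitude at most $\alpha F(x) + \beta$. The naive baseline below averages repeated noisy queries per candidate point; it only illustrates the noise model, not the paper's polynomial-time algorithm, and the grid and repetition count are arbitrary.

```python
import numpy as np

def noisy_argmin(F_noisy, candidates, reps=200, rng=None):
    """Naive baseline: average 'reps' noisy queries per candidate point
    and return the candidate with the smallest average."""
    rng = np.random.default_rng(rng)
    means = [np.mean([F_noisy(x, rng) for _ in range(reps)]) for x in candidates]
    return candidates[int(np.argmin(means))]

# Noise model: observe F(x) + xi with |xi| <= alpha*F(x) + beta (uniform here).
alpha, beta = 0.1, 0.05
F = lambda x: (x - 1.0) ** 2 + 1.0          # true objective, minimizer at x = 1
def F_noisy(x, rng):
    scale = alpha * F(x) + beta
    return F(x) + rng.uniform(-scale, scale)

grid = np.linspace(-2, 4, 61)
best = noisy_argmin(F_noisy, grid, rng=0)
print(best)  # a grid point near the true minimizer x = 1
```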