no code implementations • 3 Jun 2024 • Kyurae Kim, Joohwan Ko, Yi-An Ma, Jacob R. Gardner
For these problems, a popular strategy is to employ SGD with doubly stochastic gradients (doubly SGD): the expectations are estimated using the gradient estimator of each component, while the sum is estimated by subsampling these estimators.
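As a rough sketch (ours, not the paper's), the JAX snippet below forms a doubly stochastic gradient for a toy objective F(lam) = sum_i E_{z ~ N(lam, 1)}[(z - y_i)^2]: the sum over components is subsampled into a minibatch, and each component's expectation is estimated with reparameterized Monte Carlo draws. The objective and all names are illustrative assumptions.

```python
import jax
import jax.numpy as jnp

# Toy doubly stochastic gradient: the objective is a sum over components y_i
# of expectations under z ~ N(lam, 1); both the sum and each expectation are
# estimated stochastically.
y = jnp.linspace(-2.0, 2.0, 100)  # one component per data point (illustrative)

def doubly_stochastic_grad(key, lam, batch_size=10, n_mc=4):
    key_idx, key_eps = jax.random.split(key)
    idx = jax.random.choice(key_idx, y.shape[0], (batch_size,))  # subsample the sum
    eps = jax.random.normal(key_eps, (batch_size, n_mc))         # MC noise per component

    def batch_loss(lam):
        z = lam + eps                          # reparameterized samples z ~ N(lam, 1)
        losses = (z - y[idx][:, None]) ** 2    # f_i(z) at each sample
        return y.shape[0] * jnp.mean(losses)   # rescale the subsampled sum

    return jax.grad(batch_loss)(lam)

key = jax.random.PRNGKey(0)
lam = 0.0
lam = lam - 1e-3 * doubly_stochastic_grad(key, lam)  # one doubly SGD step
```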
1 code implementation • 27 Feb 2024 • Samuel Gruffaz, Kyurae Kim, Alain Oliviero Durmus, Jacob R. Gardner
In practice, MCMC-SAEM is often run with asymptotically biased MCMC, whose consequences are less well understood theoretically.
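As background (our illustration, not the paper's code), here is a minimal MCMC-SAEM loop on an assumed toy model z ~ N(theta, 1), x | z ~ N(z, 1): the intractable E-step is replaced by one Metropolis move, and a stochastic-approximation average of the sufficient statistic feeds a closed-form M-step. A biased sampler would enter through `mcmc_step`.

```python
import jax
import jax.numpy as jnp

x = 1.5  # single observation (illustrative)

def log_post(z, theta):
    # log p(z | x, theta) up to a constant, for z ~ N(theta, 1), x | z ~ N(z, 1)
    return -0.5 * (x - z) ** 2 - 0.5 * (z - theta) ** 2

def mcmc_step(key, z, theta, step=0.5):
    # One random-walk Metropolis move; an unadjusted or coarse kernel here is
    # where asymptotic bias would creep in.
    key_prop, key_acc = jax.random.split(key)
    z_prop = z + step * jax.random.normal(key_prop)
    log_alpha = log_post(z_prop, theta) - log_post(z, theta)
    return jnp.where(jnp.log(jax.random.uniform(key_acc)) < log_alpha, z_prop, z)

key = jax.random.PRNGKey(0)
theta, z, s = 0.0, 0.0, 0.0
for k in range(1, 1001):
    key, sub = jax.random.split(key)
    z = mcmc_step(sub, z, theta)   # E-step: MCMC in place of the exact posterior
    s = s + (1.0 / k) * (z - s)    # stochastic approximation of E[z | x, theta]
    theta = s                      # closed-form M-step: theta = E[z]
print(theta)  # approaches the MLE theta = x = 1.5
```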
no code implementations • 19 Jan 2024 • Joohwan Ko, Kyurae Kim, Woo Chang Kim, Jacob R. Gardner
In fact, recent computational complexity results for BBVI have established that full-rank variational families scale poorly with the dimensionality of the problem compared to, e.g., mean-field families.
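Parameter count alone hints at the gap (a back-of-the-envelope sketch of ours, not the paper's analysis; the cited results concern iteration complexity): a mean-field Gaussian uses O(d) parameters, while a full-rank Gaussian needs a dense Cholesky factor with O(d^2).

```python
def n_params(d, family):
    # Gaussian variational family: d means plus a diagonal scale ("mean_field")
    # or a dense lower-triangular Cholesky factor ("full_rank").
    if family == "mean_field":
        return 2 * d               # O(d)
    return d + d * (d + 1) // 2    # O(d^2)

for d in (10, 100, 1000):
    print(d, n_params(d, "mean_field"), n_params(d, "full_rank"))
```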
no code implementations • 27 Jul 2023 • Kyurae Kim, Yian Ma, Jacob R. Gardner
We prove that black-box variational inference (BBVI) with control variates, particularly the sticking-the-landing (STL) estimator, converges at a geometric (traditionally called "linear") rate under perfect variational family specification.
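A minimal sketch of the STL estimator, assuming a mean-field Gaussian family and a stand-in `log_joint` (both ours, for illustration): the trick is to stop gradients through the variational parameters inside log q, keeping only the path derivative through the sample. When q matches the posterior exactly, this estimate has zero variance at the optimum, which is what makes a geometric rate plausible.

```python
import jax
import jax.numpy as jnp

def log_joint(z):
    return -0.5 * jnp.sum(z ** 2)  # stand-in target: log p(x, z) up to a constant

def log_q(z, lam):
    mu, log_sig = lam
    return jnp.sum(-0.5 * ((z - mu) / jnp.exp(log_sig)) ** 2 - log_sig
                   - 0.5 * jnp.log(2.0 * jnp.pi))

def stl_elbo_hat(lam, eps):
    mu, log_sig = lam
    z = mu + jnp.exp(log_sig) * eps   # reparameterized sample z ~ q_lam
    lam_stop = jax.tree_util.tree_map(jax.lax.stop_gradient, lam)
    # STL: keep the path derivative through z, but stop gradients through
    # the variational parameters inside log q (dropping the score term).
    return log_joint(z) - log_q(z, lam_stop)

lam = (jnp.zeros(2), jnp.zeros(2))    # (mu, log_sig), d = 2
eps = jax.random.normal(jax.random.PRNGKey(0), (2,))
g = jax.grad(stl_elbo_hat)(lam, eps)  # STL gradient estimate of the ELBO
```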
1 code implementation • NeurIPS 2023 • Kaiwen Wu, Kyurae Kim, Roman Garnett, Jacob R. Gardner
A recent development in Bayesian optimization is the use of local optimization strategies, which can deliver strong empirical performance on high-dimensional problems compared to traditional global strategies.
no code implementations • NeurIPS 2023 • Kyurae Kim, Jisu Oh, Kaiwen Wu, Yi-An Ma, Jacob R. Gardner
We provide the first convergence guarantee for full black-box variational inference (BBVI), also known as Monte Carlo variational inference.
no code implementations • 18 Mar 2023 • Kyurae Kim, Kaiwen Wu, Jisu Oh, Jacob R. Gardner
Understanding the gradient variance of black-box variational inference (BBVI) is a crucial step for establishing its convergence and developing algorithmic improvements.
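To make the quantity concrete, the sketch below (ours, with an assumed stand-in target) empirically measures the variance of the plain reparameterization gradient across Monte Carlo draws; this is the object such analyses bound.

```python
import jax
import jax.numpy as jnp

def log_joint(z):
    return -0.5 * jnp.sum(z ** 2)  # stand-in for log p(x, z)

def elbo_hat(lam, eps):
    mu, log_sig = lam
    z = mu + jnp.exp(log_sig) * eps
    return log_joint(z) + jnp.sum(log_sig)  # entropy of q up to a constant

lam = (jnp.ones(5), jnp.zeros(5))
eps = jax.random.normal(jax.random.PRNGKey(1), (1024, 5))
grads = jax.vmap(lambda e: jax.grad(elbo_hat)(lam, e))(eps)  # per-draw gradients
var = jax.tree_util.tree_map(lambda g: jnp.var(g, axis=0), grads)
print(var)  # empirical gradient variance for (mu, log_sig)
```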
no code implementations • 7 Dec 2022 • Kyurae Kim, Simon Maskell, Jason F. Ralph
Imaging methods based on array signal processing often require a fixed propagation speed of the medium, known as the speed of sound (SoS) for methods based on acoustic signals.
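As a hypothetical illustration of where the SoS assumption enters (geometry, rates, and names are made up; this is plain delay-and-sum beamforming, not the paper's method): per-pixel delays are sensor-to-pixel distances divided by the assumed speed c, so a wrong c misfocuses every pixel.

```python
import jax.numpy as jnp

c = 1540.0   # assumed speed of sound in tissue, m/s
fs = 40e6    # sampling rate, Hz
sensors = jnp.stack([jnp.linspace(-0.02, 0.02, 64), jnp.zeros(64)], axis=1)

def delay_and_sum(rf, pixel):
    # rf: (n_sensors, n_samples) received signals; pixel: (2,) position in m.
    dist = jnp.linalg.norm(sensors - pixel, axis=1)        # sensor-to-pixel distances
    idx = jnp.round(dist / c * fs).astype(jnp.int32)       # delays in samples via SoS
    return jnp.sum(rf[jnp.arange(sensors.shape[0]), idx])  # coherent sum at the focus

rf = jnp.zeros((64, 2048))                       # placeholder received signals
val = delay_and_sum(rf, jnp.array([0.0, 0.03]))  # focus at one image point
```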
1 code implementation • 13 Jun 2022 • Kyurae Kim, Jisu Oh, Jacob R. Gardner, Adji Bousso Dieng, HongSeok Kim
Minimizing the inclusive Kullback-Leibler (KL) divergence with stochastic gradient descent (SGD) is challenging since its gradient is defined as an integral over the posterior.
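Concretely, grad_lam KL(p || q_lam) = -E_{p(z|x)}[grad_lam log q_lam(z)], an expectation under the intractable posterior. The sketch below estimates it with self-normalized importance sampling using q as the proposal; this is one standard workaround, not necessarily the paper's method, and the target and all names are assumptions.

```python
import jax
import jax.numpy as jnp

def log_joint(z):
    return -0.5 * jnp.sum((z - 1.0) ** 2)  # stand-in for log p(x, z)

def log_q(z, lam):
    mu, log_sig = lam
    return jnp.sum(-0.5 * ((z - mu) / jnp.exp(log_sig)) ** 2 - log_sig)

def inclusive_kl_grad(key, lam, n=256, d=2):
    mu, log_sig = lam
    z = mu + jnp.exp(log_sig) * jax.random.normal(key, (n, d))  # proposals z ~ q
    log_w = jax.vmap(log_joint)(z) - jax.vmap(log_q, (0, None))(z, lam)
    w = jax.nn.softmax(log_w)                                   # self-normalized weights
    score = jax.vmap(jax.grad(log_q, argnums=1), (0, None))(z, lam)
    # grad KL(p || q) = -E_p[grad_lam log q]; approximate E_p with SNIS.
    return jax.tree_util.tree_map(lambda s: -jnp.tensordot(w, s, axes=1), score)

lam = (jnp.zeros(2), jnp.zeros(2))
g = inclusive_kl_grad(jax.random.PRNGKey(0), lam)  # descend on this to shrink KL(p || q)
```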