Unbiased Estimation using Underdamped Langevin Dynamics

14 Jun 2022 · Hamza Ruzayqat, Neil K. Chada, Ajay Jasra

In this work we consider the unbiased estimation of expectations w.r.t. probability measures that have a non-negative Lebesgue density and are known point-wise up to a normalizing constant. We focus upon developing an unbiased method via the underdamped Langevin dynamics, which has proven popular of late due to applications in statistics and machine learning. Specifically, in continuous time the dynamics can be constructed so that, as time goes to infinity, they admit the probability of interest as a stationary measure. In many cases, time-discretized versions of the underdamped Langevin dynamics are used in practice and are run for only a fixed number of iterations. We develop a novel scheme based upon doubly randomized estimation, as in \cite{ub_grad,disc_model}, which requires access only to time-discretized versions of the dynamics. The proposed scheme aims to remove both the discretization bias and the bias resulting from running the dynamics for a finite number of iterations. We prove, under standard assumptions, that our estimator has finite variance and either has finite expected cost or has finite cost with high probability. To illustrate our theoretical findings, we provide numerical experiments that verify our theory, including challenging examples from Bayesian statistics and statistical physics.
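As a rough illustration of the two ingredients the abstract describes, the sketch below pairs an Euler-type time discretization of the underdamped Langevin dynamics with a single-term doubly randomized estimator over the discretization level and the run length. This is a minimal sketch only, not the authors' method: the potential gradient grad_U, test function phi, friction parameter gamma, step-size schedule 2^{-l}, run lengths base_steps * 2^k, and the geometric randomization probabilities p_l and p_k are all hypothetical placeholders, and the paper's actual scheme, couplings, and randomization weights differ.

    # Minimal sketch (assumed setup, not the paper's implementation) of
    # (i) an Euler-Maruyama discretization of the underdamped Langevin dynamics and
    # (ii) a single-term doubly randomized estimator that debiases, in expectation,
    #      both the discretization error and the finite-run-length error.
    import numpy as np

    def underdamped_step(x, v, grad_U, gamma, h, rng):
        # One Euler-Maruyama step of
        #   dX_t = V_t dt,  dV_t = -(gamma * V_t + grad U(X_t)) dt + sqrt(2 * gamma) dW_t.
        x_new = x + h * v
        v_new = v - h * (gamma * v + grad_U(x)) \
                + np.sqrt(2.0 * gamma * h) * rng.standard_normal(x.shape)
        return x_new, v_new

    def phi_l_k(phi, grad_U, gamma, x0, v0, l, k, base_steps, rng):
        # Biased estimate at discretization level l (step size 2^{-l}),
        # run for base_steps * 2^k iterations.
        h = 2.0 ** (-l)
        x, v = np.copy(x0), np.copy(v0)
        for _ in range(base_steps * 2 ** k):
            x, v = underdamped_step(x, v, grad_U, gamma, h, rng)
        return phi(x)

    def doubly_randomized_estimate(phi, grad_U, gamma, x0, v0, rng,
                                   p_l=0.5, p_k=0.5, base_steps=10):
        # Draw a random level L and run-length index K from geometric distributions
        # (shifted to start at 0) and return the importance-weighted double increment.
        # The increments telescope in both indices, so the expectation equals the
        # (l, k) -> infinity limit; in practice the four terms must be coupled
        # (e.g. common Brownian increments) for finite variance and cost.
        L = rng.geometric(p_l) - 1
        K = rng.geometric(p_k) - 1
        prob = (p_l * (1.0 - p_l) ** L) * (p_k * (1.0 - p_k) ** K)

        def term(l, k):
            if l < 0 or k < 0:
                return 0.0
            return phi_l_k(phi, grad_U, gamma, x0, v0, l, k, base_steps, rng)

        delta = (term(L, K) - term(L - 1, K)) - (term(L, K - 1) - term(L - 1, K - 1))
        return delta / prob

Under these assumptions, a single call returns one unbiased draw of the limiting expectation, and the final estimate is obtained by averaging many independent calls; the randomized cost is what makes the expected or high-probability cost bounds in the abstract relevant.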
