1 code implementation • 24 May 2023 • Song Liu, Jiahao Yu, Jack Simons, Mingxuan Yi, Mark Beaumont
Wasserstein Gradient Flow can move particles along a path that minimizes the $f$-divergence between the target and particle distributions.
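The idea of moving particles along such a flow can be illustrated with a minimal sketch. This is not the paper's method: it assumes a standard-normal target (whose score is known analytically) and uses a crude Gaussian fit to the particles as a stand-in for estimating the particle distribution's score; the velocity field ∇log p − ∇log q is the Wasserstein gradient flow of the KL divergence, the simplest member of the f-divergence family.

```python
import numpy as np

rng = np.random.default_rng(0)

# Target: standard normal N(0, 1), whose score is -x (assumption for the sketch).
def target_score(x):
    return -x

# Particles start far from the target.
x = rng.normal(loc=5.0, scale=1.0, size=1000)

step = 0.1
for _ in range(200):
    # Crude Gaussian fit to the current particles gives an estimated
    # particle-distribution score -(x - m) / v (stand-in for a learned estimator).
    m, v = x.mean(), x.var()
    particle_score = -(x - m) / v
    # Velocity field of the KL gradient flow: grad log p - grad log q.
    x = x + step * (target_score(x) - particle_score)

# The particle cloud should now roughly match the N(0, 1) target.
print(x.mean(), x.std())
```

In practice the particle score is unknown and must be estimated from samples, which is where the learning problem in this line of work enters.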
no code implementations • 12 Oct 2022 • Daniel Ward, Patrick Cannon, Mark Beaumont, Matteo Fasiolo, Sebastian M Schmon
In this work we revisit neural posterior estimation (NPE), a class of algorithms that enable black-box parameter inference in simulation models, and consider the implications of a simulation-to-reality gap.
1 code implementation • 10 Oct 2022 • Louis Sharrock, Jack Simons, Song Liu, Mark Beaumont
We embed the model into a sequential training procedure, which guides simulations using the current approximation of the posterior at the observation of interest, thereby reducing the simulation cost.
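The sequential idea — spend each round's simulation budget where the current posterior approximation puts mass near the observation of interest — can be sketched as follows. This is a toy stand-in, not the paper's algorithm: the simulator, the rejection step (used here in place of training a neural approximation), and all thresholds are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy simulator: theta -> noisy observation (assumption, not the paper's model).
def simulate(theta):
    return theta + rng.normal(scale=0.5, size=np.shape(theta))

x_obs = 2.0

# Round 0 proposes parameters from a broad prior.
proposal_mean, proposal_std = 0.0, 5.0

for _ in range(4):
    theta = rng.normal(proposal_mean, proposal_std, size=2000)
    x = simulate(theta)
    # Crude "posterior approximation": keep parameters whose simulations
    # land near x_obs (stand-in for fitting a conditional density estimator).
    keep = theta[np.abs(x - x_obs) < 0.5]
    # The next round simulates from the current approximation, concentrating
    # the simulation budget near the observation of interest.
    proposal_mean, proposal_std = keep.mean(), keep.std()

print(proposal_mean, proposal_std)
```

Each round narrows the proposal around parameter values consistent with the observation, which is what reduces the overall simulation cost relative to sampling from the prior throughout.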
no code implementations • AABI Symposium 2022 • Jack Simons, Song Liu, Mark Beaumont
In many scientific applications, we do not have explicit access to the likelihood function.