1 code implementation • 13 Oct 2022 • Xi Wang, Tomas Geffner, Justin Domke
Black-box variational inference performance is sometimes hindered by the use of gradient estimators with high variance.
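As a rough illustration of the variance gap at issue here (a generic sketch, not the paper's method), the snippet below compares the classic score-function and reparameterization estimators of the same gradient; both are unbiased, but their per-sample variances differ sharply:

```python
# Sketch: variance of two unbiased estimators of grad_mu E_{z~N(mu,1)}[f(z)]
# with f(z) = z**2, whose true value is 2*mu. Purely illustrative.
import numpy as np

rng = np.random.default_rng(0)
mu, n = 1.5, 10_000
z = mu + rng.standard_normal(n)   # reparameterized samples from N(mu, 1)

f = z**2
score_grads = f * (z - mu)        # score-function (REINFORCE): f(z) * d/dmu log q(z; mu)
reparam_grads = 2 * z             # reparameterization: d/dmu f(mu + eps) = f'(z)

for name, g in [("score", score_grads), ("reparam", reparam_grads)]:
    print(f"{name}: mean={g.mean():.3f}, std={g.std():.3f}")
# Both means are near 2*mu = 3.0, but the score-function estimator's
# standard deviation is far larger.
```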
1 code implementation • 28 Sep 2022 • Tomas Geffner, George Papamakarios, Andriy Mnih
Neural Posterior Estimation methods for simulation-based inference can be ill-suited for dealing with posterior distributions obtained by conditioning on multiple observations, as they tend to require a large number of simulator calls to learn accurate approximations.
no code implementations • 16 Aug 2022 • Tomas Geffner, Justin Domke
Using our formulation, we propose a new method that combines the strengths of previously existing algorithms: it uses underdamped Langevin transitions and powerful augmentations parameterized by a score network.
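For context, the sketch below shows one Euler-Maruyama step of underdamped Langevin dynamics; it is a generic illustration of this kind of transition, not the paper's integrator or its score-network augmentation:

```python
# Sketch of underdamped Langevin dynamics:
#   dx = v dt,  dv = (grad log p(x) - gamma*v) dt + sqrt(2*gamma) dW
import numpy as np

rng = np.random.default_rng(0)

def underdamped_langevin_step(x, v, grad_log_p, step=1e-2, gamma=1.0):
    """One Euler-Maruyama step of the underdamped Langevin SDE above."""
    v = v + step * (grad_log_p(x) - gamma * v) \
        + np.sqrt(2 * gamma * step) * rng.standard_normal(x.shape)
    x = x + step * v  # move the position with the refreshed velocity
    return x, v

# Example: the chain targets N(0, I), whose score is grad log p(x) = -x.
x, v = np.ones(2), np.zeros(2)
for _ in range(2000):
    x, v = underdamped_langevin_step(x, v, lambda x: -x)
```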
no code implementations • 8 Mar 2022 • Tomas Geffner, Justin Domke
Hierarchical models represent a challenging setting for inference algorithms.
1 code implementation • 4 Feb 2022 • Tomas Geffner, Javier Antoran, Adam Foster, Wenbo Gong, Chao Ma, Emre Kiciman, Amit Sharma, Angus Lamb, Martin Kukla, Nick Pawlowski, Miltiadis Allamanis, Cheng Zhang
Causal inference is essential for data-driven decision making across domains such as business engagement, medical treatment and policy making.
no code implementations • 29 Sep 2021 • Tomas Geffner, Emre Kiciman, Angus Lamb, Martin Kukla, Miltiadis Allamanis, Cheng Zhang
Current causal discovery methods either fail to scale, model only limited forms of functional relationships, or cannot handle missing values.
no code implementations • NeurIPS 2021 • Tomas Geffner, Justin Domke
Given an unnormalized target distribution, we want to obtain approximate samples from it and a tight lower bound on its log normalization constant log Z. Annealed Importance Sampling (AIS) with Hamiltonian MCMC is a powerful method for doing this.
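A minimal AIS sketch in this spirit, using simple random-walk Metropolis transitions in place of the Hamiltonian MCMC the paper employs, on an illustrative 1-D target:

```python
# AIS between a N(0,1) base and an unnormalized target exp(-0.5*(x-2)^2),
# whose true log Z is 0.5*log(2*pi). E[log w] lower-bounds log Z by Jensen.
import numpy as np

rng = np.random.default_rng(0)
log_f = lambda x: -0.5 * (x - 2.0) ** 2                    # unnormalized target
log_p0 = lambda x: -0.5 * x**2 - 0.5 * np.log(2 * np.pi)   # normalized base

betas = np.linspace(0.0, 1.0, 101)   # annealing schedule from base to target
n = 2000
x = rng.standard_normal(n)           # initial samples from the base
logw = np.zeros(n)
for b_prev, b in zip(betas[:-1], betas[1:]):
    log_gamma = lambda x, b=b: (1 - b) * log_p0(x) + b * log_f(x)
    # accumulate the incremental importance weight at the new temperature
    logw += log_gamma(x) - ((1 - b_prev) * log_p0(x) + b_prev * log_f(x))
    # one random-walk Metropolis step targeting the current annealed density
    prop = x + 0.5 * rng.standard_normal(n)
    accept = np.log(rng.random(n)) < log_gamma(prop) - log_gamma(x)
    x = np.where(accept, prop, x)

print("E[log w] (lower bound):", logw.mean())
print("true log Z:", 0.5 * np.log(2 * np.pi))
```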
no code implementations • AABI Symposium 2021 • Tomas Geffner, Justin Domke
In this paper we empirically evaluate biased methods for alpha-divergence minimization.
no code implementations • 19 Oct 2020 • Tomas Geffner, Justin Domke
In this work we study unbiased methods for alpha-divergence minimization through the Signal-to-Noise Ratio (SNR) of the gradient estimator.
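As a rough illustration of this diagnostic (a generic sketch, not the paper's code), the per-parameter SNR can be estimated as |mean| / std over repeated gradient draws:

```python
# Sketch: estimate the per-parameter Signal-to-Noise Ratio of a stochastic
# gradient estimator from repeated samples.
import numpy as np

def gradient_snr(grad_fn, n_samples=1000):
    """Return |E[g]| / std(g) per parameter, estimated from n_samples draws."""
    grads = np.stack([grad_fn() for _ in range(n_samples)])  # (n_samples, dim)
    return np.abs(grads.mean(axis=0)) / grads.std(axis=0)

# Example: a noisy estimator of the gradient [1.0, 0.1] with unit noise.
rng = np.random.default_rng(0)
noisy_grad = lambda: np.array([1.0, 0.1]) + rng.standard_normal(2)
print(gradient_snr(noisy_grad))
# SNR near [1.0, 0.1]: the second coordinate's signal is drowned by noise,
# so individual gradient estimates for it carry little information.
```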
1 code implementation • NeurIPS 2020 • Tomas Geffner, Justin Domke
Flexible variational distributions improve variational inference but are harder to optimize.
no code implementations • 5 Nov 2019 • Tomas Geffner, Justin Domke
We propose a technique to automatically select a gradient estimator when a finite pool of candidate estimators is given.
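One natural instantiation of such a rule, shown here purely as a hedged sketch rather than the paper's actual criterion, probes each candidate's empirical variance and weighs it against its per-call cost:

```python
# Sketch: pick the estimator from a finite pool that minimizes
# (total gradient variance) * (cost per call). Names are illustrative.
import numpy as np

def select_estimator(estimators, costs, n_probe=100):
    """estimators: callables returning a gradient sample; costs: per-call cost."""
    scores = []
    for est, cost in zip(estimators, costs):
        grads = np.stack([est() for _ in range(n_probe)])
        scores.append(grads.var(axis=0).sum() * cost)
    return int(np.argmin(scores))

# Example: two unbiased estimators of the same gradient, one accurate but
# expensive, one noisy but cheap; the rule trades variance against cost.
rng = np.random.default_rng(0)
low_var = lambda: np.array([1.0]) + 0.1 * rng.standard_normal(1)
high_var = lambda: np.array([1.0]) + 1.0 * rng.standard_normal(1)
print(select_estimator([low_var, high_var], costs=[10.0, 1.0]))
```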
no code implementations • AABI Symposium 2019 • Tomas Geffner, Justin Domke
We propose a technique to automatically select a gradient estimator when a finite pool of candidate estimators is given.
no code implementations • NeurIPS 2018 • Tomas Geffner, Justin Domke
Variational inference is increasingly being addressed with stochastic optimization.
1 code implementation • 25 Jun 2018 • Tomas Geffner, Hector Geffner
Fully observable non-deterministic (FOND) planning is becoming increasingly important as an approach for computing proper policies in probabilistic planning, extended temporal plans in LTL planning, and general plans in generalized planning.