no code implementations • 19 Jun 2022 • Peter Hayes, Mingtian Zhang, Raza Habib, Jordan Burgess, Emine Yilmaz, David Barber
We introduce a label model that learns to aggregate weak supervision sources differently for different datapoints and takes the performance of the end-model into account during training.
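No code accompanies this entry; the sketch below is a purely illustrative toy of per-datapoint aggregation of weak supervision sources, not the paper's method. All names and the toy data are assumptions, and a fixed random linear map stands in for the learned label model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (hypothetical): m weak supervision sources vote on n datapoints
# with k classes; votes[i, j] is source j's class vote for datapoint i.
n, m, k, d = 100, 5, 3, 8
features = rng.normal(size=(n, d))        # datapoint features
votes = rng.integers(0, k, size=(n, m))   # weak source votes (class indices)

# Per-datapoint source weights: a fixed random linear map from features to
# weights stands in here for whatever the label model would actually learn.
W = rng.normal(size=(d, m))
logits = features @ W
weights = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)  # (n, m)

# Aggregate: weighted vote counts per class, normalised to soft labels that
# an end-model could then be trained on.
counts = np.zeros((n, k))
for j in range(m):
    counts[np.arange(n), votes[:, j]] += weights[:, j]
soft_labels = counts / counts.sum(axis=1, keepdims=True)
hard_labels = soft_labels.argmax(axis=1)
```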
no code implementations • 24 Sep 2021 • Emine Yilmaz, Peter Hayes, Raza Habib, Jordan Burgess, David Barber
Labelling data is a major practical bottleneck in training and testing classifiers.
no code implementations • ICLR 2020 • Raza Habib, Soroosh Mariooryad, Matt Shannon, Eric Battenberg, RJ Skerry-Ryan, Daisy Stanton, David Kao, Tom Bagby
We present a novel generative model that combines state-of-the-art neural text-to-speech (TTS) with semi-supervised probabilistic latent variable models.
no code implementations • 27 Jul 2019 • Mingtian Zhang, Thomas Bird, Raza Habib, Tianlin Xu, David Barber
Probabilistic models are often trained by maximum likelihood, which corresponds to minimizing a specific f-divergence between the model and data distribution.
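For context, a standard one-line derivation (general background, not specific to this paper): with data distribution $p_{\textrm{data}}$ and model $p_\theta$, maximising the expected log-likelihood $\mathbb{E}_{p_{\textrm{data}}}[\log p_\theta(x)]$ is equivalent to minimising $\textrm{KL}(p_{\textrm{data}}\,||\,p_\theta) = \mathbb{E}_{p_{\textrm{data}}}[\log p_{\textrm{data}}(x) - \log p_\theta(x)]$, since the entropy term does not depend on $\theta$; this KL divergence is the $f$-divergence with $f(t) = t\log t$.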
1 code implementation • ICLR 2019 • Raza Habib, David Barber
We introduce Auxiliary Variational MCMC, a novel framework for learning MCMC kernels that combines recent advances in variational inference with insights drawn from traditional auxiliary variable MCMC methods such as Hamiltonian Monte Carlo.
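As a brief reminder of the auxiliary variable idea (standard background, not the paper's specific construction): to sample from a target $\pi(x)$, one augments it to a joint $\pi(x)\,q(a|x)$ over an auxiliary variable $a$, runs MCMC on the joint, and discards $a$; the $x$-marginal of the joint is still $\pi(x)$. Hamiltonian Monte Carlo is the special case where $a$ is a momentum with $q(a|x) = \mathcal{N}(a; 0, M)$ and proposals follow simulated Hamiltonian dynamics.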
no code implementations • 21 Nov 2018 • Mingtian Zhang, Peter Hayes, Tom Bird, Raza Habib, David Barber
For distributions $\mathbb{P}$ and $\mathbb{Q}$ with different supports or undefined densities, the divergence $\textrm{D}(\mathbb{P}||\mathbb{Q})$ may not exist.
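A standard example (not from the paper): for $\mathbb{P} = \textrm{Uniform}[0,1]$ and $\mathbb{Q} = \textrm{Uniform}[2,3]$, the supports are disjoint, $\mathbb{P}$ is not absolutely continuous with respect to $\mathbb{Q}$, and $\textrm{KL}(\mathbb{P}||\mathbb{Q})$ is $+\infty$; similarly, if $\mathbb{P}$ is an empirical (discrete) distribution and $\mathbb{Q}$ has a Lebesgue density, the density ratio $\mathrm{d}\mathbb{P}/\mathrm{d}\mathbb{Q}$ is undefined.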
no code implementations • 27 Sep 2018 • Mingtian Zhang, Thomas Bird, Raza Habib, Tianlin Xu, David Barber
Probabilistic models are often trained by maximum likelihood, which corresponds to minimizing a specific form of f-divergence between the model and data distribution.