no code implementations • 26 Oct 2023 • Joseph Goodier, Neill D. F. Campbell
We present results that are comparable to state-of-the-art out-of-distribution detection methods based on generative models.
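A minimal sketch of the generic likelihood-based approach to out-of-distribution detection (not this paper's specific method, and with made-up toy data): fit a density model to in-distribution data, then flag inputs whose likelihood falls below a threshold set from the training scores.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "generative model": a 1-D Gaussian fitted to in-distribution data.
train = rng.normal(0.0, 1.0, size=1000)
mu, sd = train.mean(), train.std()

def log_density(x):
    return -0.5 * np.log(2 * np.pi * sd ** 2) - 0.5 * ((x - mu) / sd) ** 2

# Threshold at the 1st percentile of in-distribution log-likelihoods.
threshold = np.quantile(log_density(train), 0.01)

in_dist = bool(log_density(0.1) > threshold)   # typical point: kept
out_dist = bool(log_density(8.0) > threshold)  # far outlier: flagged as OOD
```

Deep generative models replace the fitted Gaussian with a learned density, but the scoring-and-thresholding logic is the same.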
no code implementations • 27 Sep 2023 • Xuanlong Yu, Yi Zuo, Zitao Wang, Xiaowen Zhang, Jiaxuan Zhao, Yuting Yang, Licheng Jiao, Rui Peng, Xinyi Wang, Junpei Zhang, Kexin Zhang, Fang Liu, Roberto Alcover-Couso, Juan C. SanMiguel, Marcos Escudero-Viñolo, Hanlin Tian, Kenta Matsui, Tianhao Wang, Fahmy Adan, Zhitong Gao, Xuming He, Quentin Bouniot, Hossein Moghaddam, Shyam Nandan Rai, Fabio Cermelli, Carlo Masone, Andrea Pilzer, Elisa Ricci, Andrei Bursuc, Arno Solin, Martin Trapp, Rui Li, Angela Yao, Wenlong Chen, Ivor Simpson, Neill D. F. Campbell, Gianni Franchi
This paper outlines the winning solutions employed in addressing the MUAD uncertainty quantification challenge held at ICCV 2023.
no code implementations • 26 Oct 2022 • Margaret Duff, Ivor J. A. Simpson, Matthias J. Ehrhardt, Neill D. F. Campbell
The covariance can model changing uncertainty dependencies caused by structure in the image, such as edges or objects, and provides a new distance metric from the manifold of learned images.
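A toy illustration (hypothetical values, not the paper's learned model) of how a structured covariance induces a Mahalanobis-style distance: correlating two pixels, as might happen along an edge, changes how far a given residual is judged to be from the model.

```python
import numpy as np

mu = np.zeros(4)                      # model reconstruction (stand-in)
x = np.array([1.0, 0.0, 0.0, 0.0])    # observed image (stand-in)

# Diagonal covariance: every pixel treated independently.
sigma_diag = np.eye(4)

# Structured covariance: strong correlation between pixels 0 and 1.
sigma_struct = np.eye(4)
sigma_struct[0, 1] = sigma_struct[1, 0] = 0.9

def mahalanobis(x, mu, sigma):
    d = x - mu
    return float(np.sqrt(d @ np.linalg.solve(sigma, d)))

d_diag = mahalanobis(x, mu, sigma_diag)      # = 1.0
d_struct = mahalanobis(x, mu, sigma_struct)  # larger: the residual violates
                                             # the expected pixel correlation
```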
1 code implementation • 20 Oct 2022 • Cangxiong Chen, Neill D. F. Campbell
As a result, we are able to partially attribute the leakage of the training data in a deep network to its architecture.
no code implementations • CVPR 2022 • Ivor J. A. Simpson, Sara Vicente, Neill D. F. Campbell
Similarly to distillation approaches, our single network is trained to maximise the probability of samples from pre-trained probabilistic models; in this work we use a fixed ensemble of networks.
no code implementations • AABI Symposium 2022 • David Lopes Fernandes, Francisco Vargas, Carl Henrik Ek, Neill D. F. Campbell
We present a variational inference scheme to learn a model that solves the Schrödinger Bridge Problem (SBP).
1 code implementation • 19 Nov 2021 • Cangxiong Chen, Neill D. F. Campbell
Based on this formulation, we are able to attribute the potential leakage of the training data in a deep network to its architecture.
no code implementations • 29 Oct 2021 • Olga Mikheeva, Ieva Kazlauskaite, Adam Hartshorne, Hedvig Kjellström, Carl Henrik Ek, Neill D. F. Campbell
Building on the previous work by Kazlauskaite et al. [2019], we include a separate monotonic warp of the input data to model temporal misalignment.
no code implementations • 22 Jul 2021 • Margaret Duff, Neill D. F. Campbell, Matthias J. Ehrhardt
The success of generative regularisers depends on the quality of the generative model and so we propose a set of desired criteria to assess generative models and guide future research.
1 code implementation • 26 Oct 2020 • Erik Bodin, Zhenwen Dai, Neill D. F. Campbell, Carl Henrik Ek
We present a novel approach to Bayesian inference and general Bayesian computation that is defined through a sequential decision loop.
no code implementations • CVPR 2018 • Michael Firman, Neill D. F. Campbell, Lourdes Agapito, Gabriel J. Brostow
For a single input, we learn to predict a range of possible answers.
1 code implementation • 17 Sep 2019 • Ivan Ustyuzhaninov, Ieva Kazlauskaite, Markus Kaiser, Erik Bodin, Neill D. F. Campbell, Carl Henrik Ek
Similarly, deep Gaussian processes (DGPs) should allow us to compute a posterior distribution of compositions of multiple functions giving rise to the observations.
no code implementations • ICML 2020 • Erik Bodin, Markus Kaiser, Ieva Kazlauskaite, Zhenwen Dai, Neill D. F. Campbell, Carl Henrik Ek
Bayesian optimization (BO) methods often rely on the assumption that the objective function is well-behaved, but in practice, this is seldom true for real-world objectives even if noise-free observations can be collected.
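A toy sketch of the failure mode described here (not the paper's method; the kernel, lengthscale, and step objective are illustrative choices): a stationary GP surrogate, the standard BO assumption, badly mis-models a discontinuous objective near the jump even with noise-free observations.

```python
import numpy as np

def rbf(a, b, ls=0.3):
    """Squared-exponential kernel between 1-D point sets."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)

def objective(x):
    # Step function: "badly behaved" for a smooth, stationary surrogate.
    return np.where(x < 0.0, 0.0, 1.0)

x_train = np.linspace(-1.0, 1.0, 9)
y_train = objective(x_train)

x_test = np.array([-0.05, 0.9])       # near the jump vs. far from it
k = rbf(x_test, x_train)
K = rbf(x_train, x_train) + 1e-6 * np.eye(len(x_train))
mean = k @ np.linalg.solve(K, y_train)  # GP posterior mean

err = np.abs(mean - objective(x_test))  # large near the discontinuity
```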
1 code implementation • 30 May 2019 • Ivan Ustyuzhaninov, Ieva Kazlauskaite, Carl Henrik Ek, Neill D. F. Campbell
We propose a new framework for imposing monotonicity constraints in a Bayesian nonparametric setting based on numerical solutions of stochastic differential equations.
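A minimal sketch of the underlying principle, not the paper's SDE-based construction: any function obtained by integrating a strictly positive rate is monotone, so monotonicity can be enforced by construction rather than by constraining function values directly.

```python
import numpy as np

rng = np.random.default_rng(1)

xs = np.linspace(0.0, 1.0, 200)
g = rng.normal(size=xs.size)     # arbitrary, unconstrained latent values
rate = np.exp(g)                 # positivity is what enforces monotonicity

# f(x) = f(0) + integral of the positive rate (left Riemann sum).
f = np.concatenate([[0.0], np.cumsum(rate[:-1] * np.diff(xs))])

assert np.all(np.diff(f) > 0)    # f is strictly increasing by construction
```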
1 code implementation • 13 Dec 2018 • Alessandro Di Martino, Erik Bodin, Carl Henrik Ek, Neill D. F. Campbell
The shape of an object is an important characteristic for many vision problems such as segmentation, detection and tracking.
no code implementations • CVPR 2020 • Garoe Dorta, Sara Vicente, Neill D. F. Campbell, Ivor J. A. Simpson
Deep neural networks have recently been used to edit images with great success, in particular for faces.
no code implementations • 26 Nov 2018 • Ieva Kazlauskaite, Ivan Ustyuzhaninov, Carl Henrik Ek, Neill D. F. Campbell
We present a probabilistic model for unsupervised alignment of high-dimensional time-warped sequences based on the Dirichlet Process Mixture Model (DPMM).
no code implementations • 12 Jul 2018 • Andrew R. Lawrence, Carl Henrik Ek, Neill D. F. Campbell
We present a non-parametric Bayesian latent variable model capable of learning dependency structures across dimensions in a multivariate setting.
2 code implementations • 3 Apr 2018 • Garoe Dorta, Sara Vicente, Lourdes Agapito, Neill D. F. Campbell, Ivor Simpson
This paper demonstrates a novel scheme for incorporating a structured Gaussian likelihood prediction network within a VAE, allowing the residual correlations to be modelled.
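A toy illustration of why a structured likelihood helps (hypothetical numbers, not the paper's network): given a spatially correlated residual, a full-covariance Gaussian with the same per-pixel variances assigns it a higher log-likelihood than the usual diagonal one.

```python
import numpy as np

def gauss_logpdf(r, sigma):
    """Log-density of a zero-mean multivariate Gaussian at residual r."""
    d = len(r)
    _, logdet = np.linalg.slogdet(sigma)
    return -0.5 * (d * np.log(2 * np.pi) + logdet
                   + r @ np.linalg.solve(sigma, r))

r = np.array([0.5, 0.5, 0.5])           # a correlated residual pattern

sigma_diag = 0.25 * np.eye(3)           # independent-pixel likelihood
# Structured covariance with identical marginal variances (0.25) but
# strong positive correlation between pixels.
sigma_full = 0.25 * (0.2 * np.eye(3) + 0.8 * np.ones((3, 3)))

lp_diag = gauss_logpdf(r, sigma_diag)
lp_full = gauss_logpdf(r, sigma_full)   # higher: correlation is expected
```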
1 code implementation • 7 Mar 2018 • Ieva Kazlauskaite, Carl Henrik Ek, Neill D. F. Campbell
We present a model that can automatically learn alignments between high-dimensional data in an unsupervised manner.
2 code implementations • CVPR 2018 • Garoe Dorta, Sara Vicente, Lourdes Agapito, Neill D. F. Campbell, Ivor Simpson
This paper is the first work to propose a network to predict a structured uncertainty distribution for a synthesized image.
no code implementations • 18 Dec 2017 • Erik Bodin, Iman Malik, Carl Henrik Ek, Neill D. F. Campbell
We would like to learn latent representations that are low-dimensional and highly interpretable.
no code implementations • 18 Jul 2017 • Erik Bodin, Neill D. F. Campbell, Carl Henrik Ek
We introduce Latent Gaussian Process Regression, a latent variable extension that allows modelling of non-stationary, multi-modal processes using GPs.
no code implementations • ICCV 2015 • Rui Yu, Chris Russell, Neill D. F. Campbell, Lourdes Agapito
In contrast, our method makes use of a single RGB video as input; it can capture the deformations of generic shapes; and the depth estimation is dense, per-pixel and direct.
no code implementations • CVPR 2015 • Daniyar Turmukhambetov, Neill D. F. Campbell, Simon J. D. Prince, Jan Kautz
In this work we remove the image-space alignment limitations of existing subspace models by conditioning the models on a shape-dependent context, allowing the complex, non-linear structure of the visual object's appearance to be captured and shared.
no code implementations • CVPR 2014 • Oisin Mac Aodha, Neill D. F. Campbell, Jan Kautz, Gabriel J. Brostow
Under some specific circumstances, Expected Error Reduction has been one of the strongest-performing informativeness criteria for active learning.
no code implementations • CVPR 2013 • Neill D. F. Campbell, Kartic Subr, Jan Kautz
Conditional Random Fields (CRFs) are used for diverse tasks, ranging from image denoising to object recognition.