no code implementations • 3 Apr 2024 • Gabriel Loaiza-Ganem, Brendan Leigh Ross, Rasa Hosseinzadeh, Anthony L. Caterini, Jesse C. Cresswell
This manifold lens provides both clarity as to why some DGMs (e.g. diffusion models and some generative adversarial networks) empirically surpass others (e.g. likelihood-based models such as variational autoencoders, normalizing flows, or energy-based models) at sample generation, and guidance for devising more performant DGMs.
1 code implementation • 27 Mar 2024 • Hamidreza Kamkari, Brendan Leigh Ross, Jesse C. Cresswell, Anthony L. Caterini, Rahul G. Krishnan, Gabriel Loaiza-Ganem
We also show that this scenario can be identified through local intrinsic dimension (LID) estimation, and propose a method for OOD detection which pairs the likelihoods and LID estimates obtained from a pre-trained DGM.
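One plausible dual-threshold reading of this pairing, as a minimal sketch (the thresholds and function names below are ours, not the paper's released code):

```python
import numpy as np

def ood_mask(log_liks: np.ndarray, lids: np.ndarray,
             ll_threshold: float, lid_threshold: float) -> np.ndarray:
    """Flag points as OOD from precomputed (log-likelihood, LID) pairs.

    A point is flagged when its likelihood is low, or when a high
    likelihood coincides with an unusually low local intrinsic
    dimension -- the paradoxical case the abstract describes.
    """
    low_ll = log_liks < ll_threshold
    paradoxical = (~low_ll) & (lids < lid_threshold)
    return low_ll | paradoxical
```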
2 code implementations • NeurIPS 2023 • George Stein, Jesse C. Cresswell, Rasa Hosseinzadeh, Yi Sui, Brendan Leigh Ross, Valentin Villecroze, Zhaoyan Liu, Anthony L. Caterini, J. Eric T. Taylor, Gabriel Loaiza-Ganem
Across 17 modern metrics for evaluating the overall performance, fidelity, diversity, rarity, and memorization of generative models, we find that the state-of-the-art perceptual realism of diffusion models, as judged by humans, is not reflected in commonly reported metrics such as FID.
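For reference, FID fits Gaussians $\mathcal{N}(\mu, \Sigma)$ to feature embeddings of real and generated samples and compares the two fits; a minimal sketch of the standard computation, assuming the moments are precomputed:

```python
import numpy as np
from scipy.linalg import sqrtm

def fid(mu1, sigma1, mu2, sigma2):
    """Frechet distance between N(mu1, sigma1) and N(mu2, sigma2)."""
    diff = mu1 - mu2
    covmean = sqrtm(sigma1 @ sigma2)
    if np.iscomplexobj(covmean):      # sqrtm can pick up tiny imaginary
        covmean = covmean.real        # parts from numerical noise
    return diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean)
```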
1 code implementation • 30 Nov 2022 • Gabriel Loaiza-Ganem, Brendan Leigh Ross, Luhuan Wu, John P. Cunningham, Jesse C. Cresswell, Anthony L. Caterini
Likelihood-based deep generative models have recently been shown to exhibit pathological behaviour under the manifold hypothesis as a consequence of using high-dimensional densities to model data with low-dimensional structure.
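A minimal instance of the pathology, assuming data supported on the line $x_2 = 0$ in $\mathbb{R}^2$:
\[
p_\sigma(x_1, x_2) = q(x_1)\,\mathcal{N}(x_2; 0, \sigma^2)
\quad\Rightarrow\quad
\log p_\sigma(x_1, 0) = \log q(x_1) + \log \frac{1}{\sqrt{2\pi}\,\sigma} \xrightarrow[\sigma \to 0]{} \infty
\]
for any density $q$ on the line, not just the true one: maximum likelihood rewards concentrating mass near the manifold rather than matching the distribution on it.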
no code implementations • 23 Nov 2022 • Bradley C. A. Brown, Jordan Juravsky, Anthony L. Caterini, Gabriel Loaiza-Ganem
Given a pair of models with similar training set performance, it is natural to assume that the model that possesses simpler internal representations would exhibit better generalization.
no code implementations • 23 Nov 2022 • Jesse C. Cresswell, Brendan Leigh Ross, Gabriel Loaiza-Ganem, Humberto Reyes-Gonzalez, Marco Letizia, Anthony L. Caterini
Precision measurements and new physics searches at the Large Hadron Collider require efficient simulations of particle propagation and interactions within the detectors.
1 code implementation • 6 Jul 2022 • Bradley C. A. Brown, Anthony L. Caterini, Brendan Leigh Ross, Jesse C. Cresswell, Gabriel Loaiza-Ganem
Assuming that data lies on a single manifold implies intrinsic dimension is identical across the entire data space, and does not allow for subregions of this space to have a different number of factors of variation.
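One standard way to probe whether intrinsic dimension varies across subregions is a pointwise estimator; below is a sketch of the classical Levina-Bickel MLE of local intrinsic dimension (one common choice among several, not necessarily the paper's):

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def lid_mle(data: np.ndarray, k: int = 20) -> np.ndarray:
    """Levina-Bickel MLE of local intrinsic dimension at each point."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(data)
    dists, _ = nn.kneighbors(data)
    dists = dists[:, 1:]                      # drop self-distance T_0 = 0
    # m_hat(x) = [ (1/(k-1)) * sum_{j<k} log(T_k(x) / T_j(x)) ]^{-1}
    log_ratios = np.log(dists[:, -1:] / dists[:, :-1])
    return (k - 1) / log_ratios.sum(axis=1)
```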
1 code implementation • 22 Jun 2022 • Brendan Leigh Ross, Gabriel Loaiza-Ganem, Anthony L. Caterini, Jesse C. Cresswell
We then learn the probability density within $\mathcal{M}$ with a constrained energy-based model, which employs a constrained variant of Langevin dynamics to train and sample from the learned manifold.
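A schematic of one such update, assuming some projection operator back onto the learned manifold is available (a sketch of the projected-Langevin idea only; the paper's constrained variant is more careful than this):

```python
import torch

def constrained_langevin_step(x, energy, project, step_size=1e-3):
    """One Langevin update followed by projection onto the manifold.

    `energy` maps a batch of points to scalar energies; `project` is a
    placeholder for whatever projection onto M the model provides.
    """
    x = x.detach().requires_grad_(True)
    grad = torch.autograd.grad(energy(x).sum(), x)[0]
    noise = torch.randn_like(x)
    x_new = x - 0.5 * step_size * grad + (step_size ** 0.5) * noise
    return project(x_new)  # keep the chain on the manifold M
```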
2 code implementations • 14 Apr 2022 • Gabriel Loaiza-Ganem, Brendan Leigh Ross, Jesse C. Cresswell, Anthony L. Caterini
We propose a class of two-step procedures consisting of a dimensionality reduction step followed by maximum-likelihood density estimation, and prove that they recover the data-generating distribution in the nonparametric regime, thus avoiding manifold overfitting.
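A toy instance of the two-step recipe, with PCA and kernel density estimation standing in for the learned dimensionality reducer and maximum-likelihood density estimator used in practice:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KernelDensity

def two_step_fit(data: np.ndarray, latent_dim: int):
    reducer = PCA(n_components=latent_dim).fit(data)   # step 1: reduce
    latents = reducer.transform(data)
    density = KernelDensity().fit(latents)             # step 2: density
    return reducer, density

def two_step_sample(reducer, density, n: int) -> np.ndarray:
    # Sample in the low-dimensional space, then map back to data space.
    return reducer.inverse_transform(density.sample(n))
```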
no code implementations • NeurIPS Workshop ICBINB 2021 • Anthony L. Caterini, Gabriel Loaiza-Ganem
This analysis provides further explanation for the success of OOD detection methods based on likelihood ratios, as the problematic entropy term cancels out in expectation.
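Concretely, for data drawn from $q$ and a model $p_\theta$,
\[
\mathbb{E}_{x \sim q}\left[\log p_\theta(x)\right] = -\mathrm{KL}(q \,\|\, p_\theta) - H(q),
\]
so raw likelihoods are confounded by the data entropy $H(q)$; a likelihood ratio against a second model $p_\phi$ has expectation $\mathrm{KL}(q \,\|\, p_\phi) - \mathrm{KL}(q \,\|\, p_\theta)$, in which the entropy term cancels.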
1 code implementation • NeurIPS 2021 • Anthony L. Caterini, Gabriel Loaiza-Ganem, Geoff Pleiss, John P. Cunningham
Normalizing flows are invertible neural networks with tractable change-of-volume terms, which allow optimization of their parameters to be efficiently performed via maximum likelihood.
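The objective in question is the change-of-variables formula: for an invertible $f$ mapping data to the base space,
\[
\log p_X(x) = \log p_Z(f(x)) + \log \left| \det \frac{\partial f(x)}{\partial x} \right|,
\]
where the log-determinant is the tractable change-of-volume term, so maximum likelihood amounts to evaluating this expression and backpropagating through it.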
no code implementations • ICLR Workshop Neural_Compression 2021 • Adam Golinski, Anthony L. Caterini
Recently, a class of deep generative models known as continuously-indexed flows (CIFs) has expanded the modelling capacity of normalizing flows (NFs) in the context of both density estimation and variational inference.
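Schematically, a CIF replaces the single bijection of an NF with a family of bijections indexed by a continuous variable (notation ours, simplified from the CIF papers):
\[
z \sim p_Z, \qquad u \sim p_{U \mid Z}(\cdot \mid z), \qquad x = F(z; u),
\]
where each $F(\cdot\,; u)$ is invertible; marginalizing over $u$ is intractable, so training uses a variational lower bound rather than exact maximum likelihood.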
1 code implementation • ICLR 2021 • Panteha Naderian, Gabriel Loaiza-Ganem, Harry J. Braviner, Anthony L. Caterini, Jesse C. Cresswell, Tong Li, Animesh Garg
In order to address these limitations, we introduce the concept of cumulative accessibility functions, which measure the reachability of a goal from a given state within a specified horizon.
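Schematically, the optimal cumulative accessibility obeys a horizon-indexed, Bellman-style recursion (notation ours):
\[
A^*(s, g, t) =
\begin{cases}
1 & \text{if } s \text{ already satisfies the goal } g,\\
\max_a \, \mathbb{E}_{s' \sim P(\cdot \mid s, a)}\!\left[A^*(s', g, t-1)\right] & \text{otherwise,}
\end{cases}
\]
which is non-decreasing in the horizon $t$, since a longer budget can only make a goal easier to reach.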
3 code implementations • ICML 2020 • Rob Cornish, Anthony L. Caterini, George Deligiannidis, Arnaud Doucet
We show that normalising flows become pathological when used to model targets whose supports have complicated topologies.
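The underlying reason is topological: a flow $f$ is a homeomorphism, so
\[
\operatorname{supp}(f_{\#} p_Z) = f\!\left(\operatorname{supp}(p_Z)\right)
\]
shares the topology of the base support. A full-support Gaussian base therefore cannot be pushed exactly onto, say, a disconnected target, and approximating one forces the bi-Lipschitz constant of $f$ to blow up.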
3 code implementations • NeurIPS 2018 • Anthony L. Caterini, Arnaud Doucet, Dino Sejdinovic
However, for this methodology to be practically efficient, it is necessary to obtain low-variance unbiased estimators of the ELBO and its gradients with respect to the parameters of interest.
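For reference, the objective in question is
\[
\mathrm{ELBO}(\theta, \phi; x) = \mathbb{E}_{q_\phi(z \mid x)}\!\left[\log p_\theta(x, z) - \log q_\phi(z \mid x)\right] \le \log p_\theta(x),
\]
and Monte Carlo estimates of this expectation (with reparameterized gradients) are already unbiased, so the design question is one of variance.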
no code implementations • 15 Aug 2016 • Anthony L. Caterini, Dong Eui Chang
In this paper, a geometric framework for neural networks is proposed.