no code implementations • 10 Jun 2020 • Chris Finlay, Augusto Gerolin, Adam M. Oberman, Aram-Alexandre Pooladian
We approach the problem of learning continuous normalizing flows from a dual perspective motivated by entropy-regularized optimal transport, in which continuous normalizing flows are cast as gradients of scalar potential functions.
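The "flows as gradients of scalar potentials" idea can be illustrated with a toy sketch: particles are transported by integrating an ODE whose velocity field is the gradient of a scalar potential. The quadratic potential and all names below are assumptions for illustration, not the paper's construction.

```python
import numpy as np

def potential_grad(x, a=0.5):
    # Gradient of the toy scalar potential phi(x) = (a/2) * ||x||^2,
    # so the flow velocity is v(x) = grad phi(x) = a * x.
    return a * x

def integrate_flow(x0, t1=1.0, n_steps=100):
    # Forward-Euler integration of dx/dt = grad phi(x) from t=0 to t=t1.
    x = x0.copy()
    dt = t1 / n_steps
    for _ in range(n_steps):
        x = x + dt * potential_grad(x)
    return x

x0 = np.array([1.0, -2.0])
x1 = integrate_flow(x0)
# For this linear field the exact flow map is x0 * exp(0.5 * t1)
```

In a learned normalizing flow the potential would be parameterized by a network; here it is fixed so the Euler trajectory can be checked against the exact solution.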
1 code implementation • 10 Jun 2020 • Ryan Campbell, Chris Finlay, Adam M. Oberman
We present a deterministic method to compute the Gaussian average of neural networks used in regression and classification.
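As a flavor of what a deterministic Gaussian average looks like, the average of a single ReLU unit under Gaussian input noise has a classical closed form; the sketch below (an illustration only, not the paper's network-level method) checks it against Monte Carlo sampling.

```python
import numpy as np
from math import erf, exp, pi, sqrt

def gaussian_avg_relu(mu, sigma):
    # Closed-form E[max(0, mu + sigma * Z)] for Z ~ N(0, 1):
    #   mu * Phi(mu/sigma) + sigma * phi(mu/sigma),
    # where Phi and phi are the standard normal CDF and density.
    t = mu / sigma
    Phi = 0.5 * (1.0 + erf(t / sqrt(2.0)))
    phi = exp(-0.5 * t * t) / sqrt(2.0 * pi)
    return mu * Phi + sigma * phi

rng = np.random.default_rng(0)
mu, sigma = 0.3, 1.2
mc = np.maximum(0.0, mu + sigma * rng.standard_normal(200_000)).mean()
det = gaussian_avg_relu(mu, sigma)
# det agrees with the Monte Carlo estimate without any sampling
```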
2 code implementations • ICML 2020 • Chris Finlay, Jörn-Henrik Jacobsen, Levon Nurbekyan, Adam M. Oberman
Training neural ODEs on large datasets has not been tractable due to the necessity of allowing the adaptive numerical ODE solver to refine its step size to very small values.
Ranked #1 on Density Estimation on CelebA-HQ 256x256
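One way to make learned dynamics cheap for an adaptive ODE solver is to penalize the kinetic energy of trajectories, favoring nearly-straight paths. A minimal sketch of such a penalty, using forward Euler and a toy velocity field (both assumptions for illustration, not the paper's training setup):

```python
import numpy as np

def trajectory_kinetic_energy(velocity_fn, x0, t1=1.0, n_steps=50):
    # Accumulate (1/2) * integral over [0, t1] of ||f(x, t)||^2 dt
    # along a forward-Euler trajectory; adding this penalty to the
    # training loss discourages dynamics that force small step sizes.
    x = np.asarray(x0, dtype=float)
    dt = t1 / n_steps
    energy = 0.0
    for k in range(n_steps):
        v = velocity_fn(x, k * dt)
        energy += 0.5 * float(v @ v) * dt
        x = x + dt * v
    return x, energy

# A constant velocity field (straight-line transport) is the
# minimum-energy way to reach its endpoint.
x_final, ke = trajectory_kinetic_energy(lambda x, t: np.array([1.0, 0.0]),
                                        np.zeros(2))
```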
1 code implementation • 5 Dec 2019 • Levon Nurbekyan, Alexander Iannantuono, Adam M. Oberman
Transportation maps between probability measures are critical objects in numerous areas of mathematics and applications such as PDEs, fluid mechanics, geometry, machine learning, computer science, and economics.
Optimization and Control (MSC 49M27)
1 code implementation • 4 Oct 2019 • Aram-Alexandre Pooladian, Chris Finlay, Adam M. Oberman
Successfully training deep neural networks often requires either batch normalization or appropriate weight initialization, both of which come with their own challenges.
no code implementations • 3 Oct 2019 • Adam M. Oberman
This article is an overview of supervised machine learning problems for regression and classification.
no code implementations • 25 Sep 2019 • Chris Finlay, Adam M. Oberman
It is well-known that the softmax values of the network are not estimates of the probabilities of class labels.
1 code implementation • 27 May 2019 • Chris Finlay, Adam M. Oberman
In this work we revisit gradient regularization for adversarial robustness with some new ingredients.
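A common form of gradient regularization augments the training loss with the squared norm of the loss gradient with respect to the *input*. A minimal logistic-regression sketch (the model, names, and penalty weight are illustrative assumptions, not the paper's exact scheme):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def logistic_loss(w, x, y):
    # y in {-1, +1}; standard logistic loss log(1 + exp(-y * w.x)).
    return np.log1p(np.exp(-y * (w @ x)))

def input_grad_penalty(w, x, y):
    # Squared norm of the loss gradient with respect to the input x.
    # For the logistic loss: grad_x loss = -y * sigmoid(-y * w.x) * w.
    g = -y * sigmoid(-y * (w @ x)) * w
    return g @ g

def regularized_loss(w, x, y, lam=0.1):
    # Training objective augmented with the input-gradient penalty.
    return logistic_loss(w, x, y) + lam * input_grad_penalty(w, x, y)

w = np.array([1.0, -2.0])
x = np.array([0.5, 0.5])
y = 1.0
base = logistic_loss(w, x, y)
reg = regularized_loss(w, x, y)
```

For deep networks the input gradient comes from automatic differentiation rather than a closed form; the analytic toy case makes the penalty easy to verify.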
no code implementations • ICLR 2019 • Adam M. Oberman, Jeff Calder
We show that if the usual training loss is augmented by a Lipschitz regularization term, then the networks generalize.
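One way a Lipschitz regularization term can be sketched is as a hinge penalty on pairwise difference quotients that exceed a target constant; this empirical pairwise form is an assumption for illustration, not necessarily the paper's exact regularizer.

```python
import numpy as np

def lipschitz_penalty(f, X, c=1.0):
    # Penalize pairwise difference quotients ||f(xi) - f(xj)|| / ||xi - xj||
    # that exceed the target Lipschitz constant c (squared hinge).
    penalty = 0.0
    n = len(X)
    for i in range(n):
        for j in range(i + 1, n):
            num = np.linalg.norm(f(X[i]) - f(X[j]))
            den = np.linalg.norm(X[i] - X[j])
            penalty += max(0.0, num / den - c) ** 2
    return penalty

X = [np.array([0.0]), np.array([1.0]), np.array([2.0])]
f_steep = lambda x: 3.0 * x   # quotient 3 > c on every pair: penalized
f_flat = lambda x: 0.5 * x    # quotient 0.5 <= c: zero penalty
```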
1 code implementation • ICCV 2019 • Chris Finlay, Aram-Alexandre Pooladian, Adam M. Oberman
Adversarial attacks formally correspond to an optimization problem: find a minimum norm image perturbation, constrained to cause misclassification.
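For a linear classifier the minimum-norm perturbation in this optimization problem has a closed form (projection onto the decision hyperplane); deep networks require iterative solvers, but the linear case illustrates what is being minimized.

```python
import numpy as np

def min_norm_perturbation(w, b, x):
    # For a linear classifier sign(w.x + b), the minimum Euclidean-norm
    # perturbation moving x onto the decision boundary is the projection
    # onto the hyperplane: delta = -(w.x + b) / ||w||^2 * w.
    return -(w @ x + b) / (w @ w) * w

w = np.array([3.0, 4.0])
b = -1.0
x = np.array([2.0, 1.0])
delta = min_norm_perturbation(w, b, x)
# x + delta lies exactly on the decision boundary w.x + b = 0,
# and ||delta|| = |w.x + b| / ||w||
```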
1 code implementation • 21 Mar 2019 • Adam M. Oberman, Chris Finlay, Alexander Iannantuono, Tiago Salvador
While the accuracy of modern deep learning models has significantly improved in recent years, the ability of these models to generate uncertainty estimates has not progressed to the same degree.
1 code implementation • 20 Mar 2019 • Adam M. Oberman, Mariana Prazeres
We prove convergence at the rate O(1/k) with a rate constant which can be better than the constant for optimally scheduled SGD.
Optimization and Control
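As background for the O(1/k) rate, a minimal sketch of SGD with the classic 1/k step schedule on a noisy 1-D quadratic (the toy problem and schedule are illustrative assumptions, not the paper's scheme):

```python
import numpy as np

def sgd_quadratic(n_iters, step0=1.0, noise=1.0, seed=0):
    # SGD on f(x) = x^2 / 2 with noisy gradients g_k = x_k + noise * Z_k
    # and the step schedule eta_k = step0 / k, under which the expected
    # squared error E[x_k^2] decays at rate O(1/k).
    rng = np.random.default_rng(seed)
    x = 5.0
    for k in range(1, n_iters + 1):
        g = x + noise * rng.standard_normal()
        x -= (step0 / k) * g
    return x

err_1k = abs(sgd_quadratic(1_000))
err_10k = abs(sgd_quadratic(10_000))
# err_10k is typically well below err_1k, consistent with the 1/k rate
```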
no code implementations • 15 Aug 2016 • Bilal Abbasi, Jeff Calder, Adam M. Oberman
In this paper we propose a fast, real-time streaming version of the PDA algorithm for anomaly detection that exploits the computational advantages of PDE continuum limits.