no code implementations • ICML 2020 • Paul Rolland, Armin Eftekhari, Ali Kavis, Volkan Cevher
A well-known first-order method for sampling from log-concave probability distributions is the Unadjusted Langevin Algorithm (ULA).
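The ULA update is the standard Euler–Maruyama discretization of Langevin dynamics for a target density proportional to e^{-U(x)}: take a gradient step on U and add Gaussian noise scaled by the step size. A minimal sketch (the function name `ula` and the Gaussian test target are illustrative, not from the paper):

```python
import numpy as np

def ula(grad_U, x0, step=0.01, n_steps=10_000, rng=None):
    """Unadjusted Langevin Algorithm:
    x_{k+1} = x_k - step * grad_U(x_k) + sqrt(2 * step) * N(0, I)."""
    rng = np.random.default_rng(rng)
    x = np.asarray(x0, dtype=float)
    samples = np.empty((n_steps,) + x.shape)
    for k in range(n_steps):
        x = x - step * grad_U(x) + np.sqrt(2 * step) * rng.standard_normal(x.shape)
        samples[k] = x
    return samples

# Log-concave test target: standard Gaussian, U(x) = ||x||^2 / 2, grad_U(x) = x.
samples = ula(lambda x: x, x0=np.zeros(1), step=0.05, n_steps=50_000, rng=0)
print(samples[10_000:].mean(), samples[10_000:].std())
```

Because the discretization is unadjusted (no Metropolis accept/reject step), the chain's stationary distribution is slightly biased away from the target, with bias controlled by the step size.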
no code implementations • ICML 2020 • Fabian Latorre, Paul Rolland, Nadav Hallak, Volkan Cevher
We demonstrate two new important properties of the path-norm regularizer for shallow neural networks.
1 code implementation • 22 Sep 2022 • Paul Rolland, Luca Viano, Norman Schuerhoff, Boris Nikolov, Volkan Cevher
While Reinforcement Learning (RL) aims to train an agent from a reward function in a given environment, Inverse Reinforcement Learning (IRL) seeks to recover the reward function by observing an expert's behavior.
no code implementations • 8 Mar 2022 • Paul Rolland, Volkan Cevher, Matthäus Kleindessner, Chris Russell, Bernhard Schölkopf, Dominik Janzing, Francesco Locatello
This paper demonstrates how to recover causal graphs from the score of the data distribution in non-linear additive (Gaussian) noise models.
no code implementations • NeurIPS 2021 • Fabian Latorre, Leello Tadesse Dadi, Paul Rolland, Volkan Cevher
We demonstrate this by deriving an upper bound on the Rademacher Complexity that depends on two key quantities: (i) the intrinsic dimension, which is a measure of isotropy, and (ii) the largest eigenvalue of the second moment (covariance) matrix of the distribution.
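The intrinsic dimension in bounds of this kind is commonly defined as tr(Σ)/λ_max(Σ), the trace of the covariance matrix divided by its largest eigenvalue: it equals the ambient dimension d for an isotropic distribution and approaches 1 for a nearly rank-one one. A small sketch under that assumed definition (the function name is illustrative):

```python
import numpy as np

def intrinsic_dimension(cov):
    """Intrinsic dimension tr(Sigma) / lambda_max(Sigma):
    equals d for an isotropic covariance, close to 1 for a near-rank-one one."""
    eigvals = np.linalg.eigvalsh(cov)
    return eigvals.sum() / eigvals.max()

d = 10
print(intrinsic_dimension(np.eye(d)))      # isotropic: equals d = 10
aniso = np.diag([1.0] + [1e-3] * (d - 1))  # one dominant direction
print(intrinsic_dimension(aniso))          # close to 1
```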
no code implementations • 29 Sep 2021 • Paul Rolland, Ali Ramezani-Kebrya, ChaeHwan Song, Fabian Latorre, Volkan Cevher
Despite the non-convex landscape, first-order methods can be shown to reach global minima when training overparameterized neural networks, where the number of parameters far exceeds the number of training samples.
no code implementations • 2 Jul 2020 • Fabian Latorre, Paul Rolland, Nadav Hallak, Volkan Cevher
We demonstrate two new important properties of the 1-path-norm of shallow neural networks.
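For a shallow network x ↦ vᵀσ(Wx), the 1-path-norm sums |v_j|·|W_ji| over all input→hidden→output paths, which factorizes as Σ_j |v_j|·‖w_j‖₁. A minimal sketch of that quantity (the function name is illustrative; the paper's contribution concerns its regularization properties and proximal mapping, not this computation):

```python
import numpy as np

def one_path_norm(W, v):
    """1-path-norm of a shallow network x -> v^T sigma(W x):
    sum over all paths of |v_j| * |W_ji| = sum_j |v_j| * ||w_j||_1."""
    return float(np.abs(v) @ np.abs(W).sum(axis=1))

W = np.array([[1.0, -2.0], [0.5, 0.0]])  # hidden weights, one row per neuron
v = np.array([3.0, -1.0])                # output weights
print(one_path_norm(W, v))  # 3*(1+2) + 1*(0.5+0) = 9.5
```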
no code implementations • ICLR 2020 • Fabian Latorre, Paul Rolland, Volkan Cevher
We introduce LiPopt, a polynomial optimization framework for computing increasingly tight upper bounds on the Lipschitz constant of neural networks.
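The classical baseline that such frameworks improve on is the product of layer spectral norms, which upper-bounds the Lipschitz constant of any network with 1-Lipschitz activations (e.g. ReLU) but is typically loose. A sketch of that baseline only (this is not LiPopt's polynomial-optimization hierarchy):

```python
import numpy as np

def lipschitz_product_bound(weights):
    """Loose classical Lipschitz upper bound: product of layer spectral norms,
    valid for networks with 1-Lipschitz activations such as ReLU."""
    return float(np.prod([np.linalg.norm(W, ord=2) for W in weights]))

rng = np.random.default_rng(0)
W1 = rng.standard_normal((5, 3))  # hypothetical first-layer weights
W2 = rng.standard_normal((1, 5))  # hypothetical output-layer weights
bound = lipschitz_product_bound([W1, W2])
print(bound)
```

For a purely linear network the true Lipschitz constant is ‖W2·W1‖₂, which the product bound dominates by submultiplicativity; tighter methods exploit the interaction between layers that the product ignores.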
1 code implementation • 14 Feb 2020 • Parameswaran Kamalaruban, Yu-Ting Huang, Ya-Ping Hsieh, Paul Rolland, Cheng Shi, Volkan Cevher
We introduce a sampling perspective to tackle the challenging task of training robust Reinforcement Learning (RL) agents.
no code implementations • 11 Dec 2018 • Paul Rolland, Ali Kavis, Alex Immer, Adish Singla, Volkan Cevher
We study the fundamental problem of learning an unknown, smooth probability function via pointwise Bernoulli tests.
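In this setting, each query at a point x returns a Bernoulli outcome with success probability f(x). The naive estimator simply averages repeated tests at each query point, ignoring smoothness; the paper's interest is in doing better by sharing information across nearby points. A sketch of the naive baseline (function name and test function are illustrative):

```python
import numpy as np

def estimate_pointwise(f, xs, n_tests=2000, rng=None):
    """Estimate an unknown probability function f at query points xs by
    averaging n_tests Bernoulli(f(x)) outcomes per point (no smoothness sharing)."""
    rng = np.random.default_rng(rng)
    return np.array([rng.binomial(n_tests, f(x)) / n_tests for x in xs])

f = lambda x: 0.5 * (1.0 + np.sin(x))  # hypothetical smooth probability function in [0, 1]
xs = np.linspace(0.0, np.pi, 5)
est = estimate_pointwise(f, xs, rng=0)
print(np.max(np.abs(est - f(xs))))
```

The per-point standard error scales as sqrt(f(x)(1-f(x))/n_tests), so the sample cost grows linearly in the number of query points unless smoothness is exploited.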
no code implementations • NeurIPS 2018 • Ya-Ping Hsieh, Ali Kavis, Paul Rolland, Volkan Cevher
We consider the problem of sampling from constrained distributions, which has posed significant challenges to both non-asymptotic analysis and algorithmic design.
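The simplest baseline for constrained sampling is projected Langevin: take an unadjusted Langevin step, then project back onto the constraint set. A sketch of that baseline (this is an illustration of the problem setting, not this paper's mirrored dynamics; names are illustrative):

```python
import numpy as np

def projected_ula(grad_U, project, x0, step=0.01, n_steps=20_000, rng=None):
    """Projected Langevin: an ULA step followed by Euclidean projection
    onto the constraint set."""
    rng = np.random.default_rng(rng)
    x = np.asarray(x0, dtype=float)
    traj = np.empty((n_steps,) + x.shape)
    for k in range(n_steps):
        x = x - step * grad_U(x) + np.sqrt(2 * step) * rng.standard_normal(x.shape)
        x = project(x)  # enforce the constraint
        traj[k] = x
    return traj

# Uniform distribution on [0, 1]: constant potential, projection is clipping.
traj = projected_ula(lambda x: 0.0 * x, lambda x: np.clip(x, 0.0, 1.0),
                     x0=np.array([0.5]), step=0.01, rng=0)
print(traj.mean())
```

Projection steps complicate non-asymptotic analysis near the boundary, which is one motivation for mirror-map-based approaches that transform the constrained problem into an unconstrained one.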
1 code implementation • 20 Feb 2018 • Paul Rolland, Jonathan Scarlett, Ilija Bogunovic, Volkan Cevher
In this paper, we consider the approach of Kandasamy et al. (2015), in which the high-dimensional function decomposes as a sum of lower-dimensional functions on subsets of the underlying variables.
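The payoff of such an additive decomposition is that when the variable groups are disjoint, each low-dimensional term can be optimized independently, turning one d-dimensional search into several cheap low-dimensional ones. A toy grid-search sketch of that principle (function names and the 4-d objective are illustrative; the paper's setting is Bayesian optimization with learned decompositions):

```python
import numpy as np
from itertools import product

def maximize_additive(group_fns, groups, grids):
    """If f(x) = sum_i f_i(x[A_i]) with disjoint groups A_i, maximize each
    low-dimensional term f_i over its own grid independently."""
    x, total = {}, 0.0
    for f_i, A_i, grid in zip(group_fns, groups, grids):
        best = max(product(*[grid] * len(A_i)), key=lambda pt: f_i(np.array(pt)))
        for var, val in zip(A_i, best):
            x[var] = val
        total += f_i(np.array(best))
    return x, total

# Hypothetical 4-d objective decomposing over disjoint groups {0,1} and {2,3}.
g = np.linspace(-1.0, 1.0, 21)
f1 = lambda z: -(z[0] - 0.5) ** 2 - (z[1] + 0.5) ** 2
f2 = lambda z: -z[0] ** 2 - z[1] ** 2
x_star, val = maximize_additive([f1, f2], [[0, 1], [2, 3]], [g, g])
print(x_star, val)
```

Exhaustive grid search over all four variables would cost 21^4 evaluations, versus 2 × 21^2 here; the same separability is what makes acquisition-function optimization tractable in the additive Bayesian optimization setting.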