no code implementations • 26 Apr 2024 • Benjamin Dupuis, Paul Viallard, George Deligiannidis, Umut Simsekli
We propose data-dependent uniform generalization bounds by approaching the problem from a PAC-Bayesian perspective.
1 code implementation • 9 Feb 2024 • Angus Phillips, Hai-Dang Dau, Michael John Hutchinson, Valentin De Bortoli, George Deligiannidis, Arnaud Doucet
Denoising diffusion models have become ubiquitous for generative modeling.
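As a toy illustration of the forward noising process that underlies such models (a sketch only, not this paper's method; the linear beta schedule and the Gaussian toy data are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Variance-preserving forward noising: x_t = sqrt(abar_t) x_0 + sqrt(1 - abar_t) eps.
T = 1000
betas = np.linspace(1e-4, 0.02, T)       # standard linear schedule (assumption)
abar = np.cumprod(1.0 - betas)

x0 = rng.normal(3.0, 0.5, size=100_000)  # toy data: N(3, 0.5^2)
eps = rng.normal(size=x0.shape)
xT = np.sqrt(abar[-1]) * x0 + np.sqrt(1.0 - abar[-1]) * eps

# After enough steps the marginal is close to a standard normal,
# whatever the data distribution; generation then amounts to learning
# to reverse this process.
print(xT.mean(), xT.std())
```

The learned part of a diffusion model is the reverse of this map; the forward direction shown here is fixed and analytic.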
no code implementations • 7 Aug 2023 • Joe Benton, Valentin De Bortoli, Arnaud Doucet, George Deligiannidis
We provide the first convergence bounds which are linear in the data dimension (up to logarithmic factors) assuming only finite second moments of the data distribution.
1 code implementation • 12 Jun 2023 • Guneet S. Dhillon, George Deligiannidis, Tom Rainforth
While conformal predictors reap the benefits of rigorous statistical guarantees on their error frequency, the size of their corresponding prediction sets is critical to their practical utility.
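The coverage guarantee the snippet refers to can be seen in a minimal split-conformal sketch (the linear "model" and data-generating process are stand-in assumptions, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(1)

# Split conformal regression: calibrate a residual quantile on held-out data,
# then emit intervals with ~(1 - alpha) marginal coverage.
n_cal, n_test, alpha = 2000, 2000, 0.1

def predict(x):                 # hypothetical pre-trained predictor
    return 2.0 * x

x_cal = rng.uniform(-1, 1, n_cal)
y_cal = 2.0 * x_cal + rng.normal(0, 0.3, n_cal)

scores = np.abs(y_cal - predict(x_cal))          # nonconformity scores
k = int(np.ceil((n_cal + 1) * (1 - alpha)))      # conformal quantile index
q = np.sort(scores)[k - 1]

x_test = rng.uniform(-1, 1, n_test)
y_test = 2.0 * x_test + rng.normal(0, 0.3, n_test)
covered = np.abs(y_test - predict(x_test)) <= q
print(covered.mean(), 2 * q)   # empirical coverage and interval width
```

The guarantee fixes the error frequency; the interval width `2 * q` is the set-size quantity whose practical importance the abstract highlights.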
1 code implementation • NeurIPS 2023 • Christopher Williams, Fabian Falck, George Deligiannidis, Chris Holmes, Arnaud Doucet, Saifuddin Syed
U-Nets are a go-to, state-of-the-art neural architecture for numerous tasks involving continuous signals on a square domain, such as images and partial differential equations (PDEs); however, their design and architecture are understudied.
no code implementations • 26 May 2023 • Joe Benton, George Deligiannidis, Arnaud Doucet
Previous work derived bounds on the approximation error of diffusion models under the stochastic sampling regime, given assumptions on the $L^2$ loss.
1 code implementation • 6 Feb 2023 • Benjamin Dupuis, George Deligiannidis, Umut Şimşekli
To achieve this goal, we build up on a classical covering argument in learning theory and introduce a data-dependent fractal dimension.
no code implementations • 19 Jan 2023 • Fabian Falck, Christopher Williams, Dominic Danks, George Deligiannidis, Christopher Yau, Chris Holmes, Arnaud Doucet, Matthew Willetts
U-Net architectures are ubiquitous in state-of-the-art deep learning; however, their regularisation properties and relationship to wavelets are understudied.
1 code implementation • 7 Nov 2022 • Joe Benton, Yuyang Shi, Valentin De Bortoli, George Deligiannidis, Arnaud Doucet
We propose a unifying framework generalising this approach to a wide class of spaces and leading to an original extension of score matching.
no code implementations • 6 Sep 2022 • Eugenio Clerico, Tyler Farghly, George Deligiannidis, Benjamin Guedj, Arnaud Doucet
We establish disintegrated PAC-Bayesian generalisation bounds for models trained with gradient descent methods or continuous gradient flows.
1 code implementation • 30 Jun 2022 • Amitis Shidani, George Deligiannidis, Arnaud Doucet
We study the ranking problem in generalized linear bandits.
1 code implementation • 30 May 2022 • Andrew Campbell, Joe Benton, Valentin De Bortoli, Tom Rainforth, George Deligiannidis, Arnaud Doucet
We provide the first complete continuous time framework for denoising diffusion models of discrete data.
no code implementations • 2 Mar 2022 • Eugenio Clerico, Amitis Shidani, George Deligiannidis, Arnaud Doucet
This work discusses how to derive upper bounds for the expected generalisation error of supervised learning algorithms by means of the chaining technique.
1 code implementation • 1 Mar 2022 • Oscar Clivio, Fabian Falck, Brieuc Lehmann, George Deligiannidis, Chris Holmes
We leverage these balancing scores to perform matching for high-dimensional causal inference and call this procedure neural score matching.
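A minimal sketch of the classical balancing-score idea that score-based matching builds on: match treated and control units on a propensity score, here taken as known for simplicity (the data-generating process and the known score are assumptions; the paper learns such scores with neural networks in high dimensions):

```python
import numpy as np

rng = np.random.default_rng(2)

# Nearest-neighbour matching on a (known) propensity score.
n = 2000
x = rng.normal(size=n)
p = 1 / (1 + np.exp(-x))                   # true propensity (assumed known)
t = rng.binomial(1, p)
y = 1.5 * t + x + rng.normal(0, 0.5, n)    # true treatment effect = 1.5

treated = np.where(t == 1)[0]
control = np.where(t == 0)[0]
# For each treated unit, match the control unit with the closest score,
# then average the matched outcome differences (ATT estimate).
idx = np.abs(p[treated][:, None] - p[control][None, :]).argmin(axis=1)
att = (y[treated] - y[control[idx]]).mean()
print(att)   # close to the true effect of 1.5
```

Matching on the one-dimensional score rather than on raw covariates is what makes the approach viable in high dimensions.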
1 code implementation • 27 Feb 2022 • Yuyang Shi, Valentin De Bortoli, George Deligiannidis, Arnaud Doucet
We extend the Schrödinger bridge framework to conditional simulation.
no code implementations • 1 Dec 2021 • EL Mahdi Khribch, George Deligiannidis, Daniel Paulin
In this paper, we consider sampling from a class of distributions with thin tails supported on $\mathbb{R}^d$ and make two primary contributions.
1 code implementation • 22 Oct 2021 • Eugenio Clerico, George Deligiannidis, Arnaud Doucet
Recent studies have empirically investigated different methods to train stochastic neural networks on a classification task by optimising a PAC-Bayesian bound via stochastic gradient descent.
no code implementations • 18 Aug 2021 • George Deligiannidis, Valentin De Bortoli, Arnaud Doucet
We establish the uniform-in-time stability, w.r.t.
1 code implementation • 17 Jun 2021 • Eugenio Clerico, George Deligiannidis, Arnaud Doucet
The limit of infinite width allows for substantial simplifications in the analytical study of over-parameterised neural networks.
no code implementations • NeurIPS 2021 • Alexander Camuto, George Deligiannidis, Murat A. Erdogdu, Mert Gürbüzbalaban, Umut Şimşekli, Lingjiong Zhu
As our main contribution, we prove that the generalization error of a stochastic optimization algorithm can be bounded based on the 'complexity' of the fractal structure that underlies its invariant measure.
1 code implementation • 15 Feb 2021 • Adrien Corenflos, James Thornton, George Deligiannidis, Arnaud Doucet
Particle Filtering (PF) methods are an established class of procedures for performing inference in non-linear state-space models.
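For readers unfamiliar with the method class, a bootstrap particle filter on a toy linear-Gaussian model looks like the following (the model and parameters are illustrative assumptions, not the paper's differentiable-resampling construction):

```python
import numpy as np

rng = np.random.default_rng(3)

# Bootstrap particle filter for the state-space model
#   x_t = 0.9 x_{t-1} + N(0, 1),   y_t = x_t + N(0, 0.5^2).
T, N = 50, 2000
x_true = np.zeros(T)
for t in range(1, T):
    x_true[t] = 0.9 * x_true[t - 1] + rng.normal()
y = x_true + rng.normal(0, 0.5, T)

particles = rng.normal(size=N)
est = np.zeros(T)
for t in range(T):
    particles = 0.9 * particles + rng.normal(size=N)   # propagate
    logw = -0.5 * ((y[t] - particles) / 0.5) ** 2      # Gaussian log-weights
    w = np.exp(logw - logw.max())
    w /= w.sum()
    est[t] = (w * particles).sum()                     # filtering mean
    particles = particles[rng.choice(N, N, p=w)]       # multinomial resampling
print(np.abs(est - x_true).mean())
```

The multinomial resampling step here is the non-differentiable operation that work on differentiable particle filtering seeks to smooth.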
no code implementations • 24 Oct 2020 • Soufiane Hayou, Eugenio Clerico, Bobby He, George Deligiannidis, Arnaud Doucet, Judith Rousseau
Deep ResNet architectures have achieved state-of-the-art performance on many tasks.
1 code implementation • NeurIPS 2020 • Umut Şimşekli, Ozan Sener, George Deligiannidis, Murat A. Erdogdu
Despite its success in a wide range of applications, characterizing the generalization properties of stochastic gradient descent (SGD) in non-convex deep learning problems is still an important challenge.
3 code implementations • ICML 2020 • Rob Cornish, Anthony L. Caterini, George Deligiannidis, Arnaud Doucet
We show that normalising flows become pathological when used to model targets whose supports have complicated topologies.
no code implementations • 25 Sep 2019 • Rob Cornish, Anthony Caterini, George Deligiannidis, Arnaud Doucet
We argue that flow-based density models based on continuous bijections are limited in their ability to learn target distributions with complicated topologies, and propose localised generative flows (LGFs) to address this problem.
no code implementations • 3 Mar 2019 • Sebastian M. Schmon, Arnaud Doucet, George Deligiannidis
When the weights in a particle filter are not available analytically, standard resampling methods cannot be employed.
no code implementations • 5 Feb 2019 • Lawrence Middleton, George Deligiannidis, Arnaud Doucet, Pierre E. Jacob
We consider the approximation of expectations with respect to the distribution of a latent Markov process given noisy measurements.
1 code implementation • 28 Jan 2019 • Robert Cornish, Paul Vanetti, Alexandre Bouchard-Côté, George Deligiannidis, Arnaud Doucet
Bayesian inference via standard Markov Chain Monte Carlo (MCMC) methods is too computationally intensive to handle large datasets, since the cost per step usually scales like $\Theta(n)$ in the number of data points $n$.
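The Θ(n) per-step cost is visible in even the simplest Metropolis-Hastings sampler, where every accept/reject decision requires a full pass over the data (a toy sketch with a Gaussian mean and flat prior; the model and step size are assumptions):

```python
import numpy as np

rng = np.random.default_rng(4)

# Random-walk Metropolis for the posterior mean of N(theta, 1) data.
n = 10_000
data = rng.normal(2.0, 1.0, n)

def loglik(theta):
    # One call = one full pass over all n observations: the Theta(n) cost.
    return -0.5 * np.sum((data - theta) ** 2)

theta, ll = 0.0, loglik(0.0)
samples = []
for _ in range(2000):
    prop = theta + rng.normal(0, 0.02)
    ll_prop = loglik(prop)                 # Theta(n) work at every step
    if np.log(rng.uniform()) < ll_prop - ll:
        theta, ll = prop, ll_prop
    samples.append(theta)
print(np.mean(samples[1000:]))   # posterior mean, near the sample mean
```

Subsampling and delayed-acceptance schemes aim to replace that full-data likelihood evaluation with cheaper surrogates while preserving the correct stationary distribution.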