no code implementations • 23 May 2024 • Antoine Gonon, Nicolas Brisebarre, Elisa Riccietti, Rémi Gribonval
Analyzing the behavior of ReLU neural networks often hinges on understanding the relationships between their parameters and the functions they implement.
1 code implementation • 21 May 2024 • Sibylle Marcotte, Rémi Gribonval, Gabriel Peyré
Conservation laws are well-established in the context of Euclidean gradient flow dynamics, notably for linear or ReLU neural network training.
no code implementations • 15 Dec 2023 • Ayoub Belhadji, Rémi Gribonval
In this work, we undertake a close examination of CL-OMPR to circumvent its limitations.
no code implementations • 9 Dec 2023 • Ayoub Belhadji, Rémi Gribonval
In the context of sketching for compressive mixture modeling, we revisit existing proofs of the Restricted Isometry Property of sketching operators with respect to certain mixture models.
1 code implementation • 8 Nov 2023 • Titouan Vayer, Etienne Lasalle, Rémi Gribonval, Paulo Gonçalves
We consider the problem of learning a graph modeling the statistical relations of the $d$ variables from a dataset with $n$ samples $X \in \mathbb{R}^{n \times d}$.
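For intuition about this graph-learning setup (not the estimators proposed in the paper), here is a minimal partial-correlation baseline; the function name, chain-graph example, threshold, and sample size are all illustrative assumptions:

```python
import numpy as np

def partial_corr_graph(X, thresh):
    """Crude graph estimate from an (n, d) data matrix: compute the empirical
    precision matrix and keep edge (i, j) when the absolute partial
    correlation exceeds thresh.  (A simple stand-in for the penalized
    estimators studied in this line of work.)"""
    S = np.cov(X, rowvar=False)          # empirical covariance
    K = np.linalg.pinv(S)                # empirical precision
    d = np.sqrt(np.diag(K))
    P = -K / np.outer(d, d)              # partial correlations
    np.fill_diagonal(P, 0.0)
    return (np.abs(P) > thresh).astype(int)

# Chain-structured Gaussian data: 1 -- 2 -- 3, no direct 1 -- 3 edge.
rng = np.random.default_rng(1)
z = rng.normal(size=(20000, 3))
x1 = z[:, 0]
x2 = x1 + 0.5 * z[:, 1]
x3 = x2 + 0.5 * z[:, 2]
A = partial_corr_graph(np.column_stack([x1, x2, x3]), thresh=0.2)
```

With enough samples the estimated adjacency recovers the chain: edges (1, 2) and (2, 3), and no spurious (1, 3) edge, since the corresponding entry of the precision matrix is zero in the true model.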
1 code implementation • 2 Oct 2023 • Antoine Gonon, Nicolas Brisebarre, Elisa Riccietti, Rémi Gribonval
The versatility of the toolkit and its ease of implementation allow us to challenge the concrete promises of path-norm-based generalization bounds, by numerically evaluating the sharpest known bounds for ResNets on ImageNet.
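As a toy illustration of the quantity behind such bounds, the $\ell^1$ path-norm of a small bias-free ReLU network can be computed by propagating a vector of ones through the entrywise absolute weight matrices; this two-layer example and the function name are assumptions for illustration (the actual toolkit handles far more general architectures):

```python
import numpy as np

def l1_path_norm(weights):
    """L1 path-norm of a bias-free feedforward ReLU network: the sum over
    all input-output paths of |product of the weights along the path|.
    Equals 1^T |W_L| ... |W_1| 1, computed by propagating ones through
    the entrywise absolute weight matrices."""
    v = np.ones(weights[0].shape[1])   # one entry per input neuron
    for W in weights:
        v = np.abs(W) @ v
    return float(v.sum())

W1 = np.array([[1.0, -2.0],            # hidden x input
               [0.5,  0.0]])
W2 = np.array([[3.0, -1.0]])           # output x hidden
# paths: |3*1| + |3*(-2)| + |(-1)*0.5| + |(-1)*0| = 3 + 6 + 0.5 = 9.5
pn = l1_path_norm([W1, W2])
```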
no code implementations • 26 Jun 2023 • Clément Lalanne, Aurélien Garivier, Rémi Gribonval
We recover the result of Barber & Duchi (2014) stating that histogram estimators are optimal against Lipschitz distributions for the L2 risk under regular differential privacy, and we extend it to other norms and notions of privacy.
no code implementations • 5 Jun 2023 • Quoc-Tung Le, Elisa Riccietti, Rémi Gribonval
Then, the existence of a global optimum is proved for every concrete optimization problem involving a shallow sparse ReLU neural network of output dimension one.
no code implementations • 14 Feb 2023 • Clément Lalanne, Aurélien Garivier, Rémi Gribonval
The first one consists in privately estimating the empirical quantiles of the samples and using this result as an estimator of the quantiles of the distribution.
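One classical way to privately estimate a single empirical quantile, in the spirit of this first approach though not necessarily the paper's exact estimator, is the exponential mechanism applied to the intervals between sorted data points; known bounds [lo, hi] on the data are assumed:

```python
import numpy as np

def dp_quantile(x, q, eps, lo, hi, rng=None):
    """Exponential-mechanism estimate of the q-quantile of data assumed to
    lie in [lo, hi], with eps-differential privacy (Smith-style mechanism).
    Utility of a candidate value z: -|#{x_i <= z} - q*n| (sensitivity 1)."""
    rng = np.random.default_rng(rng)
    x = np.sort(np.clip(x, lo, hi))
    n = len(x)
    edges = np.concatenate(([lo], x, [hi]))
    lengths = np.maximum(np.diff(edges), 0.0)
    # on the k-th interval, #{x_i <= z} = k, so the utility is constant there
    utility = -np.abs(np.arange(n + 1) - q * n)
    with np.errstate(divide="ignore"):           # empty intervals get weight 0
        logw = np.log(lengths) + eps * utility / 2.0
    logw -= logw.max()
    w = np.exp(logw)
    k = rng.choice(n + 1, p=w / w.sum())
    return rng.uniform(edges[k], edges[k + 1])

data = np.random.default_rng(0).uniform(0.0, 1.0, 1000)
est = dp_quantile(data, q=0.5, eps=100.0, lo=0.0, hi=1.0, rng=1)
```

At a very weak privacy level (large eps) the estimate concentrates on the interval around the empirical median; tightening eps spreads the mass over neighboring intervals.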
no code implementations • 5 Oct 2022 • Clément Lalanne, Aurélien Garivier, Rémi Gribonval
In certain scenarios, we show that maintaining privacy results in a noticeable reduction in performance only when the level of privacy protection is very high.
1 code implementation • 28 Jul 2022 • Léon Zheng, Gilles Puy, Elisa Riccietti, Patrick Pérez, Rémi Gribonval
We introduce a regularization loss based on kernel mean embeddings with rotation-invariant kernels on the hypersphere (also known as dot-product kernels) for self-supervised learning of image representations.
no code implementations • 13 Jun 2022 • Luc Giffon, Rémi Gribonval
We explore the use of Optical Processing Units (OPU) to compute random Fourier features for sketching, and adapt the overall compressive clustering pipeline to this setting.
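A minimal CPU sketch of random-Fourier-feature sketching (the operation the OPU computes optically) might look as follows; the frequency distribution and scale are illustrative tuning choices:

```python
import numpy as np

def rff_sketch(X, Omega):
    """Empirical sketch of a dataset: the average of complex random Fourier
    features exp(i * Omega^T x) over the n samples.
    X: (n, d) data matrix, Omega: (d, m) frequency matrix -> (m,) sketch."""
    return np.exp(1j * (X @ Omega)).mean(axis=0)

rng = np.random.default_rng(0)
d, m, n = 2, 50, 10_000
Omega = rng.normal(scale=3.0, size=(d, m))     # frequency scale: a tuning choice
X = 0.1 * rng.normal(size=(n, d)) + np.array([1.0, -1.0])
z = rff_sketch(X, Omega)                       # fixed-size summary, independent of n
```

Compressive clustering then amounts to finding centroids whose mixture-of-Diracs sketch best matches z (e.g. via CL-OMPR, as in the entries above), without ever revisiting the full dataset.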
no code implementations • 24 May 2022 • Antoine Gonon, Nicolas Brisebarre, Rémi Gribonval, Elisa Riccietti
This is achieved using a new lower bound on the Lipschitz constant of the map that associates the parameters of ReLU networks to their realization, and an upper bound generalizing classical results.
no code implementations • 15 Feb 2022 • Clément Sébastien Lalanne, Clément Gastaud, Nicolas Grislain, Aurélien Garivier, Rémi Gribonval
We consider the differentially private estimation of multiple quantiles (MQ) of a distribution from a dataset, a key building block in modern data analysis.
no code implementations • 7 Dec 2021 • Yann Traonmilin, Rémi Gribonval, Samuel Vaiter
To perform recovery, we consider the minimization of a convex regularizer subject to a data fit constraint.
no code implementations • 1 Dec 2021 • Titouan Vayer, Rémi Gribonval
Based on the relations between the MMD and the Wasserstein distances, we provide guarantees for compressive statistical learning by introducing and studying the concept of Wasserstein regularity of the learning task, that is when some task-specific metric between probability distributions can be bounded by a Wasserstein distance.
1 code implementation • 4 Oct 2021 • Léon Zheng, Elisa Riccietti, Rémi Gribonval
Our main contribution is to prove that any $N \times N$ matrix having the so-called butterfly structure admits an essentially unique factorization into $J$ butterfly factors (where $N = 2^{J}$), and that the factors can be recovered by a hierarchical factorization method, which consists in recursively factorizing the considered matrix into two factors.
no code implementations • 4 Oct 2021 • Léon Zheng, Elisa Riccietti, Rémi Gribonval
In particular, in the case of fixed-support sparse matrix factorization, we give a general sufficient condition for identifiability based on rank-one matrix completability, and we derive from it a completion algorithm that can check whether this sufficient condition is satisfied and, if it is, recover the entries of the two sparse factors.
no code implementations • 20 Sep 2021 • Barbara Pascal, Patrice Abry, Nelly Pustelnik, Stéphane G. Roux, Rémi Gribonval, Patrick Flandrin
The present work aims to overcome these limitations by carefully crafting a functional that permits jointly estimating, in a single step, the reproduction number and the outliers introduced to model low-quality data.
no code implementations • 20 Jul 2021 • Pierre Stock, Rémi Gribonval
The overall objective of this paper is to introduce an embedding for ReLU neural networks of any depth, $\Phi(\theta)$, that is invariant to scalings and that provides a locally linear parameterization of the realization of the network.
1 code implementation • 29 Apr 2021 • Sibylle Marcotte, Amélie Barbe, Rémi Gribonval, Titouan Vayer, Marc Sebban, Pierre Borgnat, Paulo Gonçalves
Diffusing a graph signal at multiple scales requires computing the action of the exponential of several multiples of the Laplacian matrix.
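With SciPy, the action of the matrix exponential at several scales can be computed without ever forming exp(-tL), via `scipy.sparse.linalg.expm_multiply`; the path graph and scales below are illustrative:

```python
import numpy as np
from scipy.sparse import csgraph, csr_matrix
from scipy.sparse.linalg import expm_multiply

# Path graph on 5 nodes; L is its combinatorial Laplacian.
A = csr_matrix(np.diag(np.ones(4), 1) + np.diag(np.ones(4), -1))
L = csgraph.laplacian(A)

x = np.zeros(5)
x[2] = 1.0                              # impulse signal at the middle node
# exp(-t L) x for t on a grid {0.5, 1.0, 1.5, 2.0}, in a single call
scales = expm_multiply(-L, x, start=0.5, stop=2.0, num=4, endpoint=True)
```

Each row of `scales` is the signal diffused at one scale; heat diffusion on a graph preserves the total mass of the signal and keeps it nonnegative.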
2 code implementations • 5 Jan 2021 • Kilian Fatras, Younes Zine, Szymon Majewski, Rémi Flamary, Rémi Gribonval, Nicolas Courty
We notably argue that the minibatch strategy comes with appealing properties such as unbiased estimators, gradients and a concentration bound around the expectation, but also with limits: the minibatch OT is not a distance.
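A self-contained sketch of the minibatch estimator, restricted to 1-D samples where the exact OT cost has a closed form (sorting); the general case would call an OT solver on each pair of minibatches, and the batch size and batch count below are illustrative:

```python
import numpy as np

def w1_1d(a, b):
    """Exact 1-Wasserstein distance between two equal-size 1-D samples:
    the average absolute difference of the sorted values."""
    return float(np.mean(np.abs(np.sort(a) - np.sort(b))))

def minibatch_w1(x, y, batch, n_batches, rng=None):
    """Minibatch OT estimator: average the exact OT cost over random
    minibatch pairs.  Cheap, but a biased estimator of W1(x, y) and,
    as the paper stresses, the minibatch quantity is not a distance."""
    rng = np.random.default_rng(rng)
    vals = [w1_1d(rng.choice(x, batch), rng.choice(y, batch))
            for _ in range(n_batches)]
    return float(np.mean(vals))

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, 5000)   # W1 between these two Gaussians is 2
y = rng.normal(2.0, 1.0, 5000)
mb = minibatch_w1(x, y, batch=128, n_batches=50, rng=1)
```

The minibatch value slightly overestimates the true W1 (here 2) because of sampling noise within each batch, which is one face of the bias discussed above.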
no code implementations • 4 Aug 2020 • Rémi Gribonval, Antoine Chatalic, Nicolas Keriven, Vincent Schellekens, Laurent Jacques, Philip Schniter
This article considers "compressive learning," an approach to large-scale machine learning where datasets are massively compressed before learning (e.g., clustering, classification, or regression) is performed.
no code implementations • 17 Apr 2020 • Rémi Gribonval, Gilles Blanchard, Nicolas Keriven, Yann Traonmilin
We provide statistical learning guarantees for two unsupervised learning tasks in the context of compressive statistical learning, a general framework for resource-efficient large-scale learning that we introduced in a companion paper. The principle of compressive statistical learning is to compress a training collection, in one pass, into a low-dimensional sketch (a vector of random empirical generalized moments) that captures the information relevant to the considered learning task.
1 code implementation • 4 Nov 2019 • Sidharth Gupta, Rémi Gribonval, Laurent Daudet, Ivan Dokmanić
Our method simplifies the calibration of optical transmission matrices from a quadratic to a linear inverse problem by first recovering the phase of the measurements.
3 code implementations • 9 Oct 2019 • Kilian Fatras, Younes Zine, Rémi Flamary, Rémi Gribonval, Nicolas Courty
Optimal transport distances are powerful tools to compare probability distributions and have found many applications in machine learning.
3 code implementations • ICLR 2020 • Pierre Stock, Armand Joulin, Rémi Gribonval, Benjamin Graham, Hervé Jégou
In this paper, we address the problem of reducing the memory footprint of convolutional network architectures.
1 code implementation • NeurIPS 2019 • Sidharth Gupta, Rémi Gribonval, Laurent Daudet, Ivan Dokmanić
A signal of interest $\mathbf{\xi} \in \mathbb{R}^N$ is mixed by a random scattering medium to compute the projection $\mathbf{y} = \mathbf{A} \mathbf{\xi}$, with $\mathbf{A} \in \mathbb{C}^{M \times N}$ being a realization of a standard complex Gaussian iid random matrix.
no code implementations • 3 May 2019 • Rémi Gribonval, Gitta Kutyniok, Morten Nielsen, Felix Voigtlaender
We study the expressivity of deep neural networks.
1 code implementation • ICLR 2019 • Pierre Stock, Benjamin Graham, Rémi Gribonval, Hervé Jégou
Modern neural networks are over-parametrized.
1 code implementation • 17 Dec 2018 • Cassio Fraga Dantas, Rémi Gribonval
In this paper, we propose a way to combine two acceleration techniques for the $\ell_1$-regularized least squares problem: safe screening tests, which make it possible to eliminate useless dictionary atoms, and fast structured approximations of the dictionary matrix.
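The basic SAFE test of El Ghaoui et al., one of the simplest safe screening rules for this problem, can be sketched as follows; the random dictionary and regularization level are illustrative:

```python
import numpy as np

def safe_screen(D, y, lam):
    """Basic SAFE test (El Ghaoui et al.) for the Lasso
        min_x 0.5 * ||y - D x||^2 + lam * ||x||_1.
    Atom j can be safely discarded (its coefficient is provably 0 at the
    optimum) whenever
        |d_j^T y| < lam - ||d_j|| * ||y|| * (lam_max - lam) / lam_max,
    where lam_max = max_j |d_j^T y|.  Returns the mask of atoms to KEEP."""
    c = np.abs(D.T @ y)
    lam_max = c.max()
    thresh = lam - np.linalg.norm(D, axis=0) * np.linalg.norm(y) \
                 * (lam_max - lam) / lam_max
    return c >= thresh

rng = np.random.default_rng(0)
D = rng.normal(size=(50, 200))
D /= np.linalg.norm(D, axis=0)         # unit-norm atoms
y = D[:, 3] + 0.01 * rng.normal(size=50)
keep = safe_screen(D, y, lam=0.8 * np.abs(D.T @ y).max())
```

The atom that generated the signal always survives the test, while most uncorrelated atoms are eliminated before the solver ever runs; combining such tests with fast approximate dictionaries is the subject of the paper.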
1 code implementation • NeurIPS 2018 • Helena Peic Tukuljac, Antoine Deleforge, Rémi Gribonval
The approach operates directly in the parameter-space of echo locations and weights, and enables near-exact blind and off-grid echo retrieval from discrete-time measurements.
no code implementations • 27 Feb 2018 • Nicolas Keriven, Rémi Gribonval
In this paper, we address the question of information preservation in ill-posed, non-linear inverse problems, assuming that the measured data is close to a low-dimensional model set.
no code implementations • CVPR 2018 • Himalaya Jain, Joaquin Zepeda, Patrick Pérez, Rémi Gribonval
To work at scale, a complete image indexing system comprises two components: an inverted file index that restricts the actual search to a subset likely to contain most of the items relevant to the query, and an approximate distance computation mechanism to rapidly scan these lists.
no code implementations • ICCV 2017 • Himalaya Jain, Joaquin Zepeda, Patrick Pérez, Rémi Gribonval
For large-scale visual search, highly compressed yet meaningful representations of images are essential.
no code implementations • 22 Jun 2017 • Rémi Gribonval, Gilles Blanchard, Nicolas Keriven, Yann Traonmilin
We describe a general framework -- compressive statistical learning -- for resource-efficient large-scale learning: the training collection is compressed in one pass into a low-dimensional sketch (a vector of random empirical generalized moments) that captures the information relevant to the considered learning task.
no code implementations • 27 Oct 2016 • Nicolas Keriven, Nicolas Tremblay, Yann Traonmilin, Rémi Gribonval
We demonstrate empirically that CKM performs similarly to Lloyd-Max, for a sketch size proportional to the number of centroids times the ambient dimension, and independent of the size of the original dataset.
no code implementations • 10 Aug 2016 • Himalaya Jain, Patrick Pérez, Rémi Gribonval, Joaquin Zepeda, Hervé Jégou
This paper tackles the task of storing a large collection of vectors, such as visual descriptors, and of searching in it.
no code implementations • 9 Jun 2016 • Nicolas Keriven, Anthony Bourrier, Rémi Gribonval, Patrick Pérez
We propose a "compressive learning" framework where we estimate model parameters from a sketch of the training data.
no code implementations • 16 Nov 2015 • Gilles Puy, Nicolas Tremblay, Rémi Gribonval, Pierre Vandergheynst
On the contrary, the second strategy is adaptive but yields optimal results.
no code implementations • 24 Jun 2015 • Luc Le Magoarou, Rémi Gribonval
The computational cost of many signal processing and machine learning techniques is often dominated by the cost of applying certain linear operators to high-dimensional vectors.
no code implementations • 9 Mar 2015 • Matthias Seibert, Julian Wörmann, Rémi Gribonval, Martin Kleinsteuber
In many applications, it is also required that the filter responses are obtained in a timely manner, which can be achieved by filters with a separable structure.
no code implementations • 12 Dec 2014 • Antoine Bonnefoy, Valentin Emiya, Liva Ralaivola, Rémi Gribonval
Recent computational strategies based on screening tests have been proposed to accelerate algorithms addressing penalized sparse regression problems such as the Lasso.
no code implementations • 19 Jul 2014 • Rémi Gribonval, Rodolphe Jenatton, Francis Bach
A popular approach within the signal processing and machine learning communities consists in modelling signals as sparse linear combinations of atoms selected from a learned dictionary.
no code implementations • 20 Jun 2014 • Luc Le Magoarou, Rémi Gribonval
The resulting dictionary is in general a dense matrix, and its manipulation can be computationally costly both at the learning stage and later in the usage of this dictionary, for tasks such as sparse coding.
no code implementations • 6 Jun 2014 • Matthias Seibert, Julian Wörmann, Rémi Gribonval, Martin Kleinsteuber
The ability of having a sparse representation for a certain class of signals has many applications in data analysis, image processing, and other research fields.
no code implementations • 20 Mar 2014 • Matthias Seibert, Martin Kleinsteuber, Rémi Gribonval, Rodolphe Jenatton, Francis Bach
The main goal of this paper is to provide a sample complexity estimate that controls to what extent the empirical average deviates from the cost function.
no code implementations • 17 Mar 2014 • Cagdas Bilen, Gilles Puy, Rémi Gribonval, Laurent Daudet
We investigate the methods that simultaneously enforce sparsity and low-rank structure in a matrix as often employed for sparse phase retrieval problems or phase calibration problems in compressive sensing.
no code implementations • 13 Dec 2013 • Rémi Gribonval, Rodolphe Jenatton, Francis Bach, Martin Kleinsteuber, Matthias Seibert
Many modern tools in machine learning and signal processing, such as sparse dictionary learning, principal component analysis (PCA), non-negative matrix factorization (NMF), $K$-means clustering, etc., rely on the factorization of a matrix obtained by concatenating high-dimensional vectors from a training collection.
1 code implementation • 19 Dec 2009 • David K Hammond, Pierre Vandergheynst, Rémi Gribonval
We propose a novel method for constructing wavelet transforms of functions defined on the vertices of an arbitrary finite weighted graph.
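For a small graph, the wavelet atoms can be computed directly from the Laplacian eigendecomposition (the paper's contribution includes a fast Chebyshev-polynomial scheme that avoids this dense computation); the ring graph and kernel below are illustrative:

```python
import numpy as np

def sgw_atoms(W, g, t):
    """Spectral graph wavelet atoms at scale t on a graph with adjacency W:
    psi_{t,n} = U g(t * Lambda) U^T delta_n, computed here by a dense
    eigendecomposition.  Column n of the result is the wavelet centered
    at vertex n."""
    L = np.diag(W.sum(axis=1)) - W               # combinatorial Laplacian
    lam, U = np.linalg.eigh(L)
    return U @ np.diag(g(t * lam)) @ U.T

# Ring graph on 6 nodes, band-pass kernel g(x) = x * exp(-x) (g(0) = 0).
W = np.zeros((6, 6))
for i in range(6):
    W[i, (i + 1) % 6] = W[(i + 1) % 6, i] = 1.0
Psi = sgw_atoms(W, lambda x: x * np.exp(-x), t=1.0)
```

Because g(0) = 0, each wavelet atom has zero mean (the constant eigenvector is annihilated), which is the graph analogue of the vanishing-moment property of classical wavelets.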
Functional Analysis; Information Theory; MSC: 42C40, 65T90