Search Results for author: Yann Traonmilin

Found 8 papers, 1 paper with code

Batch-less stochastic gradient descent for compressive learning of deep regularization for image denoising

no code implementations • 2 Oct 2023 • Hui Shi, Yann Traonmilin, J-F Aujol

Thanks to the maximum a posteriori Bayesian framework, such a regularizer can be systematically linked with the distribution of the data.

Image Denoising
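As a reminder of the standard MAP link the abstract alludes to (a textbook formulation in generic notation, not taken from the paper): for Gaussian denoising $y = x + \varepsilon$ with prior $p(x)$ and noise variance $\sigma^2$,

```latex
\hat{x}_{\mathrm{MAP}}
  = \arg\max_x \, p(x \mid y)
  = \arg\min_x \, \frac{1}{2\sigma^2}\|y - x\|_2^2 + R(x),
\qquad R(x) = -\log p(x),
```

so the regularizer $R$ is determined, up to constants, by the data distribution.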

A theory of optimal convex regularization for low-dimensional recovery

no code implementations • 7 Dec 2021 • Yann Traonmilin, Rémi Gribonval, Samuel Vaiter

To perform recovery, we consider the minimization of a convex regularizer subject to a data fit constraint.
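The constrained formulation referred to above is, in standard low-dimensional-recovery notation ($A$ the measurement operator, $y$ the observations, $\epsilon$ a noise bound; the symbols are illustrative assumptions, not taken verbatim from the paper):

```latex
\hat{x} \in \arg\min_{x} \; R(x)
\quad \text{subject to} \quad \|Ax - y\|_2 \le \epsilon,
```

with $R$ the convex regularizer whose optimality the paper studies.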

Projected gradient descent for non-convex sparse spike estimation

no code implementations • 12 May 2020 • Yann Traonmilin, Jean-François Aujol, Arthur Leclaire

We propose a new algorithm for sparse spike estimation from Fourier measurements.
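The paper works off the grid, over spike positions and amplitudes; as a loose on-grid analogue only (iterative hard thresholding on a discretized signal, not the paper's method), projected gradient descent from partial Fourier measurements might be sketched as:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 64, 32, 3  # grid size, number of Fourier measurements, number of spikes

# Partial Fourier measurement operator: random rows of the DFT matrix
rows = rng.choice(n, size=m, replace=False)
F = np.exp(-2j * np.pi * np.outer(rows, np.arange(n)) / n) / np.sqrt(m)

x_true = np.zeros(n)
x_true[[5, 20, 40]] = [1.0, -0.7, 0.5]  # on-grid "spikes"
y = F @ x_true

def projected_gradient_descent(y, F, k, steps=300, lr=0.5):
    x = np.zeros(F.shape[1], dtype=complex)
    for _ in range(steps):
        x = x + lr * F.conj().T @ (y - F @ x)   # gradient step on the data fit
        small = np.argsort(np.abs(x))[:-k]      # projection: keep the k largest entries
        x[small] = 0.0
    return x

x_hat = projected_gradient_descent(y, F, k)
```

The projection step enforces k-sparsity, the discrete counterpart of the non-convex spike constraint; the off-the-grid setting replaces it with a projection on parametrized spike measures.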

Statistical Learning Guarantees for Compressive Clustering and Compressive Mixture Modeling

no code implementations • 17 Apr 2020 • Rémi Gribonval, Gilles Blanchard, Nicolas Keriven, Yann Traonmilin

We provide statistical learning guarantees for two unsupervised learning tasks in the context of compressive statistical learning, a general framework for resource-efficient large-scale learning that we introduced in a companion paper. The principle of compressive statistical learning is to compress a training collection, in one pass, into a low-dimensional sketch (a vector of random empirical generalized moments) that captures the information relevant to the considered learning task.

Clustering
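A minimal sketch of the compression step described above, using random Fourier moments as the generalized moments (frequencies drawn i.i.d. Gaussian; all sizes and names here are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def sketch(X, Omega):
    """One-pass empirical sketch: average of random Fourier moments exp(i <omega, x>)."""
    return np.exp(1j * X @ Omega).mean(axis=0)

d, m = 2, 50                       # ambient dimension, sketch size
Omega = rng.normal(size=(d, m))    # random frequencies

small = rng.normal(size=(1_000, d))    # two training collections of very
large = rng.normal(size=(100_000, d))  # different sizes ...

z_small = sketch(small, Omega)
z_large = sketch(large, Omega)     # ... compress to the same fixed dimension m
```

The sketch dimension is fixed in advance, independent of the number of samples; learning then proceeds from the sketch alone. For compressive K-means (further down this list), the sketch size scales with the number of centroids times the ambient dimension.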

Compressive Statistical Learning with Random Feature Moments

no code implementations • 22 Jun 2017 • Rémi Gribonval, Gilles Blanchard, Nicolas Keriven, Yann Traonmilin

We describe a general framework -- compressive statistical learning -- for resource-efficient large-scale learning: the training collection is compressed in one pass into a low-dimensional sketch (a vector of random empirical generalized moments) that captures the information relevant to the considered learning task.

Clustering

Compressive K-means

no code implementations • 27 Oct 2016 • Nicolas Keriven, Nicolas Tremblay, Yann Traonmilin, Rémi Gribonval

We demonstrate empirically that CKM performs similarly to Lloyd-Max, for a sketch size proportional to the number of centroids times the ambient dimension, and independent of the size of the original dataset.

Clustering • General Classification

Phase Unmixing: Multichannel Source Separation with Magnitude Constraints

no code implementations • 30 Sep 2016 • Antoine Deleforge, Yann Traonmilin

We consider the problem of estimating the phases of K mixed complex signals from a multichannel observation, when the mixing matrix and signal magnitudes are known.
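One naive way to approach the stated problem (a sketch only, not the paper's algorithm; the symbols M, K, A, r are assumptions) is to alternate projections between the data-fit set {z : Az = x} and the non-convex magnitude set {|z_k| = r_k}:

```python
import numpy as np

rng = np.random.default_rng(1)
M, K = 2, 3  # channels, sources: fewer channels than sources

A = rng.normal(size=(M, K)) + 1j * rng.normal(size=(M, K))  # known mixing matrix
r = np.array([1.0, 0.5, 2.0])                               # known magnitudes
z_true = r * np.exp(1j * rng.uniform(0, 2 * np.pi, size=K))
x = A @ z_true                                              # multichannel observation

A_pinv = np.linalg.pinv(A)
z = r * np.exp(1j * rng.uniform(0, 2 * np.pi, size=K))      # random phase initialization
for _ in range(500):
    z = z - A_pinv @ (A @ z - x)        # project onto the data-fit set {z : Az = x}
    z = r * np.exp(1j * np.angle(z))    # project back onto the magnitude constraints
```

Because the magnitude set is non-convex, such alternating projections need not converge to the true phases; analyzing and solving this estimation problem properly is the subject of the paper.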
