no code implementations • 30 Jan 2022 • Daniele Calandriello, Luigi Carratino, Alessandro Lazaric, Michal Valko, Lorenzo Rosasco
Computing a Gaussian process (GP) posterior has a computational cost that is cubic in the number of historical points.
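To make the cubic bottleneck concrete, here is a minimal NumPy sketch of an exact GP posterior at a single test point; the function name, noise level, and inputs (`K`, `y`, `k_star`, `k_ss`) are illustrative, not the paper's code.

```python
import numpy as np

def gp_posterior_1pt(K, y, k_star, k_ss, noise=1e-2):
    """Exact GP posterior mean and variance at one test point.
    The Cholesky factorization of the n x n kernel matrix K is the
    O(n^3) bottleneck; everything after it is O(n^2) or cheaper."""
    n = K.shape[0]
    L = np.linalg.cholesky(K + noise * np.eye(n))        # O(n^3)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))  # two triangular solves
    mean = k_star @ alpha                                # posterior mean
    v = np.linalg.solve(L, k_star)
    var = k_ss - v @ v                                   # posterior variance
    return mean, var
```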
1 code implementation • 17 Jan 2022 • Giacomo Meanti, Luigi Carratino, Ernesto de Vito, Lorenzo Rosasco
Our analysis shows the benefit of the proposed approach, which we incorporate into a library for large-scale kernel methods to derive adaptively tuned solutions.
1 code implementation • 21 Oct 2021 • Antoine Chatalic, Luigi Carratino, Ernesto de Vito, Lorenzo Rosasco
Compressive learning is an approach to efficient large-scale learning based on sketching an entire dataset to a single mean embedding (the sketch), i.e., a vector of generalized moments.
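A minimal sketch of the idea, assuming a random Fourier feature map as the generalized moments (the map and its width `sigma` are illustrative; the paper's sketching operator may differ):

```python
import numpy as np

def sketch_dataset(X, m=500, sigma=1.0, rng=np.random.default_rng(0)):
    """Compress an (n, d) dataset into a single length-2m vector:
    the empirical mean of a random Fourier feature map, i.e. a
    vector of generalized moments. Learning then uses only this
    sketch, not the original data."""
    n, d = X.shape
    W = rng.normal(scale=1.0 / sigma, size=(d, m))             # random frequencies
    Z = X @ W
    phi = np.concatenate([np.cos(Z), np.sin(Z)], axis=1) / np.sqrt(m)
    return phi.mean(axis=0)                                    # the sketch
```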
no code implementations • NeurIPS 2021 • Luigi Carratino, Stefano Vigogna, Daniele Calandriello, Lorenzo Rosasco
We introduce ParK, a new large-scale solver for kernel ridge regression.
no code implementations • 16 Jun 2021 • Marco Rando, Luigi Carratino, Silvia Villa, Lorenzo Rosasco
In this paper, we introduce Ada-BKB (Adaptive Budgeted Kernelized Bandit), a no-regret Gaussian process optimization algorithm for functions on continuous domains that provably runs in $O(T^2 d_\text{eff}^2)$ time, where $d_\text{eff}$ is the effective dimension of the explored space and is typically much smaller than $T$.
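For reference, a common definition of the effective dimension is $d_\text{eff}(\lambda) = \mathrm{tr}\!\left(K (K + \lambda n I)^{-1}\right)$; the exact normalization used by Ada-BKB may differ. A small NumPy sketch of that quantity:

```python
import numpy as np

def effective_dimension(K, lam):
    """d_eff(lam) = tr(K (K + lam * n * I)^{-1}) for an n x n kernel
    matrix K; this exact computation is O(n^3) and is shown only to
    define the quantity the algorithm's bound depends on."""
    n = K.shape[0]
    return np.trace(K @ np.linalg.inv(K + lam * n * np.eye(n)))
```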
1 code implementation • NeurIPS 2020 • Giacomo Meanti, Luigi Carratino, Lorenzo Rosasco, Alessandro Rudi
Kernel methods provide an elegant and principled approach to nonparametric learning, but so far they could hardly be used in large-scale problems, since naïve implementations scale poorly with data size.
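As an illustration of why naïve implementations break down, a direct kernel ridge regression solver (a hypothetical helper, not the library's code) needs the full n x n kernel matrix and a dense solve:

```python
import numpy as np

def naive_krr(X, y, lam=1e-6, sigma=1.0):
    """Naive Gaussian-kernel ridge regression: O(n^2) memory for the
    kernel matrix and O(n^3) time for the solve, which is what makes
    it unusable once n reaches the millions."""
    n = X.shape[0]
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
    K = np.exp(-sq / (2 * sigma ** 2))                     # full n x n kernel matrix
    return np.linalg.solve(K + lam * n * np.eye(n), y)     # dense O(n^3) solve
```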
1 code implementation • 10 Jun 2020 • Luigi Carratino, Moustapha Cissé, Rodolphe Jenatton, Jean-Philippe Vert
We show that Mixup can be interpreted as a standard empirical risk minimization estimator subject to a combination of data transformation and random perturbation of the transformed data; a minimal sketch of the procedure follows this entry.
Ranked #75 on Image Classification on ObjectNet (using extra training data)
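A minimal sketch of the Mixup training transformation (the Beta parameter `alpha` and the random pairing are illustrative defaults):

```python
import numpy as np

def mixup_batch(x, y, alpha=0.2, rng=np.random.default_rng(0)):
    """Mixup: replace each example by a convex combination of two
    examples and of their one-hot labels, then minimize the usual
    empirical risk on the mixed batch."""
    lam = rng.beta(alpha, alpha)            # mixing coefficient
    perm = rng.permutation(len(x))          # random pairing of examples
    x_mix = lam * x + (1 - lam) * x[perm]
    y_mix = lam * y + (1 - lam) * y[perm]
    return x_mix, y_mix
```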
1 code implementation • ICML 2020 • Daniele Calandriello, Luigi Carratino, Alessandro Lazaric, Michal Valko, Lorenzo Rosasco
Gaussian processes (GPs) are one of the most successful frameworks for modeling uncertainty.
1 code implementation • 13 Mar 2019 • Daniele Calandriello, Luigi Carratino, Alessandro Lazaric, Michal Valko, Lorenzo Rosasco
Moreover, we show that our procedure selects at most $\tilde{O}(d_\text{eff})$ points, where $d_\text{eff}$ is the effective dimension of the explored space, which is typically much smaller than both $d$ and $t$.
1 code implementation • NeurIPS 2018 • Alessandro Rudi, Daniele Calandriello, Luigi Carratino, Lorenzo Rosasco
Leverage score sampling provides an appealing way to perform approximate computations for large matrices.
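For context, the (ridge) leverage scores in question can be defined as below; this exact computation is itself O(n^3), which is precisely what fast leverage score sampling schemes avoid (the lam * n normalization is one common convention):

```python
import numpy as np

def ridge_leverage_scores(K, lam):
    """Ridge leverage scores l_i = (K (K + lam * n * I)^{-1})_{ii}.
    Sampling columns of K proportionally to these scores gives
    accurate low-rank approximations with relatively few samples."""
    n = K.shape[0]
    return np.diag(K @ np.linalg.inv(K + lam * n * np.eye(n)))
```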
no code implementations • NeurIPS 2018 • Luigi Carratino, Alessandro Rudi, Lorenzo Rosasco
Sketching and stochastic gradient methods are arguably the most common techniques used to derive efficient large-scale learning algorithms.
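A minimal sketch combining the two techniques, using random Fourier features as the sketch and plain SGD with squared loss (step size, feature count, and epochs are illustrative):

```python
import numpy as np

def sgd_random_features(X, y, m=200, lr=0.1, epochs=5, sigma=1.0,
                        rng=np.random.default_rng(0)):
    """Sketch the kernel with m random Fourier features, then fit the
    resulting linear model by stochastic gradient descent."""
    d = X.shape[1]
    W = rng.normal(scale=1.0 / sigma, size=(d, m))
    b = rng.uniform(0.0, 2 * np.pi, size=m)
    phi = lambda A: np.sqrt(2.0 / m) * np.cos(A @ W + b)   # feature map
    w = np.zeros(m)
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            z = phi(X[i:i + 1])[0]
            w -= lr * (z @ w - y[i]) * z                    # squared-loss SGD step
    return W, b, w                                          # predict with phi(Xnew) @ w
```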
4 code implementations • NeurIPS 2017 • Alessandro Rudi, Luigi Carratino, Lorenzo Rosasco
In this paper, we take a substantial step in scaling up kernel methods, proposing FALKON, a novel algorithm that can efficiently process millions of points.
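FALKON combines a Nyström approximation with a preconditioned iterative solver; the sketch below shows only the Nyström part, with a direct solve standing in for the paper's preconditioned conjugate gradient (center count `M` and jitter are illustrative):

```python
import numpy as np

def nystrom_krr(X, y, M=1000, lam=1e-6, sigma=1.0, rng=np.random.default_rng(0)):
    """Kernel ridge regression restricted to M << n Nystrom centers:
    roughly O(n M^2) time and O(n M) memory instead of O(n^3) / O(n^2)."""
    def gauss(A, B):
        sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-sq / (2 * sigma ** 2))
    n = X.shape[0]
    centers = X[rng.choice(n, size=M, replace=False)]   # uniform Nystrom centers
    KnM = gauss(X, centers)                             # (n, M) cross-kernel
    KMM = gauss(centers, centers)                       # (M, M) kernel on centers
    A = KnM.T @ KnM + lam * n * KMM
    alpha = np.linalg.solve(A + 1e-8 * np.eye(M), KnM.T @ y)
    return centers, alpha                               # predict: gauss(Xnew, centers) @ alpha
```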