no code implementations • 2 May 2024 • Liron Mor Yosef, Shashanka Ubaru, Lior Horesh, Haim Avron
In this paper, we present a quantum algorithm for approximating multivariate traces, i.e., the traces of matrix products.
no code implementations • 4 Feb 2024 • Oria Gruber, Haim Avron
In this work, we focus on investigating the implicit bias originating from weight initialization.
no code implementations • 7 Sep 2022 • Boris Shustin, Haim Avron, Barak Sober
If some of the components are given analytically (e.g., if the cost function and its gradient are given explicitly, or if the tangent spaces can be computed), the algorithm can easily be adapted to use the exact expressions instead of the approximations.
no code implementations • 10 Feb 2022 • Paz Fink Shustin, Shashanka Ubaru, Vasileios Kalantzis, Lior Horesh, Haim Avron
In this paper, we present a novel surrogate model for representation learning and uncertainty quantification, which aims to deal with data of moderate to high dimensions.
no code implementations • 7 Feb 2022 • Insu Han, Amir Zandieh, Haim Avron
Our proposed GZK family generalizes the zonal kernels (i.e., dot-product kernels on the unit sphere) by introducing radial factors into their Gegenbauer series expansion, and it includes a wide range of ubiquitous kernel functions, such as all dot-product kernels as well as the Gaussian and the recently introduced Neural Tangent kernels.
no code implementations • 31 Jan 2022 • Shany Shumeli, Petros Drineas, Haim Avron
Given a low-rank perturbation to a matrix, we argue that a low-rank approximate correction to the (inverse) square root exists.
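The claim can be checked numerically. Below is a minimal NumPy illustration (not the paper's algorithm), assuming a symmetric positive-definite matrix and a rank-one perturbation: the change in the matrix square root has rapidly decaying singular values, i.e., it is numerically low-rank.

```python
import numpy as np

def psd_sqrt(A):
    # Symmetric PSD square root via eigendecomposition: V diag(sqrt(w)) V^T.
    w, V = np.linalg.eigh(A)
    return (V * np.sqrt(np.clip(w, 0, None))) @ V.T

rng = np.random.default_rng(0)
n = 50
B = rng.standard_normal((n, n))
A = B @ B.T + n * np.eye(n)              # well-conditioned PSD matrix
u = rng.standard_normal((n, 1))

# Change in the square root under the rank-one perturbation u u^T.
E = psd_sqrt(A + u @ u.T) - psd_sqrt(A)

# E solves the Sylvester equation S1 E + E S0 = u u^T with PSD S0, S1,
# so its singular values decay rapidly: it admits a low-rank approximation.
s = np.linalg.svd(E, compute_uv=False)
```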
2 code implementations • 28 Nov 2021 • Uria Mor, Yotam Cohen, Rafael Valdes-Mas, Denise Kviatcovsky, Eran Elinav, Haim Avron
Precision medicine is a clinical approach for disease prevention, detection and treatment, which considers each individual's genetic background, environment and lifestyle.
1 code implementation • NeurIPS 2021 • Amir Zandieh, Insu Han, Haim Avron, Neta Shoham, Chaewon Kim, Jinwoo Shin
To accelerate learning with the NTK, we design a near input-sparsity time approximation algorithm for the NTK by sketching the polynomial expansions of arc-cosine kernels: our sketch for the convolutional counterpart of the NTK (CNTK) can transform any image in time linear in the number of pixels.
no code implementations • 3 Apr 2021 • Insu Han, Haim Avron, Neta Shoham, Chaewon Kim, Jinwoo Shin
We combine random features of the arc-cosine kernels with a sketching-based algorithm that runs in time linear in both the number of data points and the input dimension.
no code implementations • 4 Jan 2021 • Paz Fink Shustin, Haim Avron
Our method is very much inspired by the well-known random Fourier features approach, which also builds low-rank approximations via numerical integration.
no code implementations • NeurIPS 2020 • Agniva Chowdhury, Palma London, Haim Avron, Petros Drineas
Linear programming (LP) is used in many machine learning applications, such as $\ell_1$-regularized SVMs, basis pursuit, nonnegative matrix factorization, etc.
no code implementations • 27 Sep 2020 • Neta Shoham, Haim Avron
Unfortunately, classical theory on optimal experimental design focuses on selecting examples in order to learn underparameterized (and thus non-interpolative) models, while modern machine learning models such as deep neural networks are overparameterized and are often trained to be interpolative.
1 code implementation • ICLR 2020 • Osman Asif Malik, Shashanka Ubaru, Lior Horesh, Misha E. Kilmer, Haim Avron
In recent years, a variety of graph neural networks (GNNs) have been successfully applied for representation learning and prediction on such graphs.
1 code implementation • ICML 2020 • Insu Han, Haim Avron, Jinwoo Shin
This paper studies how to sketch element-wise functions of low-rank matrices.
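One reason element-wise functions of low-rank matrices are sketchable is that polynomials of a low-rank matrix, applied entrywise, stay low-rank with explicit factors. The sketch below (an illustrative property check, not the paper's algorithm, with a hypothetical helper name `face_split`) shows that the entrywise square of a rank-3 matrix factors through row-wise Khatri-Rao products and so has rank at most 9.

```python
import numpy as np

def face_split(U1, U2):
    # Row-wise Khatri-Rao (face-splitting) product: row i is the
    # Kronecker product of row i of U1 with row i of U2.
    return np.einsum('ir,is->irs', U1, U2).reshape(U1.shape[0], -1)

rng = np.random.default_rng(6)
U = rng.standard_normal((40, 3))
V = rng.standard_normal((60, 3))
A = U @ V.T                              # rank-3 matrix

# Entrywise square of A, written as a product of explicit low-rank factors:
# (A * A)[i, j] = (sum_r U[i,r] V[j,r])^2 expands into the 9 cross terms.
A2 = face_split(U, U) @ face_split(V, V).T
```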
no code implementations • 5 Feb 2019 • Boris Shustin, Haim Avron
In this paper, we develop the geometric components required to perform Riemannian optimization on the generalized Stiefel manifold equipped with a non-standard metric, and we illustrate, both theoretically and numerically, the use of these components and the effect of Riemannian preconditioning when solving optimization problems on this manifold.
no code implementations • 20 Dec 2018 • Haim Avron, Michael Kapralov, Cameron Musco, Christopher Musco, Ameya Velingker, Amir Zandieh
We formalize this intuition by showing that, roughly, a continuous signal from a given class can be approximately reconstructed using a number of samples proportional to the *statistical dimension* of the allowed power spectrum of that class.
no code implementations • 15 Nov 2018 • Elizabeth Newman, Lior Horesh, Haim Avron, Misha Kilmer
To exemplify the elegant, matrix-mimetic algebraic structure of our $t$-NNs, we expand on recent work (Haber and Ruthotto, 2017) which interprets deep neural networks as discretizations of non-linear differential equations and introduces stable neural networks which promote superior generalization.
no code implementations • ICML 2017 • Haim Avron, Michael Kapralov, Cameron Musco, Christopher Musco, Ameya Velingker, Amir Zandieh
Qualitatively, our results are twofold: on the one hand, we show that random Fourier feature approximation can provably speed up kernel ridge regression under reasonable assumptions.
no code implementations • 7 Mar 2018 • Liron Mor-Yosef, Haim Avron
Principal component regression (PCR) is a useful method for regularizing linear regression.
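For reference, PCR in its basic form fits least squares in the span of the top-k principal directions of the data; taking k equal to the full dimension recovers ordinary least squares, while smaller k discards low-variance directions and thereby regularizes. A minimal NumPy sketch (illustrative, not the paper's method):

```python
import numpy as np

def pcr(X, y, k):
    # Principal component regression: project the data onto the top-k
    # right singular vectors of X, then solve least squares there.
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    Vk = Vt[:k].T                            # top-k principal directions
    w_pc = np.linalg.lstsq(X @ Vk, y, rcond=None)[0]
    return Vk @ w_pc                         # coefficients in the original space

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 10))
y = X @ rng.standard_normal(10) + 0.1 * rng.standard_normal(200)

w_reg = pcr(X, y, k=3)    # k < d discards low-variance directions (regularizes)
w_full = pcr(X, y, k=10)  # k = d recovers ordinary least squares
```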
1 code implementation • NeurIPS 2018 • Insu Han, Haim Avron, Jinwoo Shin
A large class of machine learning techniques requires the solution of optimization problems involving spectral functions of parametric matrices, e.g., the log-determinant and the nuclear norm.
no code implementations • 12 Nov 2017 • Remi R. Lam, Lior Horesh, Haim Avron, Karen E. Willcox
This work takes a different perspective and targets the construction of a correction model operator with implicit attributes.
no code implementations • 2 May 2017 • Gal Shulkind, Lior Horesh, Haim Avron
We consider a class of misspecified dynamical models where the governing term is only approximately known.
no code implementations • 10 Nov 2016 • Haim Avron, Kenneth L. Clarkson, David P. Woodruff
We study regularization both in a fairly broad setting, and in the specific context of the popular and widely used technique of ridge regularization; for the latter, as applied to each of these problems, we show algorithmic resource bounds in which the *statistical dimension* appears in places where in previous bounds the rank would appear.
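For a matrix A with singular values sigma_i and ridge parameter lambda, the statistical dimension is sd_lambda(A) = sum_i sigma_i^2 / (sigma_i^2 + lambda); it is always at most rank(A) and shrinks as lambda grows, which is why it can replace the rank in resource bounds. A minimal illustration (a direct computation, assuming a dense SVD is affordable):

```python
import numpy as np

def statistical_dimension(A, lam):
    # Ridge statistical dimension: sum_i sigma_i^2 / (sigma_i^2 + lam).
    s = np.linalg.svd(A, compute_uv=False)
    return np.sum(s**2 / (s**2 + lam))

rng = np.random.default_rng(2)
A = rng.standard_normal((100, 30))       # rank 30 with probability 1
sd = statistical_dimension(A, lam=10.0)
# sd is strictly below the rank, and decreases monotonically in lambda.
```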
1 code implementation • 10 Nov 2016 • Haim Avron, Kenneth L. Clarkson, David P. Woodruff
The preconditioner is based on random feature maps, such as random Fourier features, which have recently emerged as a powerful technique for speeding up and scaling the training of kernel-based methods, such as kernel ridge regression, by resorting to approximations.
no code implementations • 2 Aug 2016 • Jie Chen, Haim Avron, Vikas Sindhwani
We propose a novel class of kernels to alleviate the high computational cost of large-scale nonparametric learning with kernel methods.
1 code implementation • 3 Jun 2016 • Insu Han, Dmitry Malioutov, Haim Avron, Jinwoo Shin
Computation of the trace of a matrix function plays an important role in many scientific computing applications, including machine learning, computational physics (e.g., lattice quantum chromodynamics), network analysis, and computational biology (e.g., protein folding).
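The classical building block for such estimators is Hutchinson's method: for Rademacher vectors v, E[v^T M v] = tr(M), so the trace of f(A) can be estimated from matrix-vector products alone. A minimal sketch for f(A) = A^3 (illustrative only; it omits the polynomial approximation machinery used for general f):

```python
import numpy as np

def hutchinson_trace(matvec, n, num_samples=500, rng=None):
    # Hutchinson's estimator: average v^T M v over random +/-1 vectors v.
    rng = rng or np.random.default_rng(0)
    total = 0.0
    for _ in range(num_samples):
        v = rng.choice([-1.0, 1.0], size=n)
        total += v @ matvec(v)
    return total / num_samples

rng = np.random.default_rng(3)
A = rng.standard_normal((80, 80))
A = A @ A.T / 80                          # PSD test matrix

# Estimate tr(A^3) using only matrix-vector products with A.
est = hutchinson_trace(lambda v: A @ (A @ (A @ v)), 80)
exact = np.trace(np.linalg.matrix_power(A, 3))
```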
no code implementations • 29 Dec 2014 • Haim Avron, Vikas Sindhwani, Jiyan Yang, Michael Mahoney
These approximate feature maps arise as Monte Carlo approximations to integral representations of shift-invariant kernel functions (e.g., the Gaussian kernel).
no code implementations • NeurIPS 2014 • Haim Avron, Huy Nguyen, David Woodruff
Sketching is a powerful dimensionality reduction tool for accelerating statistical learning algorithms.
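The prototypical example is sketched least squares: multiply a tall n x d problem by a random m x n matrix with m much smaller than n, and solve the compressed problem instead. A minimal sketch with a Gaussian sketching matrix (illustrative; the paper itself concerns subspace embeddings for the polynomial kernel):

```python
import numpy as np

rng = np.random.default_rng(5)
n, d, m = 5000, 20, 400                  # tall problem, sketch size m << n
A = rng.standard_normal((n, d))
x_true = rng.standard_normal(d)
b = A @ x_true + 0.01 * rng.standard_normal(n)

# Gaussian sketch: solve the m x d compressed problem instead of n x d.
S = rng.standard_normal((m, n)) / np.sqrt(m)
x_sketch = np.linalg.lstsq(S @ A, S @ b, rcond=None)[0]
x_exact = np.linalg.lstsq(A, b, rcond=None)[0]
```

With m proportional to d, the sketched solution attains a residual within a small constant factor of the optimum with high probability.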
no code implementations • 3 Sep 2014 • Vikas Sindhwani, Haim Avron
In order to fully utilize "big data", it is often required to use "big models".
no code implementations • CVPR 2014 • Jiyan Yang, Vikas Sindhwani, Quanfu Fan, Haim Avron, Michael W. Mahoney
With the goal of reducing the training and testing cost of nonlinear kernel methods, several recent papers have proposed explicit embeddings of the input data into low-dimensional feature spaces, where fast linear methods can instead be used to generate approximate solutions.
no code implementations • NeurIPS 2013 • Haim Avron, Vikas Sindhwani, David Woodruff
Motivated by the desire to extend fast randomized techniques to nonlinear $l_p$ regression, we consider a class of structured regression problems.