Search Results for author: Tselil Schramm

Found 13 papers, 0 papers with code

Semidefinite programs simulate approximate message passing robustly

no code implementations · 15 Nov 2023 · Misha Ivkov, Tselil Schramm

Approximate message passing (AMP) is a family of iterative algorithms that generalize matrix power iteration.
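To make the connection to matrix power iteration concrete, here is a minimal sketch of an AMP iteration on a spiked Wigner matrix. The instance, the tanh denoiser, the spectral warm start, and all sizes are illustrative choices of mine, not the paper's setting; the point is only the shape of the iteration: a power-iteration step through a denoiser, plus an Onsager correction term.

```python
import numpy as np

rng = np.random.default_rng(0)
n, lam = 500, 2.0  # illustrative spiked-Wigner instance (sizes are my choice)

# Observation A = (lam/n) v v^T + symmetric Gaussian noise
v = rng.choice([-1.0, 1.0], size=n)
W = rng.normal(size=(n, n))
A = (lam / n) * np.outer(v, v) + (W + W.T) / np.sqrt(2 * n)

# Spectral warm start: top eigenvector, scaled to O(1) entries
x = np.linalg.eigh(A)[1][:, -1] * np.sqrt(n)
x_prev = np.zeros(n)

# AMP: like power iteration, but the iterate passes through a denoiser f
# and an Onsager correction subtracts a multiple of the previous iterate
for _ in range(20):
    f, f_prev = np.tanh(x), np.tanh(x_prev)
    b = np.mean(1.0 - f ** 2)  # Onsager coefficient: average derivative of f
    x, x_prev = A @ f - b * f_prev, x

overlap = abs(np.dot(np.tanh(x), v)) / n  # correlation with the planted signal
```

Dropping the `- b * f_prev` term and the denoiser recovers plain matrix power iteration, which is the sense in which AMP generalizes it.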

Spectral clustering in the Gaussian mixture block model

no code implementations · 29 Apr 2023 · Shuangping Li, Tselil Schramm

Gaussian mixture block models are distributions over graphs that strive to model modern networks: to generate a graph from such a model, we associate each vertex $i$ with a latent feature vector $u_i \in \mathbb{R}^d$ sampled from a mixture of Gaussians, and we add edge $(i, j)$ if and only if the feature vectors are sufficiently similar, in that $\langle u_i, u_j \rangle \ge \tau$ for a pre-specified threshold $\tau$.
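The generative model in the abstract translates directly into code. The sketch below samples a small Gaussian mixture block model and checks that edges are denser within a mixture component than across; the number of components, centers, and threshold $\tau$ are illustrative values of my own, not parameters from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, tau = 200, 3, 0.5                  # illustrative sizes and threshold (my choice)
centers = np.array([[2.0, 0.0, 0.0],
                    [-2.0, 0.0, 0.0]])   # a two-component Gaussian mixture

# Latent feature u_i: a random mixture component plus standard Gaussian noise
labels = rng.integers(0, 2, size=n)
U = centers[labels] + rng.normal(size=(n, d))

# Add edge (i, j) iff <u_i, u_j> >= tau
G = U @ U.T >= tau
np.fill_diagonal(G, False)

# Sanity check: edges should be denser within a component than across
same = labels[:, None] == labels[None, :]
off_diag = ~np.eye(n, dtype=bool)
within = G[same & off_diag].mean()
across = G[~same].mean()
```

The `within > across` gap is exactly what makes spectral clustering of the adjacency matrix plausible in this model.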

Clustering

The Franz-Parisi Criterion and Computational Trade-offs in High Dimensional Statistics

no code implementations · 19 May 2022 · Afonso S. Bandeira, Ahmed El Alaoui, Samuel B. Hopkins, Tselil Schramm, Alexander S. Wein, Ilias Zadik

We define a free-energy based criterion for hardness and formally connect it to the well-established notion of low-degree hardness for a broad class of statistical problems, namely all Gaussian additive models and certain models with a sparse planted signal.

Additive models

A Robust Spectral Algorithm for Overcomplete Tensor Decomposition

no code implementations · 5 Mar 2022 · Samuel B. Hopkins, Tselil Schramm, Jonathan Shi

We give a spectral algorithm for decomposing overcomplete order-4 tensors, so long as their components satisfy an algebraic non-degeneracy condition that holds for nearly all (all but an algebraic set of measure $0$) tensors over $(\mathbb{R}^d)^{\otimes 4}$ with rank $n \le d^2$.
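One way to see why generic components are non-degenerate: the square $d^2 \times d^2$ flattening of $\sum_i a_i^{\otimes 4}$ generically has rank exactly equal to the number of components whenever $n \le d^2$, since the vectors $a_i \otimes a_i$ are generically linearly independent. A small numerical check (sizes are my own choice, and this is not the paper's algorithm, only the flattening it operates on):

```python
import numpy as np

rng = np.random.default_rng(2)
d, n = 6, 20                      # overcomplete: n > d, but n <= d^2 = 36 (my choice)

# Random components a_1, ..., a_n in R^d
A = rng.normal(size=(n, d))

# Square flattening of T = sum_i a_i^{(x)4}: the d^2 x d^2 matrix
# M = sum_i vec(a_i a_i^T) vec(a_i a_i^T)^T
V = np.einsum('ia,ib->iab', A, A).reshape(n, d * d)
M = V.T @ V

# Generic components are non-degenerate: the flattening has rank exactly n
rank = np.linalg.matrix_rank(M)
```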

Tensor Decomposition

Robust Regression Revisited: Acceleration and Improved Estimation Rates

no code implementations · NeurIPS 2021 · Arun Jambulapati, Jerry Li, Tselil Schramm, Kevin Tian

For the general case of smooth GLMs (e.g. logistic regression), we show that the robust gradient descent framework of Prasad et al.

regression

Non-asymptotic approximations of neural networks by Gaussian processes

no code implementations · 17 Feb 2021 · Ronen Eldan, Dan Mikulincer, Tselil Schramm

We study the extent to which wide neural networks may be approximated by Gaussian processes when initialized with random weights.
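The phenomenon being quantified can be observed numerically: for a one-hidden-layer ReLU network with random Gaussian weights, the covariance of the outputs at two fixed inputs, taken over random initializations, approaches the arccosine (NNGP) kernel of the limiting Gaussian process. The architecture, widths, and sample counts below are illustrative choices of mine, and the paper's contribution is non-asymptotic rates, not this limiting statement.

```python
import numpy as np

rng = np.random.default_rng(3)
d, width, trials = 5, 2000, 2000          # illustrative sizes (my choice)
x, y = rng.normal(size=d), rng.normal(size=d)

# One-hidden-layer ReLU network with 1/sqrt(width) output scaling
def net(inp, W, a):
    return a @ np.maximum(W @ inp, 0.0) / np.sqrt(width)

# Empirical covariance of (f(x), f(y)) over random initializations
outs = np.empty((trials, 2))
for t in range(trials):
    W = rng.normal(size=(width, d))
    a = rng.normal(size=width)
    outs[t] = net(x, W, a), net(y, W, a)
emp_cov = np.cov(outs.T)

# Infinite-width limit: the arccosine (NNGP) kernel for ReLU
def k(u, v):
    nu, nv = np.linalg.norm(u), np.linalg.norm(v)
    theta = np.arccos(np.clip(u @ v / (nu * nv), -1.0, 1.0))
    return nu * nv * (np.sin(theta) + (np.pi - theta) * np.cos(theta)) / (2 * np.pi)

gp_cov = np.array([[k(x, x), k(x, y)],
                   [k(x, y), k(y, y)]])
```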

Gaussian Processes

Statistical Query Algorithms and Low-Degree Tests Are Almost Equivalent

no code implementations · 13 Sep 2020 · Matthew Brennan, Guy Bresler, Samuel B. Hopkins, Jerry Li, Tselil Schramm

Researchers currently use a number of approaches to predict and substantiate information-computation gaps in high-dimensional statistical estimation problems.

Two-sample testing

Computational Barriers to Estimation from Low-Degree Polynomials

no code implementations · 5 Aug 2020 · Tselil Schramm, Alexander S. Wein

One fundamental goal of high-dimensional statistics is to detect or recover planted structure (such as a low-rank matrix) hidden in noisy data.
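As a toy illustration of detecting planted low-rank structure with a low-degree polynomial: $\mathrm{tr}(A^k)$ is a degree-$k$ polynomial in the entries of $A$, and above the spectral threshold it separates a spiked Wigner matrix from a pure-noise one (the spike contributes $\approx (\lambda + 1/\lambda)^k$ on top of the bulk). The instance and parameters are my own choices for illustration, not an example from the paper.

```python
import numpy as np

rng = np.random.default_rng(4)
n, lam, k = 300, 3.0, 6           # illustrative parameters (my choice)

def wigner():
    W = rng.normal(size=(n, n))
    return (W + W.T) / np.sqrt(2 * n)

def spiked():
    v = rng.normal(size=n)
    v /= np.linalg.norm(v)
    return lam * np.outer(v, v) + wigner()

# tr(A^k): a degree-k polynomial test statistic in the entries of A
def stat(A):
    return np.trace(np.linalg.matrix_power(A, k))

null = [stat(wigner()) for _ in range(20)]      # no planted structure
planted = [stat(spiked()) for _ in range(20)]   # planted rank-one spike
```

With $\lambda = 3$ the two empirical distributions of the statistic are cleanly separated; the low-degree framework asks when *any* polynomial of modest degree can achieve such separation.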

High-dimensional estimation via sum-of-squares proofs

no code implementations · 30 Jul 2018 · Prasad Raghavendra, Tselil Schramm, David Steurer

On one hand, there is a growing body of work utilizing sum-of-squares proofs for recovering solutions to polynomial systems when the system is feasible.


(Nearly) Efficient Algorithms for the Graph Matching Problem on Correlated Random Graphs

no code implementations · NeurIPS 2019 · Boaz Barak, Chi-Ning Chou, Zhixian Lei, Tselil Schramm, Yueqi Sheng

Specifically, for every $\gamma>0$, we give a $n^{O(\log n)}$ time algorithm that, given a pair of $\gamma$-correlated $G(n, p)$ graphs $G_0, G_1$ with average degree between $n^{\varepsilon}$ and $n^{1/153}$ for $\varepsilon = o(1)$, recovers the "ground truth" permutation $\pi\in S_n$ that matches the vertices of $G_0$ to the vertices of $G_1$ in the way that minimizes the number of mismatched edges.
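The input distribution is easy to simulate. The sketch below builds a correlated Erdős–Rényi pair by one standard subsampling construction (keep each parent edge independently in each copy), hides the matching behind a random permutation, and verifies that the true permutation recovers the second graph exactly. The construction and parameters are my own illustrative choices, not necessarily the paper's exact correlation model.

```python
import numpy as np

rng = np.random.default_rng(5)
n, p, s = 100, 0.2, 0.9           # illustrative parameters (my choice)

# Parent graph G(n, p), stored on the upper triangle of the adjacency matrix
iu = np.triu_indices(n, k=1)
parent = rng.random(len(iu[0])) < p

# G_0, G_1: each keeps every parent edge independently with probability s
# (a standard subsampling construction of a correlated pair)
keep0 = parent & (rng.random(parent.shape) < s)
keep1 = parent & (rng.random(parent.shape) < s)
A0 = np.zeros((n, n), dtype=bool); A0[iu] = keep0; A0 |= A0.T
A1 = np.zeros((n, n), dtype=bool); A1[iu] = keep1; A1 |= A1.T

# Hide the ground truth behind a uniformly random permutation pi
pi = rng.permutation(n)
A1_obs = A1[np.ix_(pi, pi)]

# Applying the inverse permutation recovers A1 exactly, so the true pi
# attains the minimum possible number of mismatched edges against A1
inv = np.argsort(pi)
recovered = A1_obs[np.ix_(inv, inv)]
```

The algorithmic problem, of course, is recovering `pi` from `A0` and `A1_obs` alone.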

Graph Matching

Fast and robust tensor decomposition with applications to dictionary learning

no code implementations · 27 Jun 2017 · Tselil Schramm, David Steurer

We develop fast spectral algorithms for tensor decomposition that match the robustness guarantees of the best known polynomial-time algorithms for this problem based on the sum-of-squares (SOS) semidefinite programming hierarchy.

Dictionary Learning, Tensor Decomposition

Fast spectral algorithms from sum-of-squares proofs: tensor decomposition and planted sparse vectors

no code implementations · 8 Dec 2015 · Samuel B. Hopkins, Tselil Schramm, Jonathan Shi, David Steurer

For tensor decomposition, we give an algorithm with running time close to linear in the input size (with exponent $\approx 1.086$) that approximately recovers a component of a random 3-tensor over $\mathbb{R}^n$ of rank up to $\tilde \Omega(n^{4/3})$.
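The simplest spectral-style primitive in this area is tensor power iteration, the 3-tensor analogue of matrix power iteration: it recovers one component at a time. The sketch below runs it on a tiny orthogonal rank-$r$ tensor; the sizes are my own choices and sit far below the overcomplete $\tilde\Omega(n^{4/3})$ rank regime the paper actually handles.

```python
import numpy as np

rng = np.random.default_rng(6)
n, r = 30, 5                      # illustrative sizes, well below the paper's regime

# Symmetric rank-r 3-tensor with orthonormal components (the columns of Q)
Q, _ = np.linalg.qr(rng.normal(size=(n, r)))
T = np.einsum('ia,ja,ka->ijk', Q, Q, Q)

# Tensor power iteration x <- T(I, x, x), normalized: the contraction
# analogue of matrix power iteration, converging to a single component
x = rng.normal(size=n)
x /= np.linalg.norm(x)
for _ in range(50):
    x = np.einsum('ijk,j,k->i', T, x, x)
    x /= np.linalg.norm(x)

# x should align with one of the planted components, up to sign
overlap = np.max(np.abs(Q.T @ x))
```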

Tensor Decomposition

Symmetric Tensor Completion from Multilinear Entries and Learning Product Mixtures over the Hypercube

no code implementations · 9 Jun 2015 · Tselil Schramm, Benjamin Weitz

We apply our tensor completion algorithm to the problem of learning mixtures of product distributions over the hypercube, obtaining new algorithmic results.
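Why tensor *completion* enters: for a mixture of product distributions over the hypercube, only the multilinear (off-diagonal) entries of the moment tensors carry information, since any entry with a repeated index involves $x_j^2 = 1$. The sketch below checks that a multilinear third-moment entry matches the corresponding entry of the low-rank tensor $\frac{1}{2}\sum_k \mu_k^{\otimes 3}$; the two-component mixture and all parameters are illustrative choices of mine.

```python
import numpy as np

rng = np.random.default_rng(7)
n, m = 8, 200000                  # illustrative dimension and sample size (my choice)
mu = rng.uniform(-0.8, 0.8, size=(2, n))   # bias vectors of two product components

# Equal-weight mixture: pick a component k, then draw each coordinate
# independently with P(x_j = 1) = (1 + mu[k, j]) / 2, so E[x_j | k] = mu[k, j]
k = rng.integers(0, 2, size=m)
X = np.where(rng.random((m, n)) < (1 + mu[k]) / 2, 1.0, -1.0)

# Multilinear third-moment entry E[x_0 x_1 x_2]: by conditional independence
# it equals the corresponding entry of (1/2) * sum_k mu_k^{(x)3}
emp = np.mean(X[:, 0] * X[:, 1] * X[:, 2])
theory = 0.5 * (mu[0, :3].prod() + mu[1, :3].prod())
```

Completing the missing diagonal entries yields a full low-rank tensor whose decomposition exposes the component biases `mu`.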

Low-Rank Matrix Completion
