1 code implementation • 2 Jun 2023 • Yiran Wu, Feiran Jia, Shaokun Zhang, Hangyu Li, Erkang Zhu, Yue Wang, Yin Tat Lee, Richard Peng, Qingyun Wu, Chi Wang
Employing Large Language Models (LLMs) to address mathematical problems is an intriguing research endeavor, considering the abundance of math problems expressed in natural language across numerous science and engineering fields.
no code implementations • 16 Nov 2022 • Xinyuan Cao, Jingbang Chen, Li Chen, Chris Lambert, Richard Peng, Daniel Sleator
We study learning-augmented binary search trees (BSTs) and B-Trees via Treaps with composite priorities.
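A treap with a composite priority can be sketched in a few lines: each key's priority adds a (hypothetical) learned score to the usual random draw, so keys predicted to be accessed often rotate toward the root. This is an illustrative sketch, not the paper's construction:

```python
import random

class Node:
    def __init__(self, key, priority):
        self.key, self.priority = key, priority
        self.left = self.right = None

def rotate_right(t):
    l = t.left
    t.left, l.right = l.right, t
    return l

def rotate_left(t):
    r = t.right
    t.right, r.left = r.left, t
    return r

def insert(t, key, predicted_score=None):
    # Composite priority: blend randomness with a (hypothetical)
    # learned score so predicted-hot keys float toward the root.
    bonus = predicted_score if predicted_score is not None else 0.0
    return _insert(t, key, random.random() + bonus)

def _insert(t, key, priority):
    if t is None:
        return Node(key, priority)
    if key < t.key:
        t.left = _insert(t.left, key, priority)
        if t.left.priority > t.priority:
            t = rotate_right(t)
    else:
        t.right = _insert(t.right, key, priority)
        if t.right.priority > t.priority:
            t = rotate_left(t)
    return t

def inorder(t):
    return [] if t is None else inorder(t.left) + [t.key] + inorder(t.right)
```

The treap invariant (BST on keys, max-heap on priorities) is what lets a large predicted score translate directly into small search depth.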
no code implementations • 30 May 2021 • Li Chen, Richard Peng, Di Wang
Diffusion is a fundamental graph procedure and has been a basic building block in a wide range of theoretical and empirical applications such as graph partitioning and semi-supervised learning on graphs.
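One such diffusion, the lazy random walk, fits in a few lines of numpy; the example graph below is ours, chosen to show how mass stays concentrated on one side of a sparse cut:

```python
import numpy as np

def lazy_diffusion(adj, seed, steps):
    """Spread mass from a seed vertex with the lazy random-walk
    operator W = (I + A D^{-1}) / 2; mass is conserved each step."""
    deg = adj.sum(axis=0)
    W = 0.5 * (np.eye(len(adj)) + adj / deg)  # column-stochastic
    x = np.zeros(len(adj))
    x[seed] = 1.0
    for _ in range(steps):
        x = W @ x
    return x

# Two triangles joined by a single edge: after a few steps, most
# mass remains on the seed's side, hinting at the cut used in
# graph partitioning.
A = np.array([[0, 1, 1, 0, 0, 0],
              [1, 0, 1, 0, 0, 0],
              [1, 1, 0, 1, 0, 0],
              [0, 0, 1, 0, 1, 1],
              [0, 0, 0, 1, 0, 1],
              [0, 0, 0, 1, 1, 0]], dtype=float)
x = lazy_diffusion(A, seed=0, steps=3)
```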
no code implementations • 18 Jan 2021 • Yu Gao, Yang P. Liu, Richard Peng
We give an algorithm for computing exact maximum flows on graphs with $m$ edges and integer capacities in the range $[1, U]$ in $\widetilde{O}(m^{\frac{3}{2} - \frac{1}{328}} \log U)$ time.
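For contrast, the textbook Edmonds-Karp baseline (augmenting along shortest residual paths) can be sketched as follows; the paper's algorithm is far more involved and achieves a much better bound:

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp: repeatedly augment along shortest residual
    paths.  A classical baseline, nowhere near the paper's
    m^{3/2 - 1/328} bound."""
    n = len(cap)
    flow = [[0] * n for _ in range(n)]
    total = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph.
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and cap[u][v] - flow[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:
            return total
        # Find the bottleneck along the path, then push flow.
        v, aug = t, float("inf")
        while v != s:
            u = parent[v]
            aug = min(aug, cap[u][v] - flow[u][v])
            v = u
        v = t
        while v != s:
            u = parent[v]
            flow[u][v] += aug
            flow[v][u] -= aug  # residual back edge
            v = u
        total += aug
```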
no code implementations • NeurIPS 2020 • Jiezhong Qiu, Chi Wang, Ben Liao, Richard Peng, Jie Tang
Our result gives the first bound on the convergence rate of the co-occurrence matrix and the first sample complexity analysis in graph representation learning.
1 code implementation • ICML 2020 • Matthew Fahrbach, Gramoz Goranci, Richard Peng, Sushant Sachdeva, Chi Wang
As computing Schur complements is expensive, we give a nearly-linear time algorithm that generates a coarsened graph on the relevant vertices that provably matches the Schur complement in expectation in each iteration.
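The exact Schur complement that the coarsened graph must match can be computed directly with dense linear algebra, which is practical only at small scale and motivates the sampling approach:

```python
import numpy as np

def schur_complement(L, keep):
    """Exact Schur complement of a graph Laplacian onto the vertices
    in `keep`: eliminate the rest via L_kk - L_ke L_ee^{-1} L_ek.
    The result is again a Laplacian on the kept vertices."""
    n = L.shape[0]
    elim = [i for i in range(n) if i not in keep]
    L_kk = L[np.ix_(keep, keep)]
    L_ke = L[np.ix_(keep, elim)]
    L_ee = L[np.ix_(elim, elim)]
    return L_kk - L_ke @ np.linalg.solve(L_ee, L_ke.T)

# Path 0-1-2: eliminating the middle vertex leaves a single edge
# of conductance 1/2 between the endpoints (resistances add in series).
L = np.array([[1., -1., 0.], [-1., 2., -1.], [0., -1., 1.]])
S = schur_complement(L, keep=[0, 2])
```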
1 code implementation • 3 May 2020 • Yihe Dong, Yu Gao, Richard Peng, Ilya Razenshteyn, Saurabh Sawlani
We investigate the problem of efficiently computing optimal transport (OT) distances, which is equivalent to the node-capacitated minimum cost maximum flow problem in a bipartite graph.
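As a point of comparison, the standard entropic approximation of OT via Sinkhorn iterations fits in a few lines of numpy; note this is an approximation scheme, not the exact flow-based solver studied in the paper:

```python
import numpy as np

def sinkhorn(a, b, cost, reg=0.05, iters=500):
    """Entropic-regularized OT via Sinkhorn iterations: alternately
    rescale rows and columns of K = exp(-cost/reg) until the
    transport plan matches the marginals a and b."""
    K = np.exp(-cost / reg)
    u = np.ones_like(a)
    for _ in range(iters):
        v = b / (K.T @ u)
        u = a / (K @ v)
    plan = u[:, None] * K * v[None, :]
    return plan, (plan * cost).sum()
```

Smaller `reg` gives a sharper (closer to exact) plan at the price of slower, less stable iterations.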
1 code implementation • NeurIPS 2019 • Deeksha Adil, Richard Peng, Sushant Sachdeva
However, these algorithms often diverge for p > 3, and since the work of Osborne (1985), it has been an open problem whether there is an IRLS algorithm that is guaranteed to converge rapidly for p > 3.
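The classical IRLS scheme in question can be sketched as follows; it carries no safeguards, so for p well above 3 it may indeed diverge, which is exactly the failure mode the paper addresses:

```python
import numpy as np

def irls_lp(A, b, p, iters=50, eps=1e-8):
    """Basic IRLS for min_x ||Ax - b||_p: reweight each residual by
    |r_i|^(p-2) and re-solve a weighted least-squares problem."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]  # start from least squares
    for _ in range(iters):
        r = A @ x - b
        w = np.maximum(np.abs(r), eps) ** (p - 2)
        Aw = A * w[:, None]                    # rows scaled by weights
        x = np.linalg.solve(A.T @ Aw, Aw.T @ b)
    return x
```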
no code implementations • 4 Jun 2019 • Brian Bullins, Richard Peng
We provide improved convergence rates for various \emph{non-smooth} optimization problems via higher-order accelerated methods.
no code implementations • 21 Jan 2019 • Deeksha Adil, Rasmus Kyng, Richard Peng, Sushant Sachdeva
We give improved algorithms for the $\ell_{p}$-regression problem, $\min_{x} \|x\|_{p}$ such that $A x=b,$ for all $p \in (1, 2) \cup (2,\infty).$ Our algorithms obtain a high accuracy solution in $\tilde{O}_{p}(m^{\frac{|p-2|}{2p + |p-2|}}) \le \tilde{O}_{p}(m^{\frac{1}{3}})$ iterations, where each iteration requires solving an $m \times m$ linear system, $m$ being the dimension of the ambient space.
no code implementations • NeurIPS 2016 • Dehua Cheng, Richard Peng, Yan Liu, Ioakeim Perros
In this paper, we show ways of sampling intermediate steps of alternating minimization algorithms for computing low rank tensor CP decompositions, leading to the sparse alternating least squares (SPALS) method.
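A sketch of sampled ALS for a 3-way CP decomposition: each factor update solves a least-squares problem over a random subset of the Khatri-Rao rows. For simplicity this sketch samples uniformly, whereas SPALS samples by (approximate) leverage scores:

```python
import numpy as np

def pair(X, Y):
    """Khatri-Rao-style row products: row (x, y) -> X[x] * Y[y]."""
    return (X[:, None, :] * Y[None, :, :]).reshape(-1, X.shape[1])

def sampled_step(unfold, M, sample, rng):
    """One sampled ALS update: solve least squares on a random subset
    of the rows (uniform here; SPALS uses leverage scores)."""
    idx = rng.choice(M.shape[0], size=sample, replace=False)
    return np.linalg.lstsq(M[idx], unfold.T[idx], rcond=None)[0].T

def cp_als(T, rank, iters, sample, seed=0):
    rng = np.random.default_rng(seed)
    I, J, K = T.shape
    A = rng.normal(size=(I, rank))
    B = rng.normal(size=(J, rank))
    C = rng.normal(size=(K, rank))
    for _ in range(iters):
        A = sampled_step(T.reshape(I, -1), pair(B, C), sample, rng)
        B = sampled_step(T.transpose(1, 0, 2).reshape(J, -1),
                         pair(A, C), sample, rng)
        C = sampled_step(T.transpose(2, 0, 1).reshape(K, -1),
                         pair(A, B), sample, rng)
    return A, B, C
```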
no code implementations • 12 Feb 2015 • Dehua Cheng, Yu Cheng, Yan Liu, Richard Peng, Shang-Hua Teng
Our work is particularly motivated by the algorithmic problems for speeding up the classic Newton's method in applications such as computing the inverse square-root of the precision matrix of a Gaussian random field, as well as computing the $q$th-root transition (for $q\geq1$) in a time-reversible Markov model.
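The inverse square root can be computed by the Denman-Beavers variant of Newton's method; the dense matrix inverse inside each step is precisely the cost that sparse approximation machinery aims to avoid:

```python
import numpy as np

def inverse_sqrt(A, iters=30):
    """Denman-Beavers iteration for SPD A: Y_k -> A^{1/2} and
    Z_k -> A^{-1/2}.  Both sequences are updated from the previous
    pair, so the tuple assignment below is the simultaneous step."""
    Y, Z = A.copy(), np.eye(len(A))
    for _ in range(iters):
        Y, Z = 0.5 * (Y + np.linalg.inv(Z)), 0.5 * (Z + np.linalg.inv(Y))
    return Z
```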
no code implementations • 7 Nov 2014 • Richard Peng, He Sun, Luca Zanetti
In this paper we study variants of the widely used spectral clustering algorithm, which partitions a graph into $k$ clusters by (1) embedding the vertices of the graph into a low-dimensional space using the bottom eigenvectors of the Laplacian matrix, and (2) grouping the embedded points into $k$ clusters via a $k$-means algorithm.
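For $k = 2$ the pipeline collapses to the second Laplacian eigenvector (the Fiedler vector) plus a sign split, which stands in for the $k$-means step; the example graph is ours:

```python
import numpy as np

def spectral_bipartition(adj):
    """Two-cluster spectral clustering: embed vertices with the
    Fiedler vector of L = D - A, then split by sign."""
    deg = np.diag(adj.sum(axis=1))
    L = deg - adj
    vals, vecs = np.linalg.eigh(L)  # eigenvalues in ascending order
    fiedler = vecs[:, 1]
    return fiedler >= 0

# Two triangles joined by one edge: the sign split recovers them.
A = np.array([[0, 1, 1, 0, 0, 0],
              [1, 0, 1, 0, 0, 0],
              [1, 1, 0, 1, 0, 0],
              [0, 0, 1, 0, 1, 1],
              [0, 0, 0, 1, 0, 1],
              [0, 0, 0, 1, 1, 0]], dtype=float)
labels = spectral_bipartition(A)
```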
no code implementations • 20 Oct 2014 • Dehua Cheng, Yu Cheng, Yan Liu, Richard Peng, Shang-Hua Teng
random samples for $n$-dimensional Gaussian random fields with SDDM precision matrices.
no code implementations • 21 Aug 2014 • Michael B. Cohen, Yin Tat Lee, Cameron Musco, Christopher Musco, Richard Peng, Aaron Sidford
In addition to an improved understanding of uniform sampling, our main proof introduces a structural result of independent interest: we show that every matrix can be made to have low coherence by reweighting a small subset of its rows.
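Leverage scores (whose maximum is the coherence) can be computed from a thin QR factorization; the example row below is engineered to have leverage near 1, the regime where uniform sampling fails and where, per the result above, reweighting helps:

```python
import numpy as np

def leverage_scores(A):
    """Statistical leverage scores via thin QR: l_i = ||Q_i||_2^2.
    They sum to rank(A); their maximum (the coherence) governs how
    well uniform row sampling preserves the matrix."""
    Q, _ = np.linalg.qr(A)
    return (Q ** 2).sum(axis=1)

# One heavy, isolated row gets leverage ~1, so uniform sampling
# cannot afford to miss it.
rng = np.random.default_rng(0)
A = rng.normal(size=(100, 4))
A[0] = 1e3 * np.array([1.0, 0.0, 0.0, 0.0])
scores = leverage_scores(A)
```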