no code implementations • 6 May 2024 • Jose Blanchet, Peng Cui, Jiajin Li, Jiashuo Liu
Empirically, we validate the practical utility of our stability evaluation criterion across a host of real-world applications.
no code implementations • 11 Apr 2024 • He Chen, Jiajin Li, Anthony Man-Cho So
Despite the considerable success of Bregman proximal-type algorithms, such as mirror descent, in machine learning, a critical question remains: Can existing stationarity measures, often based on Bregman divergence, reliably distinguish between stationary and non-stationary points?
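For context, the mirror-descent update mentioned above can be sketched with the negative-entropy mirror map on the probability simplex; the objective, step size, and function names below are illustrative and not from the paper.

```python
import numpy as np

def mirror_descent_simplex(grad, x0, step, iters):
    """Entropic mirror descent on the probability simplex.

    The Bregman divergence induced by the negative-entropy mirror map
    turns the proximal step into a multiplicative (exponentiated-gradient)
    update followed by renormalization.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = x * np.exp(-step * grad(x))   # exponentiated-gradient step
        x = x / x.sum()                   # renormalize onto the simplex
    return x

# Minimize <c, x> over the simplex; the minimum puts all mass on argmin(c).
c = np.array([3.0, 1.0, 2.0])
x_star = mirror_descent_simplex(lambda x: c, np.ones(3) / 3, step=0.5, iters=200)
```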
no code implementations • 21 Mar 2024 • Jose Blanchet, Jiajin Li, Markus Pelger, Greg Zanotti
In this paper, we propose a novel conceptual framework to detect outliers using optimal transport with a concave cost function.
no code implementations • 10 Aug 2023 • Jose Blanchet, Daniel Kuhn, Jiajin Li, Bahar Taskesen
In the past few years, there has been considerable interest in two prominent approaches for Distributionally Robust Optimization (DRO): Divergence-based and Wasserstein-based methods.
2 code implementations • 12 Mar 2023 • Jiajin Li, Jianheng Tang, Lemin Kong, Huikang Liu, Jia Li, Anthony Man-Cho So, Jose Blanchet
This observation allows us to provide an approximation bound for the distance between the fixed-point set of BAPG and the critical point set of GW.
1 code implementation • NeurIPS 2023 • Lemin Kong, Jiajin Li, Jianheng Tang, Anthony Man-Cho So
Gromov-Wasserstein (GW) distance is a powerful tool for comparing and aligning probability distributions supported on different metric spaces.
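The GW objective for a fixed coupling can be evaluated in a few lines via the standard expansion of the square; this toy evaluator and its small example are illustrative, not code from the paper.

```python
import numpy as np

def gw_objective(D1, D2, pi):
    """Gromov-Wasserstein objective for a fixed coupling pi:
    sum over (i,j,k,l) of (D1[i,k] - D2[j,l])**2 * pi[i,j] * pi[k,l].
    D1, D2 are intra-space distance matrices; pi couples the two spaces.
    """
    p = pi.sum(axis=1)                      # first marginal
    q = pi.sum(axis=0)                      # second marginal
    # Expand the square: D1^2 terms + D2^2 terms - 2 * cross term.
    const = (D1 ** 2 @ p) @ p + (D2 ** 2 @ q) @ q
    cross = np.sum((D1 @ pi @ D2.T) * pi)
    return const - 2.0 * cross

# A perfect matching of two identical two-point spaces gives objective 0.
D = np.array([[0.0, 1.0], [1.0, 0.0]])
val = gw_objective(D, D, np.eye(2) / 2)
```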
1 code implementation • 30 Jan 2023 • Jianheng Tang, Weiqi Zhang, Jiajin Li, Kangfei Zhao, Fugee Tsung, Jia Li
As the graphs to be aligned are usually constructed from different sources, inconsistencies in structure and features between the two graphs are ubiquitous in real-world applications.
no code implementations • 28 Nov 2022 • Yiping Lu, Jiajin Li, Lexing Ying, Jose Blanchet
The optimal design of experiments typically involves solving an NP-hard combinatorial optimization problem.
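As an illustration of this combinatorial structure, here is a sketch of a standard greedy heuristic for D-optimal design (pick k of n candidate experiments to maximize the log-determinant of the information matrix); the candidate matrix and selection rule are illustrative, not the method proposed in the paper.

```python
import numpy as np

def greedy_d_optimal(X, k):
    """Greedily pick k rows of X (candidate experiments) to maximize
    log det of the information matrix X_S^T X_S (D-optimality).
    A heuristic for the NP-hard exact problem; the small ridge term
    keeps the matrix invertible before d rows have been selected.
    """
    n, d = X.shape
    chosen, M = [], 1e-6 * np.eye(d)
    for _ in range(k):
        best, best_gain = None, -np.inf
        Minv = np.linalg.inv(M)
        for i in range(n):
            if i in chosen:
                continue
            # log det(M + x x^T) - log det(M) = log(1 + x^T M^{-1} x)
            gain = np.log1p(X[i] @ Minv @ X[i])
            if gain > best_gain:
                best, best_gain = i, gain
        chosen.append(best)
        M = M + np.outer(X[best], X[best])
    return chosen

# Two large orthogonal candidates dominate the two small ones.
X = np.array([[10.0, 0.0], [0.0, 10.0], [1.0, 1.0], [0.5, 0.0]])
chosen = greedy_d_optimal(X, 2)
```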
no code implementations • 4 Oct 2022 • Jiajin Li, Sirui Lin, Jose Blanchet, Viet Anh Nguyen
Distributionally robust optimization has been shown to offer a principled way to regularize learning models.
no code implementations • 22 Sep 2022 • Jiajin Li, Linglingzhi Zhu, Anthony Man-Cho So
Specifically, we consider the setting where the primal function has a nonsmooth composite structure and the dual function possesses the Kurdyka-Lojasiewicz (KL) property with exponent $\theta \in [0, 1)$.
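For reference, the KL property with exponent $\theta$ is commonly stated as follows (this is the standard textbook definition, not a formulation specific to this paper): a proper closed function $\Phi$ satisfies it at $\bar{x}$ if there exist $c, \varepsilon > 0$ such that

```latex
\operatorname{dist}\bigl(0, \partial \Phi(x)\bigr)
  \;\ge\; c \,\bigl(\Phi(x) - \Phi(\bar{x})\bigr)^{\theta}
\quad
\text{for all } x \text{ near } \bar{x}
\text{ with } \Phi(\bar{x}) < \Phi(x) < \Phi(\bar{x}) + \varepsilon .
```

Smaller exponents generally yield faster convergence: $\theta = 0$ is typically associated with finite termination, $\theta \in (0, 1/2]$ with linear rates, and $\theta \in (1/2, 1)$ with sublinear rates.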
1 code implementation • 31 May 2022 • Jianheng Tang, Jiajin Li, Ziqi Gao, Jia Li
Graph Neural Networks (GNNs) are widely applied for graph anomaly detection.
no code implementations • 17 May 2022 • Jiajin Li, Jianheng Tang, Lemin Kong, Huikang Liu, Jia Li, Anthony Man-Cho So, Jose Blanchet
In this paper, we study the design and analysis of a class of efficient algorithms for computing the Gromov-Wasserstein (GW) distance tailored to large-scale graph learning tasks.
1 code implementation • 28 Jan 2022 • Lingxiao Li, Noam Aigerman, Vladimir G. Kim, Jiajin Li, Kristjan Greenewald, Mikhail Yurochkin, Justin Solomon
We present an end-to-end method to learn the proximal operator of a family of training problems, so that multiple local minima can be quickly obtained from initial guesses by iterating the learned operator, emulating the fast-converging proximal-point algorithm.
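For a quadratic objective the proximal operator has a closed form, so the proximal-point iteration being emulated can be sketched directly; this toy example (the matrix, right-hand side, and step size `lam`) is illustrative and not the learned-operator method of the paper.

```python
import numpy as np

def proximal_point_quadratic(A, b, x0, lam, iters):
    """Proximal-point algorithm for f(x) = 0.5 x^T A x - b^T x.
    For this quadratic, the proximal operator has the closed form
    prox_{lam f}(v) = (I + lam*A)^{-1} (v + lam*b); iterating it
    converges to the minimizer, i.e. the solution of A x = b.
    """
    d = len(b)
    M = np.linalg.inv(np.eye(d) + lam * A)
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = M @ (x + lam * b)
    return x

A = np.array([[2.0, 0.0], [0.0, 1.0]])
b = np.array([2.0, 3.0])
x = proximal_point_quadratic(A, b, np.zeros(2), lam=1.0, iters=100)
# x approaches the minimizer of f, the solution of A x = b
```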
no code implementations • NeurIPS 2021 • Carson Kent, Jiajin Li, Jose Blanchet, Peter W. Glynn
We propose a novel Frank-Wolfe (FW) procedure for the optimization of infinite-dimensional functionals of probability measures, a task that arises naturally in a wide range of areas, including statistical learning (e.g., variational inference) and artificial intelligence (e.g., generative adversarial networks).
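A minimal finite-dimensional sketch of the classic Frank-Wolfe scheme (linear minimization oracle plus the 2/(k+2) step size) may help fix ideas; the simplex domain and quadratic objective below are illustrative, not the infinite-dimensional functional setting of the paper.

```python
import numpy as np

def frank_wolfe_simplex(grad, x0, iters):
    """Frank-Wolfe on the probability simplex.
    Each step solves the linear minimization oracle, which for the
    simplex is a vertex (the coordinate with the smallest gradient),
    then moves toward it with the classic step size 2/(k+2).
    """
    x = np.asarray(x0, dtype=float)
    for k in range(iters):
        g = grad(x)
        s = np.zeros_like(x)
        s[np.argmin(g)] = 1.0            # LMO over the simplex
        gamma = 2.0 / (k + 2.0)
        x = (1 - gamma) * x + gamma * s  # convex combination stays feasible
    return x

# Minimize ||x - t||^2 over the simplex, with t already in the simplex.
t = np.array([0.2, 0.5, 0.3])
x = frank_wolfe_simplex(lambda x: 2 * (x - t), np.array([1.0, 0.0, 0.0]), 2000)
```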
no code implementations • NeurIPS 2021 • Jia Li, Jiajin Li, Yang Liu, Jianwei Yu, Yueting Li, Hong Cheng
In this paper, we consider an inverse problem in the graph learning domain: "given the graph representations smoothed by a Graph Convolutional Network (GCN), how can we reconstruct the input graph signal?"
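A toy linear version of this inverse problem can be sketched as one symmetric-normalized smoothing pass (no weights or nonlinearity, unlike a full GCN) inverted by least squares; the graph and signal here are illustrative, not the paper's method.

```python
import numpy as np

def smooth(A, X):
    """One linear GCN-style smoothing pass: H = D^{-1/2}(A+I)D^{-1/2} X
    (self-loops added, symmetric normalization; no weights/nonlinearity)."""
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    S = A_hat / np.sqrt(np.outer(d, d))
    return S @ X, S

# Reconstruct the input signal by solving S X = H in the least-squares sense.
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
X = np.array([[1.0], [2.0], [3.0]])
H, S = smooth(A, X)
X_rec = np.linalg.lstsq(S, H, rcond=None)[0]
```

For this small graph the smoothing operator is invertible, so the reconstruction is exact; in general the problem can be ill-posed and needs regularization.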
no code implementations • 1 Jan 2021 • Jiajin Li, Stephen Hwang, Luke Zhang, Jae Hoon Sul
Here, we propose CNV-Net, a novel approach for CNV detection using a six-layer convolutional neural network.
1 code implementation • NeurIPS 2020 • Jiajin Li, Caihua Chen, Anthony Man-Cho So
In this paper, we focus on a family of Wasserstein distributionally robust support vector machine (DRSVM) problems and propose two novel epigraphical projection-based incremental algorithms to solve them.
1 code implementation • NeurIPS 2020 • Jia Li, Tomasyu Yu, Jiajin Li, Honglei Zhang, Kangfei Zhao, Yu Rong, Hong Cheng, Junzhou Huang
In this work, we present Dirichlet Graph Variational Autoencoder (DGVAE) with graph cluster memberships as latent factors.
no code implementations • 26 Jun 2020 • Jiajin Li, Anthony Man-Cho So, Wing-Kin Ma
Many contemporary applications in signal processing and machine learning give rise to structured non-convex non-smooth optimization problems that can often be tackled by simple iterative methods quite effectively.
no code implementations • ICLR 2020 • Baoxiang Wang, Shuai Li, Jiajin Li, Siu On Chan
We analyze the Gambler's problem, a simple reinforcement learning problem in which the gambler can double or lose their bets until the target is reached.
1 code implementation • NeurIPS 2019 • Jiajin Li, Sen Huang, Anthony Man-Cho So
In this paper, we take a first step towards resolving the above difficulty by developing a first-order algorithmic framework for tackling a class of Wasserstein distance-based distributionally robust logistic regression (DRLR) problems.
1 code implementation • 9 May 2018 • Jiajin Li, Baoxiang Wang
Policy optimization on high-dimensional continuous control tasks is difficult because of the large variance of policy gradient estimators.
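To illustrate the variance issue, here is a sketch of per-sample REINFORCE gradient estimates in a two-armed bandit, with and without an average-reward baseline; the setup and numbers are illustrative, not the estimator proposed in the paper.

```python
import numpy as np

def reinforce_grads(rewards, actions, p, baseline=0.0):
    """Per-sample REINFORCE gradient estimates for a two-armed bandit
    with a Bernoulli policy parameterized by p = P(action 1).
    The score function is grad log pi(a) = (a - p) / (p * (1 - p));
    subtracting a baseline leaves the estimator unbiased in expectation
    but can shrink its variance substantially.
    """
    score = (actions - p) / (p * (1.0 - p))
    return (rewards - baseline) * score

rng = np.random.default_rng(0)
p = 0.3
a = rng.random(100_000) < p                              # sampled actions
r = np.where(a, 1.0, 0.2) + rng.normal(0, 0.1, a.size)   # arm 1 pays more
g_plain = reinforce_grads(r, a.astype(float), p)
g_base = reinforce_grads(r, a.astype(float), p, baseline=r.mean())
# Both estimators agree in mean; the baselined one has lower variance.
```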