no code implementations • 6 Mar 2024 • Rajdeep Haldar, Yue Xing, Qifan Song
The existence of adversarial attacks on machine learning models that are imperceptible to a human is still quite a mystery from a theoretical perspective.
no code implementations • 1 Feb 2024 • Yue Xing, Xiaofeng Lin, Namjoon Suh, Qifan Song, Guang Cheng
In practice, it is observed that transformer-based models can learn concepts in context in the inference stage.
no code implementations • 26 Jan 2024 • Yue Xing, Xiaofeng Lin, Qifan Song, Yi Xu, Belinda Zeng, Guang Cheng
Pre-training is known to generate universal representations for downstream tasks in large-scale deep learning such as large language models.
1 code implementation • 10 Nov 2023 • Jinwon Sohn, Qifan Song, Guang Lin
As data-driven decision processes become dominant in industrial applications, fairness-aware machine learning has attracted great attention in various areas.
no code implementations • 25 Oct 2023 • Wenjie Li, Qifan Song, Jean Honorio
In this work, we study the personalized federated $\mathcal{X}$-armed bandit problem, where the heterogeneous local objectives of the clients are optimized simultaneously in the federated learning paradigm.
no code implementations • 25 Oct 2023 • Ziyi Wang, Yujie Chen, Qifan Song, Ruqi Zhang
This paper investigates low-precision sampling via Stochastic Gradient Hamiltonian Monte Carlo (SGHMC) with low-precision and full-precision gradient accumulators for both strongly log-concave and non-log-concave distributions.
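As a rough illustration of the setting (not the paper's algorithm or its guarantees), the sketch below runs SGHMC on a 1-D standard normal target, optionally storing the parameter and momentum on a fixed-point low-precision grid. The grid spacing `delta`, step size, and friction are illustrative assumptions.

```python
import numpy as np

def quantize(x, delta=1 / 64):
    """Round to a fixed-point grid with spacing delta (low-precision storage)."""
    return np.round(x / delta) * delta

def sghmc_1d(grad_log_p, n_iter=20000, step=0.01, friction=0.1,
             low_precision=False, seed=0):
    """SGHMC sketch for a 1-D target; optionally keep the parameter and
    momentum in a low-precision fixed-point representation."""
    rng = np.random.default_rng(seed)
    theta, r = 0.0, 0.0
    out = []
    for _ in range(n_iter):
        # momentum update: friction, gradient force, injected noise
        r = ((1 - friction) * r + step * grad_log_p(theta)
             + np.sqrt(2 * friction * step) * rng.normal())
        theta = theta + r
        if low_precision:
            theta, r = quantize(theta), quantize(r)
        out.append(theta)
    return np.array(out[n_iter // 2:])  # discard burn-in

# target: standard normal, so grad log p(theta) = -theta
full = sghmc_1d(lambda th: -th)
low = sghmc_1d(lambda th: -th, low_precision=True)
print(full.std(), low.std())  # both should be near 1
```

On this toy target the low-precision chain tracks the full-precision one closely; the paper's point is to characterize when (and how much) such quantization degrades sampling accuracy.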
no code implementations • 30 Jul 2023 • Rajdeep Haldar, Qifan Song
We then obtain an approximation result for such functions by a neural network.
1 code implementation • 23 Jun 2023 • Sehwan Kim, Qifan Song, Faming Liang
In the new formulation, the discriminator converges to a fixed point while the generator converges to a distribution at the Nash equilibrium.
no code implementations • 4 Jun 2023 • Hanbyul Lee, Rahul Mazumder, Qifan Song, Jean Honorio
Most existing work on provable guarantees for low-rank matrix completion algorithms relies on unrealistic assumptions, such as that matrix entries are sampled randomly or that the sampling pattern has a specific structure.
1 code implementation • 7 Mar 2023 • Wenjie Li, Haoze Li, Jean Honorio, Qifan Song
We introduce a Python open-source library for $\mathcal{X}$-armed bandit and online blackbox optimization named PyXAB.
no code implementations • 3 Feb 2023 • Hanbyul Lee, Qifan Song, Jean Honorio
We analyze a practical algorithm for sparse PCA on incomplete and noisy data under a general non-random sampling scheme.
1 code implementation • 30 May 2022 • Wenjie Li, Qifan Song, Jean Honorio, Guang Lin
This work establishes the first framework of federated $\mathcal{X}$-armed bandit, where different clients face heterogeneous local objective functions defined on the same domain and are required to collaboratively figure out the global optimum.
no code implementations • 30 May 2022 • Hanbyul Lee, Qifan Song, Jean Honorio
We study a practical algorithm for sparse principal component analysis (PCA) of incomplete and noisy data.
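A minimal sketch of the problem setting (not the paper's algorithm): sparse PCA on incomplete, noisy data, estimating the covariance from co-observed entries only and running power iteration with hard thresholding. The data model, masking rate, and sparsity level `s` are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, s = 500, 20, 4
v = np.zeros(d)
v[:s] = 1 / np.sqrt(s)                          # sparse leading component
X = rng.normal(size=(n, 1)) * 3 @ v[None, :] + 0.5 * rng.normal(size=(n, d))
mask = rng.random((n, d)) < 0.7                 # ~70% of entries observed
X_obs = np.where(mask, X, 0.0)

# covariance estimated entrywise from co-observed pairs only
counts = mask.T.astype(float) @ mask.astype(float)
S = (X_obs.T @ X_obs) / np.maximum(counts, 1)

# power iteration with hard thresholding: keep the s largest coordinates
u = np.ones(d)
for _ in range(50):
    u = S @ u
    keep = np.argsort(np.abs(u))[-s:]
    w = np.zeros(d)
    w[keep] = u[keep]
    u = w / np.linalg.norm(w)
print(sorted(np.argsort(np.abs(u))[-s:]))       # recovered support
```

With this signal-to-noise ratio the support of the leading component is recovered; the paper's contribution is analyzing such procedures under a general non-random sampling scheme rather than the uniform mask used here.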
no code implementations • 23 Feb 2022 • Yue Xing, Qifan Song, Guang Cheng
In some studies \citep[e.g.,][]{zhang2016understanding} of deep learning, it is observed that over-parametrized deep neural networks achieve a small testing error even when the training error is almost zero.
no code implementations • 14 Feb 2022 • Yue Xing, Qifan Song, Guang Cheng
Recently proposed self-supervised learning (SSL) approaches successfully demonstrate the great potential of supplementing learning algorithms with additional unlabeled data.
no code implementations • NeurIPS 2021 • Yue Xing, Qifan Song, Guang Cheng
In contrast, this paper studies the algorithmic stability of a generic adversarial training algorithm, which can further help to establish an upper bound for generalization error.
1 code implementation • 17 Jun 2021 • Wenjie Li, Chi-Hua Wang, Guang Cheng, Qifan Song
In this paper, we delineate the roles of resolution and statistical uncertainty in hierarchical bandit-based black-box optimization algorithms, guiding a more general analysis and more efficient algorithm design.
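The interplay between the two terms can be caricatured in a toy hierarchical bandit over $[0,1]$ (this is not the authors' algorithm): each cell's score combines an empirical mean, a statistical-uncertainty bonus, and a resolution (cell-width) term, and a cell is refined once its statistical uncertainty falls below its resolution. All constants and the refinement rule are illustrative assumptions.

```python
import numpy as np

def hierarchical_bandit(f, budget=2000, seed=0):
    """Toy hierarchical bandit on [0, 1]; cells are (low, high, sum_reward, n)."""
    rng = np.random.default_rng(seed)
    cells = [[0.0, 1.0, 0.0, 0]]
    for t in range(1, budget + 1):
        def score(c):
            lo, hi, srw, n = c
            if n == 0:
                return np.inf
            # empirical mean + statistical uncertainty + resolution term
            return srw / n + np.sqrt(2 * np.log(t) / n) + (hi - lo)
        i = max(range(len(cells)), key=lambda j: score(cells[j]))
        lo, hi, srw, n = cells[i]
        x = rng.uniform(lo, hi)
        r = f(x) + 0.05 * rng.normal()          # noisy evaluation
        cells[i] = [lo, hi, srw + r, n + 1]
        # refine when statistical uncertainty drops below the resolution
        if np.sqrt(2 * np.log(t) / (n + 1)) < (hi - lo):
            mid = (lo + hi) / 2
            cells[i] = [lo, mid, (srw + r) / 2, (n + 1) // 2]
            cells.append([mid, hi, (srw + r) / 2, (n + 1) // 2])
    best = max(cells, key=lambda c: c[2] / max(c[3], 1))
    return (best[0] + best[1]) / 2              # center of the best cell

# maximize a smooth function with its optimum at x = 0.7
x_hat = hierarchical_bandit(lambda x: 1 - 4 * (x - 0.7) ** 2)
print(x_hat)
```

The sketch makes the paper's delineation concrete: early on the resolution term dominates and drives partition refinement, while later the statistical-uncertainty bonus governs which cells receive additional pulls.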
1 code implementation • 25 Feb 2021 • Yan Sun, Qifan Song, Faming Liang
Deep learning has been the engine powering many successes of data science.
1 code implementation • NeurIPS 2020 • Jincheng Bai, Qifan Song, Guang Cheng
Sparse deep learning aims to address the challenge of huge storage consumption by deep neural networks, and to recover the sparse structure of target functions.
no code implementations • 24 Oct 2020 • Jincheng Bai, Qifan Song, Guang Cheng
We propose a variational Bayesian (VB) procedure for high-dimensional linear model inference with heavy-tailed shrinkage priors, such as the Student-t prior.
no code implementations • 20 Sep 2020 • Sehwan Kim, Qifan Song, Faming Liang
Bayesian deep learning offers a principled way to address many issues concerning safety of artificial intelligence (AI), such as model uncertainty, model interpretability, and prediction bias.
no code implementations • 15 Aug 2020 • Yue Xing, Qifan Song, Guang Cheng
Modern machine learning and deep learning models have been shown to be vulnerable when the testing data are slightly perturbed.
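This vulnerability can be demonstrated even for a linear model. The sketch below trains logistic regression on toy data and applies an FGSM-style perturbation (a step in the sign of the input gradient of the loss); the data distribution, step size `eps`, and training schedule are illustrative assumptions, not anything from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
# two linearly separable Gaussian classes
X = np.vstack([rng.normal(-2, 1, (100, 2)), rng.normal(2, 1, (100, 2))])
y = np.array([0] * 100 + [1] * 100)

# train logistic regression by gradient descent
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w + b)))
    w -= 0.1 * X.T @ (p - y) / len(y)
    b -= 0.1 * np.mean(p - y)

def accuracy(X_eval):
    return (((X_eval @ w + b) > 0).astype(int) == y).mean()

# FGSM-style attack: step in the sign of the input gradient of the loss
eps = 1.0
p = 1 / (1 + np.exp(-(X @ w + b)))
grad_x = (p - y)[:, None] * w[None, :]      # d(logistic loss)/dx
X_adv = X + eps * np.sign(grad_x)
print(accuracy(X), accuracy(X_adv))
```

Even this small, input-norm-bounded perturbation flips a noticeable fraction of predictions, which is the phenomenon whose distributional and generalization properties the papers above study.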
no code implementations • 13 Feb 2020 • Yue Xing, Qifan Song, Guang Cheng
We consider a data-corruption scenario in the classical $k$-nearest neighbors ($k$-NN) algorithm, in which the testing data are randomly perturbed.
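The scenario is easy to reproduce on synthetic data: train $k$-NN on clean data, then add random noise only to the test points and compare accuracies. The class distributions, noise level, and $k$ below are illustrative choices, not the paper's experimental setup.

```python
import numpy as np

def knn_predict(X_train, y_train, X_test, k=5):
    """Plain k-NN majority vote under Euclidean distance."""
    dists = np.linalg.norm(X_train[None, :, :] - X_test[:, None, :], axis=2)
    idx = np.argsort(dists, axis=1)[:, :k]      # indices of k nearest neighbors
    votes = y_train[idx]
    return (votes.mean(axis=1) > 0.5).astype(int)

rng = np.random.default_rng(0)
# two well-separated Gaussian classes
X_train = np.vstack([rng.normal(-1.0, 0.5, (200, 2)),
                     rng.normal(+1.0, 0.5, (200, 2))])
y_train = np.array([0] * 200 + [1] * 200)
X_test = np.vstack([rng.normal(-1.0, 0.5, (100, 2)),
                    rng.normal(+1.0, 0.5, (100, 2))])
y_test = np.array([0] * 100 + [1] * 100)

acc_clean = (knn_predict(X_train, y_train, X_test) == y_test).mean()
# corruption: random noise added to the *testing* data only
X_noisy = X_test + rng.normal(0.0, 1.5, size=X_test.shape)
acc_noisy = (knn_predict(X_train, y_train, X_noisy) == y_test).mean()
print(acc_clean, acc_noisy)
```

Accuracy degrades under test-time perturbation even though the training data and the fitted classifier are unchanged — the mismatch between clean training and corrupted testing distributions is exactly what the paper analyzes.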
1 code implementation • 7 Feb 2020 • Qifan Song, Yan Sun, Mao Ye, Faming Liang
Stochastic gradient Markov chain Monte Carlo (MCMC) algorithms have received much attention in Bayesian computing for big data problems, but they are only applicable to a small class of problems for which the parameter space has a fixed dimension and the log-posterior density is differentiable with respect to the parameters.
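For reference, the "small class of problems" where standard stochastic gradient MCMC applies directly — a fixed-dimension, differentiable log-posterior — can be sketched with stochastic gradient Langevin dynamics (SGLD). The model (Gaussian mean with known variance and an N(0, 10) prior), step size, and batch size are illustrative assumptions.

```python
import numpy as np

def sgld_gaussian_mean(data, n_iter=5000, batch_size=32, step=1e-3, seed=0):
    """SGLD sketch for the posterior of a Gaussian mean (known variance 1)
    under an N(0, 10) prior."""
    rng = np.random.default_rng(seed)
    n = len(data)
    theta = 0.0
    samples = []
    for _ in range(n_iter):
        batch = rng.choice(data, size=batch_size)
        # minibatch estimate of the log-posterior gradient, rescaled by n/batch
        grad = -theta / 10.0 + (n / batch_size) * np.sum(batch - theta)
        theta += 0.5 * step * grad + np.sqrt(step) * rng.normal()
        samples.append(theta)
    return np.array(samples[n_iter // 2:])      # discard burn-in

data = np.random.default_rng(1).normal(2.0, 1.0, size=500)
post = sgld_gaussian_mean(data)
print(post.mean())  # close to the sample mean of `data`
```

The paper's contribution is to move beyond this setting, to problems where the parameter dimension is not fixed or the log-posterior is not differentiable, which the plain update above cannot handle.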
no code implementations • 25 Sep 2019 • Yue Xing, Qifan Song, Guang Cheng
Over-parameterized models attract much attention in the era of data science and deep learning.
no code implementations • 5 Oct 2018 • Yue Xing, Qifan Song, Guang Cheng
In the era of deep learning, understanding the over-fitting phenomenon becomes increasingly important.