no code implementations • 2 Sep 2020 • Lu Yu, Shichao Pei, Lizhong Ding, Jun Zhou, Longfei Li, Chuxu Zhang, Xiangliang Zhang
This paper studies learning node representations with graph neural networks (GNNs) in the unsupervised setting.
no code implementations • 9 Mar 2020 • Yong Liu, Lizhong Ding, Weiping Wang
In this paper, we study the statistical properties of kernel $k$-means and obtain a nearly optimal excess clustering risk bound, substantially improving the state-of-the-art bounds in the existing clustering risk analyses.
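The paper analyzes kernel $k$-means, which runs Lloyd's algorithm in the feature space induced by a kernel; the distance from a point to a cluster mean can be computed from the kernel matrix alone. The following is a minimal numpy sketch of the algorithm itself (not of the paper's risk analysis); the farthest-first seeding and the toy RBF-kernel data are illustrative choices, not from the paper.

```python
import numpy as np

def kernel_kmeans(K, n_clusters, n_iters=100):
    """Lloyd-style kernel k-means on a precomputed n x n kernel matrix K."""
    n = K.shape[0]
    diag = np.diag(K)
    # deterministic farthest-first seeding in feature space
    seeds = [0]
    while len(seeds) < n_clusters:
        d = np.min([diag - 2 * K[:, s] + K[s, s] for s in seeds], axis=0)
        seeds.append(int(np.argmax(d)))
    labels = np.argmin([diag - 2 * K[:, s] + K[s, s] for s in seeds], axis=0)
    for _ in range(n_iters):
        dist = np.full((n, n_clusters), np.inf)
        for c in range(n_clusters):
            mask = labels == c
            nc = mask.sum()
            if nc == 0:
                continue
            # ||phi(x_i) - mu_c||^2 = K_ii - (2/nc) sum_{j in c} K_ij
            #                        + (1/nc^2) sum_{j,l in c} K_jl
            dist[:, c] = (diag
                          - 2.0 * K[:, mask].sum(axis=1) / nc
                          + K[np.ix_(mask, mask)].sum() / nc ** 2)
        new_labels = dist.argmin(axis=1)
        if np.array_equal(new_labels, labels):
            break
        labels = new_labels
    return labels

# toy usage: two well-separated Gaussian blobs under an RBF kernel
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.25, (20, 2)),
               rng.normal(4.0, 0.25, (20, 2))])
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-sq)
labels = kernel_kmeans(K, 2)
```

Everything happens through inner products, so the same code handles any positive-definite kernel; only the construction of `K` changes.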
no code implementations • 9 Mar 2020 • Yong Liu, Lizhong Ding, Weiping Wang
However, the studies on learning theory for general loss functions and hypothesis spaces remain limited.
no code implementations • NeurIPS 2019 • Lizhong Ding, Mengyang Yu, Li Liu, Fan Zhu, Yong Liu, Yu Li, Ling Shao
DEAN can be interpreted as a GOF game between two generative networks, where one explicit generative network learns an energy-based distribution that fits the real data, and the other implicit generative network is trained by minimizing a GOF test statistic between the energy-based distribution and the generated data, such that the underlying distribution of the generated data is close to the energy-based distribution.
no code implementations • 27 May 2019 • Yazhou Yao, Zeren Sun, Fumin Shen, Li Liu, Li-Min Wang, Fan Zhu, Lizhong Ding, Gangshan Wu, Ling Shao
To address this issue, we present an adaptive multi-model framework that resolves polysemy by visual disambiguation.
1 code implementation • 28 Feb 2019 • Yu Li, Chao Huang, Lizhong Ding, Zhongxiao Li, Yijie Pan, Xin Gao
Deep learning, which is particularly powerful at handling big data, has achieved great success in various fields, including bioinformatics.
no code implementations • 13 Feb 2019 • Yong Liu, Jian Li, Guangjun Wu, Lizhong Ding, Weiping Wang
In this paper, we provide a method to approximate the CV for manifold regularization based on a notion from robust statistics called the Bouligand influence function (BIF).
no code implementations • NeurIPS 2018 • Jian Li, Yong Liu, Rong Yin, Hua Zhang, Lizhong Ding, Weiping Wang
In this paper, we study the generalization performance of multi-class classification and obtain a sharper data-dependent generalization error bound with a fast convergence rate, substantially improving the state-of-the-art bounds in the existing data-dependent generalization analysis.
1 code implementation • 16 Aug 2018 • Yu Li, Lizhong Ding, Xin Gao
We demonstrate, both theoretically and empirically, that the last weight layer of a neural network converges to a linear SVM trained on the output of the last hidden layer, for both the binary case and the multi-class case with the commonly used cross-entropy loss.
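The claim is that a network's last linear layer, trained with cross-entropy on the last hidden layer's outputs, behaves like a linear SVM on those features. A small self-contained illustration (an assumption-laden toy, not the paper's proof): on linearly separable "features", gradient descent on cross-entropy and subgradient descent on a regularized hinge loss (a linear SVM) recover essentially the same separating direction.

```python
import numpy as np

rng = np.random.default_rng(0)

# two well-separated blobs standing in for "last hidden layer" features
X = np.vstack([rng.normal(1.5, 0.3, (30, 2)),
               rng.normal(-1.5, 0.3, (30, 2))])
y = np.array([1.0] * 30 + [-1.0] * 30)      # labels in {-1, +1}

def logistic_train(X, y, lr=0.5, steps=2000):
    """Gradient descent on cross-entropy, i.e. what the last layer is trained with."""
    w = np.zeros(2)
    for _ in range(steps):
        s = 1.0 / (1.0 + np.exp(-y * (X @ w)))   # P(correct label)
        w += lr * ((1 - s) * y) @ X / len(y)
    return w

def hinge_train(X, y, lam=0.01, lr=0.1, steps=2000):
    """Subgradient descent on L2-regularized hinge loss: a linear SVM."""
    w = np.zeros(2)
    for _ in range(steps):
        active = y * (X @ w) < 1                  # margin-violating points
        g = lam * w - (y[active, None] * X[active]).sum(0) / len(y)
        w -= lr * g
    return w

w_ce = logistic_train(X, y)
w_svm = hinge_train(X, y)
cos = w_ce @ w_svm / (np.linalg.norm(w_ce) * np.linalg.norm(w_svm))
acc_ce = float((np.sign(X @ w_ce) == y).mean())
acc_svm = float((np.sign(X @ w_svm) == y).mean())
```

Here `cos` comes out close to 1: the two losses yield nearly parallel weight vectors on these features, which is the flavor of the correspondence the paper establishes formally.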
1 code implementation • 8 Jun 2018 • Yu Li, Zhongxiao Li, Lizhong Ding, Yijie Pan, Chao Huang, Yuhui Hu, Wei Chen, Xin Gao
A plain, well-trained deep learning model typically cannot learn new knowledge without forgetting what it previously learned, a phenomenon known as catastrophic forgetting.
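Catastrophic forgetting is easy to reproduce in a deliberately extreme toy case (my construction, not the paper's): fit a logistic-regression "model" on task A, then continue training it on a task B whose labels follow the opposite rule; accuracy on task A collapses because the same weights are overwritten.

```python
import numpy as np

rng = np.random.default_rng(0)

def train(w, X, y, lr=0.5, steps=200):
    """Full-batch gradient descent on sigmoid cross-entropy."""
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

def accuracy(w, X, y):
    return float((((X @ w) > 0) == (y == 1)).mean())

X = rng.normal(size=(200, 2))
y_a = (X[:, 0] > 0).astype(float)   # task A: sign of feature 0
y_b = 1.0 - y_a                     # task B: the opposite decision rule

w = np.zeros(2)
w = train(w, X, y_a)
acc_a_before = accuracy(w, X, y_a)  # high: the model has learned task A

w = train(w, X, y_b)                # continue training on task B only
acc_a_after = accuracy(w, X, y_a)   # collapses: task A has been forgotten
acc_b = accuracy(w, X, y_b)
```

Sequential training on B receives no signal to preserve A's solution, so the A-relevant weights are simply driven to new values; continual-learning methods exist precisely to counteract this.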