no code implementations • ICML 2020 • Zonghan Yang, Yang Liu, Chenglong Bao, Zuoqiang Shi
Although ordinary differential equations (ODEs) provide insights for designing network architectures, their relationship with non-residual convolutional neural networks (CNNs) remains unclear.
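The link alluded to here is usually made through numerical discretization: a residual block is one forward-Euler step of an ODE. A standard identification, given as background rather than the paper's specific construction:

```latex
% A residual block as one forward-Euler step of an ODE:
\frac{dx(t)}{dt} = f\big(x(t), \theta(t)\big)
\qquad\Longrightarrow\qquad
x_{n+1} = x_n + h\, f(x_n, \theta_n), \quad h = 1 .
```

Non-residual CNNs lack the identity shortcut, so they do not fit this Euler-step picture directly, which is the gap the paper addresses.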
1 code implementation • 26 Mar 2024 • Dihan Zheng, Yihang Zou, Xiaowen Zhang, Chenglong Bao
We employ our method to generate paired training samples for real-world image denoising and super-resolution tasks.
no code implementations • 23 Mar 2024 • Tangjun Wang, Chenglong Bao, Zuoqiang Shi
A neural network can be viewed as a map from a simple base model to a complicated function.
no code implementations • 7 Dec 2023 • Zhijun Zeng, Pipi Hu, Chenglong Bao, Yi Zhu, Zuoqiang Shi
In this paper, we study the method to reconstruct dynamical systems from data without time labels.
no code implementations • 26 Sep 2023 • Hui Zhang, Dihan Zheng, Qiurong Wu, Nieng Yan, Zuoqiang Shi, Mingxu Hu, Chenglong Bao
The single-particle cryo-EM field faces the persistent challenge of preferred orientation, for which general computational solutions are lacking.
no code implementations • 23 Jul 2023 • Tangjun Wang, Wenqi Tao, Chenglong Bao, Zuoqiang Shi
Based on the convection-diffusion equation, we design a new training method for ResNets.
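For reference, the generic convection-diffusion equation behind this viewpoint, with a transport (convection) term and a smoothing (diffusion) term; the paper's exact formulation may differ:

```latex
% Generic convection-diffusion equation: u is transported by a velocity
% field v (convection) and smoothed with coefficient \sigma (diffusion).
\frac{\partial u}{\partial t} + v \cdot \nabla u = \sigma \, \Delta u .
```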
no code implementations • 6 Sep 2022 • Huaming Ling, Chenglong Bao, Xin Liang, Zuoqiang Shi
However, existing methods adopt a static affinity matrix to learn the low-dimensional representations of data points and do not optimize the affinity matrix during the learning process.
no code implementations • 30 Aug 2022 • Jintao Xu, Chenglong Bao, Wenxun Xing
Training deep neural networks (DNNs) is an important and challenging optimization problem in machine learning due to its non-convexity and non-separable structure.
no code implementations • 16 May 2022 • Wei Wan, Yuejin Zhang, Chenglong Bao, Bin Dong, Zuoqiang Shi
In this work, we propose a deep learning based method to solve the dynamic optimal transport in high dimensional space.
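For context, the dynamic (Benamou-Brenier) formulation that such solvers typically parameterize with neural networks; this is the standard statement, not necessarily the paper's exact setup:

```latex
% Benamou--Brenier formulation of dynamic optimal transport: find a density
% \rho and velocity field v moving \rho_0 to \rho_1 at minimal kinetic
% energy, subject to mass conservation.
\min_{\rho, v} \int_0^1 \!\!\int_{\mathbb{R}^d} \rho(x,t)\, \|v(x,t)\|^2 \, dx \, dt
\quad \text{s.t.} \quad
\partial_t \rho + \nabla \cdot (\rho v) = 0, \;\;
\rho(\cdot,0) = \rho_0, \;\; \rho(\cdot,1) = \rho_1 .
```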
1 code implementation • 21 Apr 2022 • Dihan Zheng, Xiaowen Zhang, Kaisheng Ma, Chenglong Bao
Current approaches aim at generating synthesized training data from unpaired samples by exploring the relationship between the corrupted and clean data.
1 code implementation • 14 Apr 2022 • Dihan Zheng, Chenglong Bao, Zuoqiang Shi, Haibin Ling, Kaisheng Ma
The Chan-Vese (CV) model is a classic region-based method in image segmentation.
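For reference, the classic CV energy (with the optional area term omitted), minimized over the contour C and the region averages c_1, c_2:

```latex
% Chan--Vese energy: fit the image u_0 by two constants, c_1 inside the
% contour C and c_2 outside, with a length penalty on C.
E(c_1, c_2, C) = \mu \,\mathrm{Length}(C)
 + \lambda_1 \!\int_{\mathrm{inside}(C)} |u_0(x) - c_1|^2 \, dx
 + \lambda_2 \!\int_{\mathrm{outside}(C)} |u_0(x) - c_2|^2 \, dx .
```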
1 code implementation • NeurIPS 2021 • Liyuan Wang, Mingtian Zhang, Zhongfan Jia, Qian Li, Chenglong Bao, Kaisheng Ma, Jun Zhu, Yi Zhong
Without access to the old training samples, knowledge transfer from old tasks to each new task is difficult to determine, as it might be either positive or negative.
no code implementations • NeurIPS 2021 • Fuchao Wei, Chenglong Bao, Yang Liu
Anderson mixing (AM) is an acceleration method for fixed-point iterations.
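For readers unfamiliar with AM, a minimal windowed type-II sketch in NumPy; this is the textbook form, the paper's variants add further machinery, and the function name and defaults are illustrative:

```python
import numpy as np

def anderson_mixing(g, x0, m=5, max_iter=100, tol=1e-10):
    """Solve the fixed-point problem x = g(x) with windowed Anderson mixing."""
    x_hist, r_hist = [x0], [g(x0) - x0]
    x = x0 + r_hist[0]                       # first step: plain Picard update
    for _ in range(max_iter):
        r = g(x) - x                         # current residual
        if np.linalg.norm(r) < tol:
            break
        x_hist.append(x); r_hist.append(r)
        x_hist, r_hist = x_hist[-(m + 1):], r_hist[-(m + 1):]   # keep depth m
        dX = np.stack([x_hist[i + 1] - x_hist[i] for i in range(len(x_hist) - 1)], 1)
        dR = np.stack([r_hist[i + 1] - r_hist[i] for i in range(len(r_hist) - 1)], 1)
        gamma, *_ = np.linalg.lstsq(dR, r, rcond=None)  # least-squares mixing weights
        x = x + r - (dX + dR) @ gamma        # type-II Anderson update (beta = 1)
    return x
```

For example, `anderson_mixing(np.cos, np.array([0.0]))` converges to the fixed point x ≈ 0.739 far faster than the plain iteration.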
no code implementations • 29 Sep 2021 • Dihan Zheng, Xiaowen Zhang, Kaisheng Ma, Chenglong Bao
Collecting paired training data is difficult in practice, but unpaired samples are broadly available.
no code implementations • ICLR 2022 • Fuchao Wei, Chenglong Bao, Yang Liu
We prove that the basic version of ST-AM is equivalent to the full-memory AM in strongly convex quadratic optimization, and with minor changes it has local linear convergence for solving general nonlinear fixed-point problems.
1 code implementation • 7 May 2021 • Tangjun Wang, Zehao Dou, Chenglong Bao, Zuoqiang Shi
In many learning tasks with limited training samples, the diffusion connects the labeled and unlabeled data points and is a critical component for achieving high classification accuracy.
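To illustrate the diffusion mechanism referred to here, a standard graph label-propagation sketch (Zhou et al.-style); this is illustrative background, not the paper's algorithm:

```python
import numpy as np

def label_propagation(W, Y, alpha=0.99, n_iter=50):
    """Diffuse labels over a graph for semi-supervised classification.

    W: (n, n) symmetric affinity matrix; Y: (n, c) one-hot labels with
    zero rows for unlabeled points.
    """
    d = W.sum(axis=1)
    S = W / np.sqrt(np.outer(d, d))            # symmetrically normalized affinity
    F = Y.astype(float).copy()
    for _ in range(n_iter):
        F = alpha * (S @ F) + (1 - alpha) * Y  # diffuse labels, keep anchors
    return F.argmax(axis=1)
```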
1 code implementation • ICLR 2021 • Dihan Zheng, Sia Huat Tan, Xiaowen Zhang, Zuoqiang Shi, Kaisheng Ma, Chenglong Bao
In the real-world case, the noise distribution is so complex that the simplified additive white Gaussian noise (AWGN) assumption rarely holds, which significantly degrades the performance of Gaussian denoisers.
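The assumption in question is the observation model

```latex
% Simplified AWGN model that rarely holds for real sensors:
y = x + n, \qquad n \sim \mathcal{N}(0, \sigma^2 I),
```

whereas real sensor noise is signal-dependent and spatially correlated.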
no code implementations • 1 Jan 2021 • Zonghan Yang, Yang Liu, Chenglong Bao, Zuoqiang Shi
Deep neural networks are observed to be fragile against adversarial attacks, which dramatically limits their practical applicability.
1 code implementation • NeurIPS 2020 • Linfeng Zhang, Yukang Shi, Zuoqiang Shi, Kaisheng Ma, Chenglong Bao
Moreover, an orthogonal loss is applied to the feature resizing layer in TOFD to improve the performance of knowledge distillation.
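A generic orthogonality penalty of this kind, sketched in PyTorch; the exact form applied to TOFD's feature resizing layer may differ, and `orthogonal_loss` is an illustrative name:

```python
import torch

def orthogonal_loss(weight: torch.Tensor) -> torch.Tensor:
    """Penalize deviation of a weight matrix from row-orthonormality.

    Implements the common regularizer ||W W^T - I||_F^2.
    """
    W = weight.flatten(1)                      # also handles conv kernels
    eye = torch.eye(W.size(0), device=W.device)
    return ((W @ W.t() - eye) ** 2).sum()
```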
1 code implementation • 10 Jun 2020 • Zonghan Yang, Yang Liu, Chenglong Bao, Zuoqiang Shi
Although ordinary differential equations (ODEs) provide insights for designing network architectures, their relationship with non-residual convolutional neural networks (CNNs) remains unclear.
1 code implementation • CVPR 2020 • Shaokai Ye, Kailu Wu, Mu Zhou, Yunfei Yang, Sia Huat Tan, Kaidi Xu, Jiebo Song, Chenglong Bao, Kaisheng Ma
Existing domain adaptation methods aim at learning features that can be generalized among domains.
Ranked #3 on Domain Adaptation on USPS-to-MNIST
no code implementations • 27 Nov 2019 • Zhongfan Jia, Chenglong Bao, Kaisheng Ma
To the best of our knowledge, there is no study on the interpretation of modern CNNs from the perspective of the frequency proportion of filters.
no code implementations • 28 May 2019 • Shaokai Ye, Sia Huat Tan, Kaidi Xu, Yanzhi Wang, Chenglong Bao, Kaisheng Ma
In contrast, current state-of-the-art deep learning approaches depend heavily on the variety of training samples and the capacity of the network.
1 code implementation • NeurIPS 2019 • Linfeng Zhang, Zhanhong Tan, Jiebo Song, Jingwei Chen, Chenglong Bao, Kaisheng Ma
Remarkable achievements have been attained by deep neural networks in various applications.
1 code implementation • ICCV 2019 • Linfeng Zhang, Jiebo Song, Anni Gao, Jingwei Chen, Chenglong Bao, Kaisheng Ma
Unlike traditional knowledge distillation, a knowledge-transfer methodology between networks that forces student networks to approximate the softmax-layer outputs of pre-trained teacher networks, the proposed self-distillation framework distills knowledge within the network itself.
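As an illustration of the mechanism, a minimal sketch of the per-exit training signal such a framework uses; the actual method also distills intermediate features, and the temperature and weighting here are illustrative assumptions:

```python
import torch.nn.functional as F

def self_distillation_loss(shallow_logits, deep_logits, labels, T=3.0, alpha=0.5):
    """Train a shallow exit with both ground truth and the deepest
    classifier's softened outputs (the teacher signal is detached)."""
    ce = F.cross_entropy(shallow_logits, labels)
    kd = F.kl_div(F.log_softmax(shallow_logits / T, dim=1),
                  F.softmax(deep_logits.detach() / T, dim=1),
                  reduction="batchmean") * T * T
    return (1 - alpha) * ce + alpha * kd
```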
no code implementations • 23 Apr 2019 • Zihao Wang, Datong Zhou, Yong Zhang, Hao Wu, Chenglong Bao
Measuring the distance between documents is a fundamental problem in natural language processing.
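One standard way to formalize document distance, in the optimal-transport style this line of work builds on (a Word Mover's Distance-type formulation, not necessarily the paper's exact one):

```latex
% Document distance as optimal transport between word distributions:
% p, q are the word weights of the two documents and c_{ij} a ground cost
% between word embeddings.
D(p, q) = \min_{T \ge 0} \sum_{i,j} T_{ij} \, c_{ij}
\quad \text{s.t.} \quad \sum_j T_{ij} = p_i, \;\; \sum_i T_{ij} = q_j .
```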
no code implementations • 31 May 2018 • Chenglong Bao, Jae Kyu Choi, Bin Dong
Quantitative susceptibility mapping (QSM) aims to visualize the three-dimensional susceptibility distribution by solving the field-to-source inverse problem using the phase data of the magnetic resonance signal.
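For context, the standard forward model: the measured field perturbation is the susceptibility convolved with the unit dipole kernel, whose Fourier transform vanishes on a cone, which is what makes the inversion ill-posed:

```latex
% QSM field-to-source model: field \delta, susceptibility \chi,
% dipole kernel d with Fourier transform \hat{d}.
\delta = d * \chi, \qquad
\hat{d}(k) = \frac{1}{3} - \frac{k_z^2}{\|k\|^2} .
```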
no code implementations • CVPR 2016 • Yuhui Quan, Chenglong Bao, Hui Ji
Most existing dictionary learning algorithms consider a linear sparse model, which often cannot effectively characterize the nonlinear properties present in many types of visual data, e.g., dynamic texture (DT).
no code implementations • CVPR 2014 • Chenglong Bao, Hui Ji, Yuhui Quan, Zuowei Shen
Sparse coding and dictionary learning have found applications in many vision tasks and are usually formulated as non-convex optimization problems.
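The prototypical formulation makes the non-convexity explicit: the objective is convex in the dictionary D and in the codes X separately, but not jointly:

```latex
% Prototypical dictionary-learning problem over data Y, dictionary D
% with columns d_i, and sparse codes X.
\min_{D, X} \; \tfrac{1}{2} \| Y - D X \|_F^2 + \lambda \| X \|_1
\quad \text{s.t.} \quad \| d_i \|_2 \le 1 \;\; \forall i .
```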