no code implementations • 28 Feb 2024 • Benjamin Walker, Andrew D. McLeod, Tiexin Qin, Yichuan Cheng, Haoliang Li, Terry Lyons
The core component of Log-NCDEs is the Log-ODE method, a tool from the study of rough paths for approximating a CDE's solution.
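As a rough sketch of that idea (notation assumed here, not taken from the entry): for a CDE $\mathrm{d}y_t = f(y_t)\,\mathrm{d}x_t$, the Log-ODE method replaces the driving path on each interval by its truncated log-signature and solves an ODE instead:

```latex
% Sketch of one Log-ODE step over [t_i, t_{i+1}] (notation assumed):
% \log S^N is the depth-N truncated log-signature of the path x, and
% \bar{f} extends the vector field f via iterated Lie brackets.
\frac{\mathrm{d}z}{\mathrm{d}s}
  = \bar{f}(z)\,\frac{\log S^N\!\left(x_{[t_i,\,t_{i+1}]}\right)}{t_{i+1}-t_i},
\qquad z(t_i) = y_{t_i},
\qquad y_{t_{i+1}} \approx z(t_{i+1}).
```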
no code implementations • 22 Feb 2023 • Tiexin Qin, Benjamin Walker, Terry Lyons, Hong Yan, Haoliang Li
Empirical evaluation on a range of dynamic graph representation learning tasks demonstrates the superiority of our proposed approach over the baselines.
1 code implementation • 16 May 2022 • Tiexin Qin, Shiqi Wang, Haoliang Li
Domain generalization aims to improve the generalization capability of machine learning systems to out-of-distribution (OOD) data.
1 code implementation • 10 Sep 2021 • Wenbin Li, Ziyi Wang, Xuesong Yang, Chuanqi Dong, Pinzhuo Tian, Tiexin Qin, Jing Huo, Yinghuan Shi, Lei Wang, Yang Gao, Jiebo Luo
Furthermore, based on LibFewShot, we provide comprehensive evaluations on multiple benchmarks with various backbone architectures to evaluate common pitfalls and effects of different training tricks.
1 code implementation • 13 Apr 2020 • Tiexin Qin, Wenbin Li, Yinghuan Shi, Yang Gao
Importantly, we highlight the value of distribution diversity in augmentation-based pretext few-shot tasks, which effectively alleviates overfitting and helps the few-shot model learn more robust feature representations (see the sketch below).
Tasks: Data Augmentation, Unsupervised Few-Shot Image Classification, +1
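A minimal sketch of such a task, with hypothetical helper names and augmentation choices not taken from the paper: each sampled unlabeled image acts as its own pseudo-class, and support and query examples are generated from differently distributed augmentations to increase diversity.

```python
# Sketch (hypothetical, assumes PIL images) of an augmentation-based
# pretext few-shot task built from unlabeled data.
import random
from torchvision import transforms

weak_aug = transforms.Compose([          # support-set augmentation
    transforms.RandomResizedCrop(84),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
])
strong_aug = transforms.Compose([        # query-set augmentation (shifted distribution)
    transforms.RandomResizedCrop(84),
    transforms.ColorJitter(0.4, 0.4, 0.4, 0.1),
    transforms.RandomGrayscale(p=0.2),
    transforms.ToTensor(),
])

def make_pretext_task(unlabeled_images, n_way=5, n_shot=1, n_query=5):
    """Sample n_way images; treat each as a pseudo-class and generate
    support/query examples via differently distributed augmentations."""
    classes = random.sample(unlabeled_images, n_way)
    support, query = [], []
    for label, img in enumerate(classes):
        support += [(weak_aug(img), label) for _ in range(n_shot)]
        query += [(strong_aug(img), label) for _ in range(n_query)]
    return support, query
```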
no code implementations • 22 Feb 2020 • Tiexin Qin, Ziyuan Wang, Kelei He, Yinghuan Shi, Yang Gao, Dinggang Shen
Conventional data augmentation, realized by performing simple pre-processing operations (e.g., rotation, crop, etc.), has been validated to enhance performance in medical image segmentation.
1 code implementation • 18 Oct 2019 • Yinghuan Shi, Tiexin Qin, Yong Liu, Jiwen Lu, Yang Gao, Dinggang Shen
By introducing a unified optimization goal, DeepAugNet combines data augmentation and deep model training in an end-to-end manner, realized by simultaneously training a hybrid architecture of a dueling deep Q-learning algorithm and a surrogate deep model (sketched below).
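A minimal sketch of that joint loop, assuming hypothetical interfaces (`surrogate.train_on`, `val_fn`, `augment_ops`) since the paper's actual APIs are not shown here: a dueling Q-network picks an augmentation action, the surrogate model trains on the augmented batch, and the validation improvement is fed back as the RL reward.

```python
# Rough sketch (all names hypothetical) of joint augmentation-policy
# learning and surrogate-model training.
import torch
import torch.nn as nn

class DuelingQNet(nn.Module):
    """Dueling architecture: Q(s, a) = V(s) + A(s, a) - mean_a A(s, a)."""
    def __init__(self, state_dim, n_actions):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(state_dim, 128), nn.ReLU())
        self.value = nn.Linear(128, 1)
        self.advantage = nn.Linear(128, n_actions)

    def forward(self, state):
        h = self.trunk(state)
        v, a = self.value(h), self.advantage(h)
        return v + a - a.mean(dim=-1, keepdim=True)

def joint_training_step(qnet, surrogate, batch, val_fn, augment_ops,
                        state, eps=0.1):
    # Epsilon-greedy choice of an augmentation operation.
    if torch.rand(1).item() < eps:
        action = torch.randint(len(augment_ops), (1,)).item()
    else:
        action = qnet(state).argmax(dim=-1).item()
    augmented = augment_ops[action](batch)   # apply chosen augmentation
    before = val_fn(surrogate)               # validation score before the step
    surrogate.train_on(augmented)            # one gradient step (hypothetical API)
    reward = val_fn(surrogate) - before      # improvement drives the reward
    return action, reward
```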