no code implementations • 25 Aug 2020 • Chuan-Xian Ren, PengFei Ge, Peiyi Yang, Shuicheng Yan
Previous UDA methods assume that the source and target domains share an identical label space, which is unrealistic in practice, since the label information of the target domain is unknown.
no code implementations • 24 Aug 2020 • Chuan-Xian Ren, PengFei Ge, Dao-Qing Dai, Hong Yan
KLN can simultaneously learn a more expressive kernel and the label prediction distribution; thus, it can improve classification performance in both supervised and semi-supervised learning scenarios.
no code implementations • 23 Aug 2020 • Pengfei Ge, Chuan-Xian Ren, Jiashi Feng, Shuicheng Yan
By performing variational inference on the objective function of Dual-AAE, we derive a new reconstruction loss that can be optimized by training a pair of auto-encoders.
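The derivation itself is in the paper; as a rough sketch of the "pair of auto-encoders trained on a shared reconstruction loss" idea only (not the authors' Dual-AAE, and with invented sizes and learning rate), two tiny linear auto-encoders could be trained jointly like this:

```python
import numpy as np

# Hypothetical illustration: two linear auto-encoders optimized jointly
# under a mean-squared reconstruction loss. This is NOT the Dual-AAE
# objective from the paper; dimensions and hyperparameters are made up.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))  # toy data: 200 samples, 8 features
dim_z = 3                      # assumed latent dimension

def init_ae():
    # One linear auto-encoder: encoder weights We, decoder weights Wd.
    return {"We": rng.normal(scale=0.1, size=(8, dim_z)),
            "Wd": rng.normal(scale=0.1, size=(dim_z, 8))}

def recon_loss(ae, X):
    Xh = (X @ ae["We"]) @ ae["Wd"]          # encode, then decode
    return np.mean((X - Xh) ** 2)

def grad_step(ae, X, lr=0.05):
    # Manual gradients of the mean-squared reconstruction loss.
    Z = X @ ae["We"]
    E = (Z @ ae["Wd"] - X) * (2.0 / X.size)  # dLoss/dXhat
    gWd = Z.T @ E
    gWe = X.T @ (E @ ae["Wd"].T)
    ae["Wd"] -= lr * gWd
    ae["We"] -= lr * gWe

ae1, ae2 = init_ae(), init_ae()
before = recon_loss(ae1, X) + recon_loss(ae2, X)
for _ in range(300):           # train the pair jointly
    grad_step(ae1, X)
    grad_step(ae2, X)
after = recon_loss(ae1, X) + recon_loss(ae2, X)
print(after < before)          # joint reconstruction loss decreases
```

The point of the sketch is only that a single reconstruction objective can drive the training of two auto-encoders at once; the adversarial and variational components of Dual-AAE are omitted.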
no code implementations • 14 Jun 2020 • Pengfei Ge, Chuan-Xian Ren, Dao-Qing Dai, Hong Yan
In this paper, we consider a more general application scenario where the label distributions of the source and target domains are not the same.
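This setting (often called label shift) can be made concrete with class frequencies. A toy check, with invented class counts, shows two domains sharing a label set but differing in their label marginals:

```python
import numpy as np

# Toy illustration of differing label distributions: the class marginals
# P(y) differ between source and target even though the label set {0,1,2}
# is shared. The counts below are invented for illustration.
src_labels = np.array([0] * 50 + [1] * 30 + [2] * 20)  # source: mostly class 0
tgt_labels = np.array([0] * 20 + [1] * 30 + [2] * 50)  # target: mostly class 2

def label_dist(y, n_classes=3):
    # Empirical class-marginal distribution.
    return np.bincount(y, minlength=n_classes) / len(y)

p_src = label_dist(src_labels)
p_tgt = label_dist(tgt_labels)
print(p_src)  # [0.5 0.3 0.2]
print(p_tgt)  # [0.2 0.3 0.5]

# Total-variation distance between the two label marginals:
tv = 0.5 * np.abs(p_src - p_tgt).sum()
print(tv)     # 0.3
```

A method assuming identical label distributions would implicitly treat this total-variation gap as zero, which is exactly what the more general scenario relaxes.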
no code implementations • 20 Feb 2020 • You-Wei Luo, Chuan-Xian Ren, PengFei Ge, Ke-Kun Huang, Yu-Feng Yu
Second, the batch-wise training scheme used in deep learning limits the characterization of the global structure of the data.