1 code implementation • 5 May 2024 • Zhixiang Chi, Li Gu, Tao Zhong, Huan Liu, Yuanhao Yu, Konstantinos N Plataniotis, Yang Wang
In this work, we propose an approach built on top of the pre-computed features of the foundation model.
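Working on pre-computed, frozen features typically means fitting only a lightweight head on top of them. A minimal sketch of that idea (my own illustration, not the paper's method; shapes and the ridge-regression head are assumptions):

```python
import numpy as np

# Illustrative sketch: fit a lightweight linear probe on frozen,
# pre-extracted foundation-model features (synthetic data stands in
# for the real features; all names and shapes are assumptions).
rng = np.random.default_rng(0)
features = rng.normal(size=(100, 16))    # pre-computed features, never updated
labels = rng.integers(0, 3, size=100)    # 3-way classification targets

# Closed-form ridge-regression classifier head on one-hot targets
Y = np.eye(3)[labels]
W = np.linalg.solve(features.T @ features + 1e-2 * np.eye(16), features.T @ Y)

preds = (features @ W).argmax(axis=1)
accuracy = (preds == labels).mean()
```

Only `W` is learned; the feature extractor itself stays fixed, which is what makes this setting cheap to adapt.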
1 code implementation • 8 Oct 2022 • Tao Zhong, Zhixiang Chi, Li Gu, Yang Wang, Yuanhao Yu, Jin Tang
Most existing methods train a single model on multiple source domains, and the same trained model is then applied to all unseen target domains.
Ranked #22 on Domain Generalization on DomainNet
4 code implementations • 1 Oct 2022 • Li Gu, Zhixiang Chi, Huan Liu, Yuanhao Yu, Yang Wang
In this work, we present the winning solution for ORBIT Few-Shot Video Object Recognition Challenge 2022.
1 code implementation • 22 Jul 2022 • Huan Liu, Li Gu, Zhixiang Chi, Yang Wang, Yuanhao Yu, Jun Chen, Jin Tang
In this paper, we show through empirical results that adopting data replay is surprisingly favorable.
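Data replay in continual learning keeps a small buffer of past examples to mix into later training. A minimal sketch using reservoir sampling (my own illustration; the class and method names are assumptions, not the paper's implementation):

```python
import random

class ReplayBuffer:
    """Fixed-capacity replay buffer filled by reservoir sampling,
    so every example seen so far has an equal chance of being kept."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = []
        self.seen = 0

    def add(self, item):
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append(item)
        else:
            # Replace a stored item with probability capacity / seen
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.data[j] = item

    def sample(self, k):
        return random.sample(self.data, min(k, len(self.data)))

buf = ReplayBuffer(capacity=10)
for i in range(100):          # stream of past-task examples
    buf.add(i)
replay_batch = buf.sample(4)  # mixed into the current task's batches
```

During training on a new task, batches drawn from `sample` are interleaved with the new data to mitigate forgetting.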
no code implementations • CVPR 2022 • Zhixiang Chi, Li Gu, Huan Liu, Yang Wang, Yuanhao Yu, Jin Tang
The learning objective of these methods is often hand-engineered and is not directly tied to the test-time objective (i.e., incrementally learning new classes).
1 code implementation • ICCV 2019 • Xiaohui Zeng, Renjie Liao, Li Gu, Yuwen Xiong, Sanja Fidler, Raquel Urtasun
In practice, it performs similarly to the Hungarian algorithm during inference.
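The Hungarian algorithm referenced here solves minimum-cost bipartite matching. For intuition, the same problem can be solved exactly by brute force on tiny inputs (my own sketch; the Hungarian algorithm reaches the same optimum in polynomial time):

```python
from itertools import permutations

def optimal_assignment(cost):
    """Exact minimum-cost bipartite matching by exhaustive search.
    cost[i][j] is the cost of assigning row i to column j."""
    n = len(cost)
    best_perm, best_cost = None, float("inf")
    for perm in permutations(range(n)):
        c = sum(cost[i][perm[i]] for i in range(n))
        if c < best_cost:
            best_cost, best_perm = c, perm
    return best_perm, best_cost

cost = [[4, 1, 3],
        [2, 0, 5],
        [3, 2, 2]]
assignment, total = optimal_assignment(cost)  # rows 0,1,2 -> cols 1,0,2; cost 5
```

Brute force is O(n!), which is exactly why polynomial-time solvers like the Hungarian algorithm (or learned approximations, as in this paper) matter at scale.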
no code implementations • ICML 2018 • Kuan-Chieh Wang, Paul Vicol, James Lucas, Li Gu, Roger Grosse, Richard Zemel
We propose a framework, Adversarial Posterior Distillation, to distill the SGLD samples using a Generative Adversarial Network (GAN).