no code implementations • 1 Jun 2024 • Zhi Zhou, Ming Yang, Jiang-Xin Shi, Lan-Zhe Guo, Yu-Feng Li
In this paper, we explore a problem setting called Open-world Prompt Tuning (OPT), which involves tuning prompts on base classes and evaluating on a combination of base and new classes.
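In base-to-new evaluation settings like OPT, a single summary number is often the harmonic mean of base-class and new-class accuracy, which rewards doing well on both. A minimal sketch of that metric (an assumption about the protocol; the paper's exact OPT evaluation may differ):

```python
def open_world_score(acc_base: float, acc_new: float) -> float:
    """Harmonic mean of base- and new-class accuracy.

    A common summary metric for base-to-new prompt-tuning evaluation;
    used here as an illustrative sketch, not necessarily the paper's
    exact OPT protocol.
    """
    if acc_base + acc_new == 0:
        return 0.0
    return 2 * acc_base * acc_new / (acc_base + acc_new)
```

The harmonic mean is pulled toward the weaker of the two accuracies, so a prompt that overfits base classes and collapses on new ones scores poorly.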
no code implementations • 5 Oct 2023 • Jie-Jing Shao, Jiang-Xin Shi, Xiao-Wen Yang, Lan-Zhe Guo, Yu-Feng Li
Contrastive Language-Image Pre-training (CLIP) provides a foundation model by integrating natural language into visual concepts, enabling zero-shot recognition on downstream tasks.
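Zero-shot recognition with a CLIP-style model reduces to nearest-neighbor search in a shared embedding space: encode the image, encode a text prompt per class, and pick the class with the highest cosine similarity. A toy sketch with NumPy arrays standing in for real CLIP encoder outputs (the embeddings here are hypothetical placeholders):

```python
import numpy as np

def zero_shot_classify(image_emb: np.ndarray, class_text_embs: np.ndarray) -> int:
    """Return the index of the class whose text embedding is most
    cosine-similar to the image embedding.

    Sketch of CLIP-style zero-shot classification: `image_emb` is one
    D-dim image feature, `class_text_embs` is a (num_classes, D) matrix
    of text features for prompts like "a photo of a {class}".
    """
    img = image_emb / np.linalg.norm(image_emb)
    txt = class_text_embs / np.linalg.norm(class_text_embs, axis=1, keepdims=True)
    sims = txt @ img  # cosine similarity per class
    return int(np.argmax(sims))
```

With a real model, the two encoders are frozen and only the class-name prompts change per downstream task, which is what makes the recognition "zero-shot".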
1 code implementation • 18 Sep 2023 • Jiang-Xin Shi, Tong Wei, Zhi Zhou, Jie-Jing Shao, Xin-Yan Han, Yu-Feng Li
The fine-tuning paradigm in addressing long-tail learning tasks has sparked significant interest since the emergence of foundation models.
Ranked #1 on Long-tail Learning on iNaturalist 2018
Fine-Grained Image Classification • Long-tail Learning with Class Descriptors
4 code implementations • 8 Oct 2022 • Tong Wei, Zhen Mao, Jiang-Xin Shi, Yu-Feng Li, Min-Ling Zhang
Multi-label learning has attracted significant attention from both academia and industry in recent decades.
no code implementations • 26 May 2022 • Tong Wei, Qian-Yu Liu, Jiang-Xin Shi, Wei-Wei Tu, Lan-Zhe Guo
TRAS transforms the imbalanced pseudo-label distribution of a traditional SSL model via a carefully designed function to enhance the supervisory signals for minority classes.
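One standard way to reshape an imbalanced pseudo-label distribution is a logit-adjustment-style correction: subtract a scaled log class prior from the logits so head classes are penalized and tail classes promoted. The sketch below is in that spirit; the transformation function TRAS actually uses may differ.

```python
import numpy as np

def rebalance_pseudo_labels(logits: np.ndarray, class_counts: np.ndarray,
                            tau: float = 1.0) -> np.ndarray:
    """Rebalance pseudo-label probabilities toward minority classes.

    Sketch only (assumed logit-adjustment form, not TRAS's exact function):
    adjusted logits = logits - tau * log(prior), then softmax. Classes
    with small priors get a large positive boost.
    """
    prior = class_counts / class_counts.sum()
    adjusted = logits - tau * np.log(prior)
    # numerically stable softmax over the adjusted logits
    exp = np.exp(adjusted - adjusted.max(axis=-1, keepdims=True))
    return exp / exp.sum(axis=-1, keepdims=True)
```

For a sample the model scores equally across classes, the adjusted distribution puts more mass on the rarer class, strengthening its supervisory signal.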
no code implementations • 22 Oct 2021 • Tong Wei, Jiang-Xin Shi, Yu-Feng Li, Min-Ling Zhang
Deep neural networks have been shown to be very powerful methods for many supervised learning tasks.
no code implementations • 26 Aug 2021 • Tong Wei, Jiang-Xin Shi, Wei-Wei Tu, Yu-Feng Li
To overcome this limitation, we establish a new prototypical noise detection method by designing a distance-based metric that is resistant to label noise.
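A prototypical, distance-based noise check can be sketched as: compute each class's prototype (mean feature), then flag samples whose feature lies far from the prototype of their assigned label. The cosine-distance threshold below is a hypothetical choice, not the paper's exact metric:

```python
import numpy as np

def flag_noisy_labels(features: np.ndarray, labels: np.ndarray,
                      threshold: float = 0.3) -> np.ndarray:
    """Flag samples whose feature is far from its labelled class prototype.

    Illustrative sketch of prototype-based noise detection: prototypes are
    per-class feature means, and a sample is suspect when its cosine
    distance to its own class prototype exceeds `threshold`.
    """
    protos = {c: features[labels == c].mean(axis=0) for c in np.unique(labels)}
    flags = []
    for x, y in zip(features, labels):
        p = protos[y]
        cos = x @ p / (np.linalg.norm(x) * np.linalg.norm(p) + 1e-12)
        flags.append(1.0 - cos > threshold)  # large distance -> likely noisy
    return np.array(flags)
```

Because prototypes average over many samples, a few mislabeled points perturb them only mildly, which is what makes the distance metric comparatively resistant to label noise.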
Ranked #25 on Image Classification on mini WebVision 1.0