Search Results for author: Zhaojun Guo

Found 2 papers, 1 paper with code

What Makes Good Few-shot Examples for Vision-Language Models?

No code implementations · 22 May 2024 · Zhaojun Guo, Jinghui Lu, Xuejing Liu, Rui Zhao, Zhenxing Qian, Fei Tan

Despite the notable advances achieved by few-shot tuning of pre-trained vision-language (VL) models for downstream tasks, our detailed empirical study shows that few-shot learning outcomes depend heavily on the careful selection of training examples, a facet previously overlooked in research.

Deeply Coupled Cross-Modal Prompt Learning

1 code implementation · 29 May 2023 · Xuejing Liu, Wei Tang, Jinghui Lu, Rui Zhao, Zhaojun Guo, Fei Tan

Recent multimodal foundation models (e.g., CLIP) have excelled at zero-shot generalization.

Tasks: Domain Adaptation · Few-Shot Learning · +3
