no code implementations • ECCV 2020 • Yang Liu, Qingchao Chen, Andrew Zisserman
In this paper we introduce two methods to amplify key cues in the image, and also a method to combine these and other cues when considering the interaction between a human and an object.
no code implementations • 5 May 2024 • Yuanye Liu, Zheyao Gao, Nannan Shi, Fuping Wu, Yuxin Shi, Qingchao Chen, Xiahai Zhuang
MERIT enables uncertainty quantification of the predictions to enhance reliability, and employs a logic-based combination rule to improve interpretability.
no code implementations • 20 Mar 2024 • Zhen Yu, Yang Liu, Qingchao Chen
To overcome these barriers, we propose a novel progressive trajectory matching strategy that improves training stability for medical image dataset distillation.
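The paper does not give its progressive variant here, but the underlying trajectory-matching objective for dataset distillation can be sketched: the student, trained on the synthetic set, should land close to the expert's parameters, with the distance normalized by how far the expert moved. This is a minimal NumPy sketch of that generic loss; the function name and toy vectors are illustrative, not taken from the paper.

```python
import numpy as np

def trajectory_matching_loss(student_end, expert_start, expert_end):
    """Normalized parameter-matching loss used in trajectory-matching
    dataset distillation: squared distance between the student's final
    parameters and the expert's target parameters, divided by the
    distance the expert itself travelled (so the loss is scale-free)."""
    num = np.sum((student_end - expert_end) ** 2)
    den = np.sum((expert_start - expert_end) ** 2)
    return num / den

# Toy 2-D "parameter vectors" for illustration only.
expert_start = np.array([0.0, 0.0])
expert_end = np.array([1.0, 1.0])
perfect_student = expert_end.copy()
print(trajectory_matching_loss(perfect_student, expert_start, expert_end))  # 0.0
```

A student that ends exactly where the expert ended scores 0; one that never moved from the expert's starting point scores 1, which is what makes the normalization a natural progress measure.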
1 code implementation • 3 Mar 2024 • Jiangbo Pei, Ruizhe Li, Qingchao Chen
Specifically, we first conduct source model selection based on the proposed selection principles.
no code implementations • 10 Nov 2023 • Yinsong Xu, Jiaqi Tang, Aidong Men, Qingchao Chen
Then, we incorporate the human prior into the prompts, which is vital for alleviating the domain gap between natural and medical images and enhancing the applicability and usefulness of SAM in medical scenarios.
1 code implementation • ICCV 2023 • Ting Lei, Fabian Caba, Qingchao Chen, Hailin Jin, Yuxin Peng, Yang Liu
This observation motivates us to design an HOI detector that can be trained even with long-tailed labeled data and can leverage existing knowledge from pre-trained models.
no code implementations • 6 Aug 2023 • Yinsong Xu, Aidong Men, Yang Liu, Qingchao Chen
To answer the first question, we empirically observe an interesting Spontaneous Pulling (SP) Effect during fine-tuning: the discrepancies between any two of the three domains (ImageNet, Source, Target) decrease, but at the cost of impairing the semantic structure of the pre-training domain.
1 code implementation • ICCV 2023 • Zijing Zhao, Sitong Wei, Qingchao Chen, Dehui Li, Yifan Yang, Yuxin Peng, Yang Liu
This helps the student model capture target domain characteristics and become a more data-efficient learner to gain knowledge from the limited number of pseudo boxes.
1 code implementation • ICCV 2023 • Yang Liu, Jiahua Zhang, Qingchao Chen, Yuxin Peng
Visual grounding aims to localize the target object in an image that is most relevant to a given free-form natural language query.
1 code implementation • 30 Aug 2022 • Jiangbo Pei, Zhuqing Jiang, Aidong Men, Liang Chen, Yang Liu, Qingchao Chen
Secondly, based on the UTR, we propose a novel Calibrated Adaption Framework (CAF) for SFUDA, including i) the source knowledge calibration module that guides the target model to learn the transferable source knowledge and discard the non-transferable one, and ii) the target semantics calibration module that calibrates the unreliable semantics.
1 code implementation • 28 Aug 2022 • Yinsong Xu, Zhuqing Jiang, Aidong Men, Yang Liu, Qingchao Chen
Existing domain adaptation methods assume that domain discrepancies are caused by a few discrete attributes and variations, e.g., art, real, painting, quickdraw, etc.
1 code implementation • 11 Aug 2022 • Jianan Han, Shaoxing Zhang, Aidong Men, Yang Liu, Ziming Yao, Yan Yan, Qingchao Chen
$S^3VE$ is a large-scale dataset of synchronized infrared video and EEG signals for sleep stage classification, comprising 105 subjects and 154,573 video clips totaling more than 1,100 hours.
1 code implementation • CVPR 2022 • Minghang Zheng, Yanjie Huang, Qingchao Chen, Yuxin Peng, Yang Liu
Moreover, they train their model to distinguish positive visual-language pairs from negative ones randomly collected from other videos, ignoring the highly confusing video segments within the same video.
Ranked #7 on Temporal Sentence Grounding on Charades-STA
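The key idea above, treating confusing segments from the same video as hard negatives rather than only sampling negatives from other videos, can be sketched as an InfoNCE-style contrastive loss over similarity scores. The function name, arguments, and temperature are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def contrastive_loss_with_hard_negatives(pos_sim, intra_neg_sims, inter_neg_sims, tau=0.1):
    """InfoNCE-style loss where the negative pool includes confusing
    segments from the SAME video (intra-video hard negatives) alongside
    the usual negatives drawn from other videos. Inputs are similarity
    scores (e.g. cosine similarities) between query and video segments."""
    logits = np.concatenate(([pos_sim], intra_neg_sims, inter_neg_sims)) / tau
    logits -= logits.max()  # subtract max for numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[0])  # cross-entropy with the positive as the target
```

Because intra-video segments share background and context with the positive, their similarity scores tend to be high, so including them in the denominator gives a much stronger training signal than random cross-video negatives alone.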
no code implementations • CVPR 2021 • Yang Liu, Qingchao Chen, Samuel Albanie
In this paper, we study the task of visual-text retrieval in the highly practical setting in which labelled visual data with paired text descriptions are available in one domain (the "source"), but only unlabelled visual data (without text descriptions) are available in the domain of interest (the "target").
no code implementations • 29 Aug 2020 • Qianye Yang, Yunguan Fu, Francesco Giganti, Nooshin Ghavami, Qingchao Chen, J. Alison Noble, Tom Vercauteren, Dean Barratt, Yipeng Hu
Morphological analysis of longitudinal MR images plays a key role in monitoring disease progression for prostate cancer patients, who are placed under an active surveillance program.
no code implementations • CVPR 2018 • Qingchao Chen, Yang Liu, Zhaowen Wang, Ian Wassell, Kevin Chetty
In this paper, we propose the Re-weighted Adversarial Adaptation Network (RAAN) to reduce the feature distribution divergence and adapt the classifier when domain discrepancies are disparate.
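The re-weighting idea can be sketched as a domain-discriminator loss in which each source example carries an importance weight, so the weighted source distribution better covers the target. This is a minimal NumPy sketch under assumed inputs (per-example discriminator probabilities and weights); it is not RAAN's exact estimator for those weights.

```python
import numpy as np

def reweighted_adversarial_loss(d_source, d_target, weights):
    """Discriminator loss for re-weighted adversarial adaptation.

    d_source / d_target: the discriminator's probability of the "source"
    label for each source / target example.
    weights: per-source-example importance weights (how the weights are
    estimated is the method-specific part, omitted here)."""
    eps = 1e-12  # guard against log(0)
    src_term = -np.mean(weights * np.log(d_source + eps))
    tgt_term = -np.mean(np.log(1.0 - d_target + eps))
    return src_term + tgt_term
```

With all weights equal to 1 this reduces to the standard adversarial domain loss; up-weighting source examples that resemble the target is what lets the adaptation cope with disparate domain discrepancies.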
Open-Ended Question Answering • Unsupervised Domain Adaptation