no code implementations • ECCV 2020 • Jianqiao An, Yucheng Shi, Yahong Han, Meijun Sun, Qi Tian
For a certain object in an image, the relationship between its central region and the peripheral region is not well utilized in existing superpixel segmentation methods.
no code implementations • 28 Apr 2024 • Mingshi Yan, Fan Liu, Jing Sun, Fuming Sun, Zhiyong Cheng, Yahong Han
Our proposed Behavior-Contextualized Item Preference Network discerns and learns users' specific item preferences within each behavior.
1 code implementation • 9 Mar 2024 • Runhua Jiang, Yahong Han
To address this issue, we propose a model reprogramming framework, which translates out-of-sample degradations using quantum mechanics and wave functions.
no code implementations • 28 Feb 2024 • Deng Li, Aming Wu, YaoWei Wang, Yahong Han
In this paper, we propose a dynamic object-centric perception network based on prompt learning, aiming to adapt to the variations in image complexity.
no code implementations • 5 Jan 2024 • Yikang Wei, Yahong Han
Federated Domain Generalization aims to learn a domain-invariant model from multiple decentralized source domains for deployment on an unseen target domain.
2 code implementations • 28 Sep 2023 • Nana Yu, Hong Shi, Yahong Han
Specifically, the proposed method, called the Joint Correcting and Refinement Network (JCRNet), mainly consists of three stages that balance brightness, color, and illumination during enhancement.
2 code implementations • 28 Sep 2023 • Yidan Fan, Yongxin Yu, Wenhuan Lu, Yahong Han
Our approach takes into account snippet-level encoded features without the supervision of pseudo labels.
no code implementations • CVPR 2023 • Zixuan Qin, Liu Yang, Qilong Wang, Yahong Han, Qinghua Hu
When there are large differences in data distribution among clients, it is crucial for federated learning to design a reliable client selection strategy and an interpretable client communication framework to better utilize group knowledge.
1 code implementation • 26 Dec 2022 • Deng Li, Aming Wu, Yahong Han, Qi Tian
Considering the complexity and variability of real scene tasks, we propose a Prototype-guided Cross-task Knowledge Distillation (ProC-KD) approach to transfer the intrinsic local-level object knowledge of a large-scale teacher network to various task scenarios.
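The prototype-guided, local-level transfer of ProC-KD is not reproduced here, but the generic teacher-to-student distillation term it builds on can be sketched as a temperature-softened KL divergence; the logits and temperature below are illustrative assumptions.

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-softened softmax over the last axis."""
    e = np.exp(z / T - np.max(z / T, axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def distill_loss(student_logits, teacher_logits, T=2.0):
    """Standard knowledge-distillation term: KL divergence between the
    softened teacher and student distributions. Only the generic KD
    loss is shown; ProC-KD's prototype guidance is omitted."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return np.sum(p * (np.log(p) - np.log(q)), axis=-1).mean()

t = np.array([[2.0, 0.5, -1.0]])          # toy teacher logits
assert distill_loss(t, t) < 1e-9          # identical logits, zero loss
assert distill_loss(np.zeros((1, 3)), t) > 0
```

A student matching the teacher exactly incurs zero loss; any mismatch is penalized in proportion to the divergence of the softened distributions.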
no code implementations • 11 Mar 2022 • YaoWei Wang, Zhouxin Yang, Rui Liu, Deng Li, Yuandu Lai, Leyuan Fang, Yahong Han
Considering the diversity and complexity of scenes in intelligent city governance, we build a large-scale object detection benchmark for the smart city.
no code implementations • 17 Jan 2022 • Xu Chen, Yahong Han, Xiaohan Wang, Yifan Sun, Yi Yang
An effective approach is to select informative content from the holistic video, yielding a popular family of dynamic video recognition methods.
Ranked #42 on Action Recognition on Something-Something V1
no code implementations • 16 Dec 2021 • Rui Liu, Yahong Han, YaoWei Wang, Qi Tian
In the second stage, augmented source and target data with pseudo labels are adopted to perform the self-training for prediction consistency.
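The second-stage self-training can be sketched as confidence-thresholded pseudo-labeling; the softmax probabilities and the 0.9 threshold below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def pseudo_label_self_training(probs, threshold=0.9):
    """Keep only confident target predictions as pseudo labels.

    `probs` are softmax outputs on unlabeled target data; samples
    whose maximum class probability exceeds `threshold` receive a
    pseudo label and would be fed back into training.
    """
    confidence = probs.max(axis=1)
    labels = probs.argmax(axis=1)
    keep = confidence >= threshold
    return labels[keep], keep

probs = np.array([[0.95, 0.05],
                  [0.60, 0.40],
                  [0.02, 0.98]])
labels, keep = pseudo_label_self_training(probs)
# only the first and third samples are confident enough to be kept
```

Filtering by confidence is what keeps the self-training loop consistent: low-confidence target predictions never feed back into the model.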
1 code implementation • 7 Dec 2021 • Yucheng Shi, Yahong Han, Yu-an Tan, Xiaohui Kuang
On the other hand, the neglect of noise sensitivity differences between image regions by existing decision-based attacks further compromises the efficiency of noise compression, especially for ViTs.
1 code implementation • ICCV 2021 • Aming Wu, Rui Liu, Yahong Han, Linchao Zhu, Yi Yang
Secondly, domain-specific representations are introduced as the differences between the input and domain-invariant representations.
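The residual decomposition described above can be sketched in a few lines; taking the mean over samples as the domain-invariant part is an illustrative stand-in for the paper's learned representation, not its actual architecture.

```python
import numpy as np

def decompose(features):
    """Split features into domain-invariant and domain-specific parts.

    Sketch: the domain-invariant representation is the mean over all
    samples; the domain-specific representation is the residual, i.e.
    the difference between the input and the invariant part.
    """
    invariant = features.mean(axis=0, keepdims=True)  # shared part
    specific = features - invariant                   # per-sample residual
    return invariant, specific

feats = np.random.rand(8, 16)        # 8 samples, 16-dim features
inv, spec = decompose(feats)
# invariant + specific recovers the input exactly, by construction
assert np.allclose(inv + spec, feats)
```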
no code implementations • 21 Jul 2021 • Kunhong Wu, Yucheng Shi, Yahong Han, Yunfeng Shao, Bingshuai Li, Qi Tian
Existing unsupervised domain adaptation (UDA) methods can achieve promising performance without transferring data from source domain to target domain.
no code implementations • 30 Apr 2021 • Yuandu Lai, Yahong Han, YaoWei Wang
Recent efforts towards video anomaly detection (VAD) try to learn a deep autoencoder to describe normal event patterns with small reconstruction errors.
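The reconstruction-error scoring behind autoencoder-based VAD can be sketched with a toy linear encoder/decoder standing in for a deep network; the frame dimensions and the subspace construction are illustrative assumptions.

```python
import numpy as np

def anomaly_scores(frames, encode, decode):
    """Per-frame reconstruction error: normal patterns reconstruct
    with small error, so a large score flags an anomaly."""
    recon = decode(encode(frames))
    return np.mean((frames - recon) ** 2, axis=1)

rng = np.random.default_rng(0)
W = rng.normal(size=(16, 4))               # toy linear "autoencoder"
encode = lambda x: x @ W                   # 16-dim frame -> 4-dim code
decode = lambda z: z @ np.linalg.pinv(W)   # code -> reconstruction

normal = rng.normal(size=(5, 4)) @ W.T     # frames on the learned subspace
anomaly = rng.normal(size=(1, 16)) * 5.0   # off-manifold frame
scores = anomaly_scores(np.vstack([normal, anomaly]), encode, decode)
# the off-manifold frame receives the largest reconstruction error
```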
no code implementations • 27 Apr 2021 • Yuandu Lai, Yucheng Shi, Yahong Han, Yunfeng Shao, Meiyu Qi, Bingshuai Li
In this paper, we explore uncertainty in deep learning to construct prediction intervals.
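One common way to turn predictive uncertainty into an interval is to take empirical quantiles over an ensemble of predictions; the random draws below stand in for multiple networks, and the 90% coverage level is an illustrative assumption.

```python
import numpy as np

def prediction_interval(preds, alpha=0.1):
    """Form a (1 - alpha) prediction interval from an ensemble of
    model predictions via empirical lower/upper quantiles."""
    lo = np.quantile(preds, alpha / 2, axis=0)
    hi = np.quantile(preds, 1 - alpha / 2, axis=0)
    return lo, hi

rng = np.random.default_rng(1)
ensemble = rng.normal(loc=3.0, scale=0.5, size=(200, 1))  # 200 members
lo, hi = prediction_interval(ensemble)
# the interval brackets the ensemble mean
```

Wider ensemble spread (higher uncertainty) directly yields wider intervals, which is the behavior an uncertainty-aware interval should have.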
1 code implementation • ICCV 2021 • Aming Wu, Yahong Han, Linchao Zhu, Yi Yang
Thus, we develop a new framework of few-shot object detection with universal prototypes (FSOD^{up}) that owns the merit of feature generalization towards novel objects.
Ranked #23 on Few-Shot Object Detection on MS-COCO (10-shot)
no code implementations • IEEE Transactions on Circuits and Systems for Video Technology 2020 • Aming Wu, Yahong Han, Zhou Zhao, Yi Yang
In this article, we devise a novel memory decoder for visual narrating.
Ranked #13 on Visual Storytelling on VIST
no code implementations • 27 Feb 2020 • Aming Wu, Yahong Han
Instead of the common practice, i.e., sequence decoding with RNN, in this paper, we devise a novel memory decoder for video captioning.
1 code implementation • NeurIPS 2019 • Aming Wu, Linchao Zhu, Yahong Han, Yi Yang
Inspired by this idea, towards VCR, we propose a connective cognition network (CCN) to dynamically reorganize the visual neuron connectivity that is contextualized by the meaning of questions and answers.
no code implementations • 20 Nov 2019 • Aming Wu, Yahong Han, Linchao Zhu, Yi Yang
Most state-of-the-art methods of object detection suffer from poor generalization ability when the training and test data are from different domains, e.g., with different styles.
1 code implementation • CVPR 2019 • Yucheng Shi, Siyu Wang, Yahong Han
On the one hand, existing iterative attacks add noises monotonically along the direction of gradient ascent, resulting in a lack of diversity and adaptability of the generated iterative trajectories.
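The monotone gradient-ascent behavior criticized above can be sketched as a plain iterative sign-gradient attack on a toy loss; the step size, iteration count, and hand-written gradient are illustrative assumptions, not the paper's proposed method.

```python
import numpy as np

def iterative_attack(x, grad_fn, step=0.1, iters=5):
    """Step monotonically along the sign of the loss gradient, as the
    existing iterative attacks described above do."""
    x_adv = x.copy()
    for _ in range(iters):
        x_adv = x_adv + step * np.sign(grad_fn(x_adv))
    return x_adv

# toy loss L(x) = ||x - target||^2 with a hand-written gradient
target = np.array([1.0, -1.0])
loss = lambda x: np.sum((x - target) ** 2)
grad_fn = lambda x: 2 * (x - target)

x0 = np.zeros(2)
x_adv = iterative_attack(x0, grad_fn)
# every iteration moves x further from the target, increasing the loss
```

Because every step follows the same sign pattern, the trajectory is a straight march away from the target, which is exactly the lack of diversity the entry points out.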
no code implementations • 25 Apr 2018 • Bo Wang, Youjiang Xu, Yahong Han, Richang Hong
Movies provide us with a wealth of visual content as well as engaging stories.
no code implementations • 6 Nov 2015 • Shichao Zhao, Yanbin Liu, Yahong Han, Richang Hong
It achieves an accuracy of 93.78% on UCF101, which is state-of-the-art, and an accuracy of 65.62% on HMDB51, which is comparable to the state-of-the-art.