no code implementations • 26 Feb 2024 • Bowen Dong, Guanglei Yang, WangMeng Zuo, Lei Zhang
Empirical investigations into adapting existing frameworks to vanilla ViT reveal that incorporating visual adapters into ViTs, or fine-tuning ViTs with distillation terms, is advantageous for enhancing the segmentation of novel classes.
1 code implementation • 28 Dec 2023 • Wan Xu, Tianyu Huang, Tianyu Qu, Guanglei Yang, Yiwen Guo, WangMeng Zuo
Few-shot class-incremental learning (FSCIL) aims to mitigate the catastrophic forgetting issue when a model is incrementally trained on limited data.
Tasks: Dimensionality Reduction, Few-Shot Class-Incremental Learning (+2 more)
1 code implementation • 12 Oct 2023 • Zehao Wang, Yiwen Guo, Qizhang Li, Guanglei Yang, WangMeng Zuo
Most existing data augmentation methods tend to strike a compromise when augmenting the data, i.e., they increase the augmentation amplitude carefully to avoid degrading some samples too much and harming model performance.
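The amplitude trade-off described in this abstract can be illustrated with a minimal sketch (all names here are hypothetical and not from the paper): a noise augmentation whose magnitude is capped, where a larger cap yields more diverse data but risks degrading some samples.

```python
import random

def augment(x, max_magnitude=0.3):
    # Perturb each value by at most max_magnitude of its own size.
    # The cap is the "compromise": raising it increases diversity
    # but may degrade some samples too much. Illustrative only.
    magnitude = random.uniform(0.0, max_magnitude)
    sign = random.choice([-1.0, 1.0])
    return [v + sign * magnitude * abs(v) for v in x]

sample = [1.0, -2.0, 0.5]
augmented = augment(sample, max_magnitude=0.3)
# every value stays within 30% of its original size
assert all(abs(a - v) <= 0.3 * abs(v) + 1e-9
           for a, v in zip(augmented, sample))
```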
1 code implementation • 21 Aug 2023 • Jian Zou, Tianyu Huang, Guanglei Yang, Zhenhua Guo, WangMeng Zuo
The extension makes it possible to back-project the informative features, obtained by fusing features from both modalities, into their native modalities to reconstruct the multiple masked inputs.
1 code implementation • 26 Mar 2022 • Guanglei Yang, Enrico Fini, Dan Xu, Paolo Rota, Mingli Ding, Moin Nabi, Xavier Alameda-Pineda, Elisa Ricci
This problem has been widely investigated in the research community and several Incremental Learning (IL) approaches have been proposed in the past years.
1 code implementation • 1 Feb 2022 • Guanglei Yang, Enrico Fini, Dan Xu, Paolo Rota, Mingli Ding, Hao Tang, Xavier Alameda-Pineda, Elisa Ricci
To fill this gap, in this paper we introduce a novel attentive feature distillation approach to mitigate catastrophic forgetting while accounting for semantic spatial- and channel-level dependencies.
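The general shape of an attention-weighted distillation term can be sketched as follows (a toy formulation, not the paper's exact loss; names are illustrative): per-location differences between student and teacher features are weighted by an attention map, so positions the attention deems important are preserved more strongly against forgetting.

```python
def attentive_distillation_loss(student_feat, teacher_feat, attention):
    # Weight per-location squared feature differences by attention,
    # so important positions contribute more to the penalty.
    # Toy 1-D version; the paper operates on spatial/channel maps.
    assert len(student_feat) == len(teacher_feat) == len(attention)
    weighted = [a * (s - t) ** 2
                for a, s, t in zip(attention, student_feat, teacher_feat)]
    return sum(weighted) / len(weighted)

teacher = [1.0, 2.0, 3.0]
student = [1.0, 2.5, 3.0]
attn = [0.1, 0.8, 0.1]      # the middle position matters most
loss = attentive_distillation_loss(student, teacher, attn)
```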
1 code implementation • 19 Nov 2021 • Guanglei Yang, Hao Tang, Humphrey Shi, Mingli Ding, Nicu Sebe, Radu Timofte, Luc van Gool, Elisa Ricci
The global alignment network aims to transfer the input image from the source domain to the target domain.
1 code implementation • 19 Nov 2021 • Guanglei Yang, Zhun Zhong, Hao Tang, Mingli Ding, Nicu Sebe, Elisa Ricci
Specifically, in the image translation stage, Bi-Mix leverages the knowledge of day-night image pairs to improve the quality of nighttime image relighting.
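The core "mix paired day/night images" idea can be sketched as a per-pixel convex combination (a simplification: Bi-Mix's actual mixing policy is more involved, and images are flattened to lists here for brevity):

```python
def mix_images(day, night, lam):
    # Convex per-pixel combination of a paired day/night image.
    # lam=1 keeps the day image, lam=0 the night image.
    # Illustrative sketch only, not the paper's mixing strategy.
    assert 0.0 <= lam <= 1.0
    return [lam * d + (1.0 - lam) * n for d, n in zip(day, night)]

day = [0.9, 0.8, 0.7]    # bright pixel intensities
night = [0.1, 0.2, 0.1]  # dark pixel intensities
mixed = mix_images(day, night, lam=0.5)
```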
1 code implementation • 28 May 2021 • Guanglei Yang, Hao Tang, Zhun Zhong, Mingli Ding, Ling Shao, Nicu Sebe, Elisa Ricci
In this paper, we study the task of source-free domain adaptation (SFDA), where the source data are not available during target adaptation.
1 code implementation • ICCV 2021 • Guanglei Yang, Hao Tang, Mingli Ding, Nicu Sebe, Elisa Ricci
While convolutional neural networks have shown a tremendous impact on various computer vision tasks, they generally demonstrate limitations in explicitly modeling long-range dependencies due to the intrinsic locality of the convolution operation.
Ranked #8 on Depth Estimation on NYU-Depth V2
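The locality limitation noted above is exactly what self-attention sidesteps: every output position attends to every input position, rather than a fixed local window. A minimal single-head, scalar-feature sketch (purely illustrative, not this paper's architecture):

```python
import math

def self_attention(x):
    # Queries = keys = values = x. Each output is a softmax-weighted
    # average over ALL positions, so dependencies are global,
    # unlike a convolution's fixed receptive field.
    out = []
    for q in x:
        scores = [q * k for k in x]               # dot-product scores
        m = max(scores)
        exps = [math.exp(s - m) for s in scores]  # numerically stable softmax
        z = sum(exps)
        weights = [e / z for e in exps]
        out.append(sum(w * v for w, v in zip(weights, x)))
    return out

seq = [1.0, 0.0, -1.0, 2.0]
attended = self_attention(seq)
```

Because each output is a convex combination of the inputs, every attended value lies within the range of the original sequence.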
1 code implementation • 5 Mar 2021 • Guanglei Yang, Paolo Rota, Xavier Alameda-Pineda, Dan Xu, Mingli Ding, Elisa Ricci
Specifically, we integrate the estimation and the interaction of the attentions within a probabilistic representation learning framework, leading to Variational STructured Attention networks (VISTA-Net).
1 code implementation • 1 Jan 2021 • Guanglei Yang, Paolo Rota, Xavier Alameda-Pineda, Dan Xu, Mingli Ding, Elisa Ricci
State-of-the-art performances in dense pixel-wise prediction tasks are obtained with specifically designed convolutional networks.
no code implementations • 12 Feb 2020 • Guanglei Yang, Haifeng Xia, Mingli Ding, Zhengming Ding
To balance mitigating the domain gap and preserving the inherent structure, we propose a Bi-Directional Generation domain adaptation model with consistent classifiers, which interpolates two intermediate domains to bridge the source and target domains.
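The "intermediate domains bridging source and target" idea can be sketched as feature interpolation (a toy stand-in for the paper's generators; names are illustrative):

```python
def intermediate_domain(src_feat, tgt_feat, alpha):
    # alpha=0 reproduces the source features, alpha=1 the target;
    # intermediate alphas sketch the bridging domains. This is
    # only an illustration, not the paper's generative model.
    return [(1.0 - alpha) * s + alpha * t
            for s, t in zip(src_feat, tgt_feat)]

src = [1.0, 0.0]
tgt = [0.0, 1.0]
near_src = intermediate_domain(src, tgt, alpha=1 / 3)  # closer to source
near_tgt = intermediate_domain(src, tgt, alpha=2 / 3)  # closer to target
```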