1 code implementation • 21 Aug 2023 • Mingkai Zheng, Shan You, Lang Huang, Xiu Su, Fei Wang, Chen Qian, Xiaogang Wang, Chang Xu
Moreover, to further boost the performance, we propose "distributional consistency" as a more informative regularization to enable similar instances to have a similar probability distribution.
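The entry above describes a consistency regularizer that pushes similar instances toward similar probability distributions. A minimal sketch of that idea, assuming two sets of logits from related instances (the function name and temperature are illustrative, not the authors' exact formulation):

```python
import torch
import torch.nn.functional as F

def distributional_consistency_loss(logits_a, logits_b, temperature=0.5):
    """Hypothetical sketch: penalize divergence between the predicted
    distributions of two similar instances (or two views of one instance)."""
    p_a = F.softmax(logits_a / temperature, dim=1)
    log_p_b = F.log_softmax(logits_b / temperature, dim=1)
    # KL(p_a || p_b), averaged over the batch; zero when the two match
    return F.kl_div(log_p_b, p_a, reduction="batchmean")
```

When the two logit sets coincide the loss is zero, so the regularizer only acts when similar instances disagree.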
2 code implementations • ICCV 2023 • Mingkai Zheng, Shan You, Lang Huang, Chen Luo, Fei Wang, Chen Qian, Chang Xu
Semi-supervised image classification is one of the most fundamental problems in computer vision, as it significantly reduces the need for human labeling effort.
1 code implementation • NeurIPS 2023 • Tao Huang, Yuan Zhang, Mingkai Zheng, Shan You, Fei Wang, Chen Qian, Chang Xu
To address this, we propose to denoise student features using a diffusion model trained on teacher features.
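The core idea in this entry is that a denoiser fitted to teacher features can clean up student features. A toy sketch under that assumption (the MLP denoiser, noise schedule, and loss are placeholders, not the paper's actual diffusion model):

```python
import torch
import torch.nn as nn

class FeatureDenoiser(nn.Module):
    """Illustrative stand-in for a diffusion-style denoiser over features."""
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, x):
        return self.net(x)

def denoiser_training_loss(denoiser, teacher_feats, sigma=0.1):
    """Train the denoiser to recover clean teacher features from noisy ones;
    at distillation time it would then be applied to student features."""
    noisy = teacher_feats + torch.randn_like(teacher_feats) * sigma
    recon = denoiser(noisy)
    return ((recon - teacher_feats) ** 2).mean()
```

The intuition: once the denoiser has learned the teacher's feature manifold, passing student features through it nudges them toward that manifold.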
1 code implementation • 21 Apr 2023 • Mingkai Zheng, Xiu Su, Shan You, Fei Wang, Chen Qian, Chang Xu, Samuel Albanie
We investigate the potential of GPT-4 to perform Neural Architecture Search (NAS) -- the task of designing effective neural architectures.
1 code implementation • 26 Oct 2022 • Haoyu Xie, Changqi Wang, Mingkai Zheng, Minjing Dong, Shan You, Chong Fu, Chang Xu
In prevalent pixel-wise contrastive learning solutions, the model maps pixels to deterministic representations and regularizes them in the latent space.
1 code implementation • 26 May 2022 • Lang Huang, Shan You, Mingkai Zheng, Fei Wang, Chen Qian, Toshihiko Yamasaki
We present an efficient approach for Masked Image Modeling (MIM) with hierarchical Vision Transformers (ViTs), allowing the hierarchical ViTs to discard masked patches and operate only on the visible ones.
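The efficiency gain described above comes from discarding masked patches before the encoder runs. A shape-level sketch of that "keep only visible tokens" step (random per-image masking; the function name and mask ratio are assumptions, not the paper's exact procedure):

```python
import torch

def keep_visible(tokens, mask_ratio=0.75):
    """Drop a random subset of patch tokens per image so the encoder
    operates only on the visible ones. Shapes: (B, N, D) -> (B, N_keep, D)."""
    B, N, D = tokens.shape
    n_keep = int(N * (1 - mask_ratio))
    noise = torch.rand(B, N)                              # random score per token
    keep_idx = noise.argsort(dim=1)[:, :n_keep]           # indices of kept tokens
    visible = torch.gather(tokens, 1, keep_idx.unsqueeze(-1).expand(-1, -1, D))
    return visible, keep_idx
```

With a 75% mask ratio the encoder sees only a quarter of the tokens, which is where the training speedup comes from; the paper's contribution is making this compatible with hierarchical ViTs, whose windowed stages normally require the full grid.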
1 code implementation • CVPR 2022 • Lang Huang, Shan You, Mingkai Zheng, Fei Wang, Chen Qian, Toshihiko Yamasaki
In this paper, we present a new approach, Learning Where to Learn (LEWEL), to adaptively aggregate spatial information of features, so that the projected embeddings could be exactly aligned and thus guide the feature learning better.
no code implementations • 16 Mar 2022 • Mingkai Zheng, Shan You, Fei Wang, Chen Qian, ChangShui Zhang, Xiaogang Wang, Chang Xu
Self-supervised Learning (SSL) including the mainstream contrastive learning has achieved great success in learning visual representations without data annotations.
Ranked #60 on Self-Supervised Image Classification on ImageNet
1 code implementation • CVPR 2022 • Mingkai Zheng, Shan You, Lang Huang, Fei Wang, Chen Qian, Chang Xu
Learning from few labeled examples has been a longstanding problem in the computer vision and machine learning research community.
1 code implementation • ICCV 2021 • Mingkai Zheng, Fei Wang, Shan You, Chen Qian, ChangShui Zhang, Xiaogang Wang, Chang Xu
Specifically, our proposed framework is based on two projection heads, one of which will perform the regular instance discrimination task.
2 code implementations • NeurIPS 2021 • Mingkai Zheng, Shan You, Fei Wang, Chen Qian, ChangShui Zhang, Xiaogang Wang, Chang Xu
Self-supervised Learning (SSL) including the mainstream contrastive learning has achieved great success in learning visual representations without data annotations.
Ranked #78 on Self-Supervised Image Classification on ImageNet
1 code implementation • 25 Jun 2021 • Xiu Su, Shan You, Jiyang Xie, Mingkai Zheng, Fei Wang, Chen Qian, ChangShui Zhang, Xiaogang Wang, Chang Xu
Vision transformers (ViTs) have inherited the success of NLP, but their structures have not been sufficiently investigated and optimized for visual tasks.
no code implementations • 11 Jun 2021 • Xiu Su, Shan You, Mingkai Zheng, Fei Wang, Chen Qian, ChangShui Zhang, Chang Xu
The operation weight for each path is represented as a convex combination of items in a dictionary with a simplex code.
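The sentence above describes each path's operation weight as a convex combination of dictionary items selected by a simplex code. A minimal sketch of that construction (softmax is used here to place the code on the simplex; names and shapes are assumptions):

```python
import torch
import torch.nn.functional as F

def operation_weight(dictionary, code_logits):
    """Build an operation weight as a convex combination of K dictionary
    items. dictionary: (K, out, in); code_logits: (K,).
    Softmax makes the code nonnegative and sum to 1 (a simplex code)."""
    simplex_code = F.softmax(code_logits, dim=-1)
    # Weighted sum over the K dictionary items
    return torch.einsum("k,koi->oi", simplex_code, dictionary)
```

Because the code lies on the probability simplex, the resulting weight always stays inside the convex hull of the dictionary items.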