Search Results for author: Yujun Shi

Found 15 papers, 12 papers with code

MagicTime: Time-lapse Video Generation Models as Metamorphic Simulators

2 code implementations • 7 Apr 2024 • Shenghai Yuan, Jinfa Huang, Yujun Shi, Yongqi Xu, Ruijie Zhu, Bin Lin, Xinhua Cheng, Li Yuan, Jiebo Luo

Recent advances in Text-to-Video generation (T2V) have achieved remarkable success in synthesizing high-quality general videos from textual descriptions.

Text-to-Video Generation • Video Generation

Envision3D: One Image to 3D with Anchor Views Interpolation

1 code implementation • 13 Mar 2024 • Yatian Pang, Tanghui Jia, Yujun Shi, Zhenyu Tang, Junwu Zhang, Xinhua Cheng, Xing Zhou, Francis E. H. Tay, Li Yuan

To address this issue, we propose a novel cascade diffusion framework, which decomposes the challenging dense views generation task into two tractable stages, namely anchor views generation and anchor views interpolation.

Image to 3D

DragDiffusion: Harnessing Diffusion Models for Interactive Point-based Image Editing

3 code implementations • 26 Jun 2023 • Yujun Shi, Chuhui Xue, Jun Hao Liew, Jiachun Pan, Hanshu Yan, Wenqing Zhang, Vincent Y. F. Tan, Song Bai

In this work, we extend this editing framework to diffusion models and propose a novel approach DragDiffusion.

Learning with Fantasy: Semantic-Aware Virtual Contrastive Constraint for Few-Shot Class-Incremental Learning

1 code implementation • CVPR 2023 • Zeyin Song, Yifan Zhao, Yujun Shi, Peixi Peng, Li Yuan, Yonghong Tian

However, in this work, we find that the CE loss is not ideal for the base session training, as it suffers from poor class separation in terms of representations, which further degrades generalization to novel classes.

Contrastive Learning • Few-Shot Class-Incremental Learning • +1
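The "poor class separation" claim above can be made concrete with a simple diagnostic. Below is an illustrative sketch (not the paper's metric): the ratio of mean between-class centroid distance to mean within-class scatter for a batch of representations, where a higher ratio indicates better-separated classes.

```python
import numpy as np

def class_separation(features, labels):
    """Ratio of mean between-class centroid distance to mean within-class
    scatter. Higher values indicate better-separated class representations."""
    classes = np.unique(labels)
    centroids = np.stack([features[labels == c].mean(axis=0) for c in classes])
    # within-class scatter: average distance of samples to their own centroid
    within = np.mean([
        np.linalg.norm(features[labels == c] - centroids[i], axis=1).mean()
        for i, c in enumerate(classes)
    ])
    # between-class separation: mean pairwise distance between centroids
    diffs = centroids[:, None, :] - centroids[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    between = dists[np.triu_indices(len(classes), k=1)].mean()
    return between / (within + 1e-8)
```

Comparing this ratio for a CE-trained encoder against a contrastively trained one would quantify the separation gap the abstract refers to.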

Towards Understanding and Mitigating Dimensional Collapse in Heterogeneous Federated Learning

2 code implementations • 1 Oct 2022 • Yujun Shi, Jian Liang, Wenqing Zhang, Vincent Y. F. Tan, Song Bai

To remedy this problem caused by the data heterogeneity, we propose FedDecorr, a novel method that can effectively mitigate dimensional collapse in federated learning.

Federated Learning
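The snippet does not spell out the regularizer, but the general idea of countering dimensional collapse by penalizing correlation between feature dimensions can be sketched as follows. This is a minimal NumPy illustration of that idea, not the paper's exact loss: it returns the mean squared off-diagonal entry of the batch correlation matrix, which shrinks as feature dimensions decorrelate.

```python
import numpy as np

def decorrelation_loss(z, eps=1e-8):
    """Penalty on correlation between feature dimensions.

    z: representations of shape (batch, dim). Returns the mean squared
    off-diagonal entry of the empirical correlation matrix; this is near
    zero when dimensions are decorrelated and large under collapse.
    """
    z = z - z.mean(axis=0, keepdims=True)         # center each dimension
    z = z / (z.std(axis=0, keepdims=True) + eps)  # scale to unit variance
    n, d = z.shape
    corr = (z.T @ z) / n                          # (dim, dim) correlation matrix
    off_diag = corr - np.diag(np.diag(corr))      # ignore self-correlations
    return float((off_diag ** 2).sum() / (d * (d - 1)))
```

Adding such a term to each client's local objective discourages all feature dimensions from collapsing onto a low-dimensional subspace.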

Mimicking the Oracle: An Initial Phase Decorrelation Approach for Class Incremental Learning

1 code implementation • CVPR 2022 • Yujun Shi, Kuangqi Zhou, Jian Liang, Zihang Jiang, Jiashi Feng, Philip Torr, Song Bai, Vincent Y. F. Tan

Specifically, we experimentally show that directly encouraging the CIL learner at the initial phase to output representations similar to those of a model jointly trained on all classes can greatly boost CIL performance.

Class Incremental Learning • Incremental Learning
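The experiment described above, matching the learner's initial-phase representations to those of a jointly trained reference model, amounts to a feature-mimicking objective. A minimal sketch of that experiment (not the paper's proposed method, which does not require access to such an oracle):

```python
import numpy as np

def feature_mimic_loss(f_learner, f_oracle):
    """Mean squared distance between the learner's representations and those
    of a reference model jointly trained on all classes (the "oracle")."""
    assert f_learner.shape == f_oracle.shape
    return float(((f_learner - f_oracle) ** 2).mean())
```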

Refiner: Refining Self-attention for Vision Transformers

1 code implementation • 7 Jun 2021 • Daquan Zhou, Yujun Shi, Bingyi Kang, Weihao Yu, Zihang Jiang, Yuan Li, Xiaojie Jin, Qibin Hou, Jiashi Feng

Vision Transformers (ViTs) have shown competitive accuracy in image classification tasks compared with CNNs.

Image Classification

Tokens-to-Token ViT: Training Vision Transformers from Scratch on ImageNet

13 code implementations • ICCV 2021 • Li Yuan, Yunpeng Chen, Tao Wang, Weihao Yu, Yujun Shi, Zihang Jiang, Francis E. H. Tay, Jiashi Feng, Shuicheng Yan

To overcome such limitations, we propose a new Tokens-To-Token Vision Transformer (T2T-ViT), which incorporates 1) a layer-wise Tokens-to-Token (T2T) transformation that progressively structurizes the image into tokens by recursively aggregating neighboring tokens into one token, such that the local structure represented by surrounding tokens can be modeled and the token length can be reduced; and 2) an efficient backbone with a deep-narrow structure for the vision transformer, motivated by CNN architecture design after empirical study.

Image Classification • Language Modelling
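The aggregation step of the T2T transformation can be sketched in isolation. Below is a rough NumPy illustration (the window size and stride are assumptions, and the real model interleaves transformer layers between T2T steps): tokens are reshaped back to a 2-D grid, then re-tokenized with overlapping k×k windows, so each new token aggregates a neighborhood and the token sequence shortens.

```python
import numpy as np

def soft_split(tokens, h, w, k=3, stride=2):
    """One Tokens-to-Token aggregation step.

    tokens: array of shape (h * w, c). Reshapes the sequence back to an
    (h, w, c) grid, then extracts overlapping k x k windows with the given
    stride; each window is flattened into one new token of dim k * k * c,
    reducing the token count by roughly a factor of stride**2.
    """
    n, c = tokens.shape
    assert n == h * w
    grid = tokens.reshape(h, w, c)
    pad = k // 2
    grid = np.pad(grid, ((pad, pad), (pad, pad), (0, 0)))  # keep border windows valid
    new_tokens = []
    for i in range(0, h, stride):
        for j in range(0, w, stride):
            new_tokens.append(grid[i:i + k, j:j + k].reshape(-1))
    return np.stack(new_tokens)
```

For an 8×8 grid of 4-dim tokens with k=3, stride=2, this maps 64 tokens of dim 4 to 16 tokens of dim 36 — the length reduction the abstract describes.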

Towards Disentangling Non-Robust and Robust Components in Performance Metric

no code implementations • 25 Sep 2019 • Yujun Shi, Benben Liao, Guangyong Chen, Yun Liu, Ming-Ming Cheng, Jiashi Feng

Then, we show by experiments that DNNs under standard training rely heavily on optimizing the non-robust component to achieve decent performance.

Adversarial Robustness • Relation

Understanding Adversarial Behavior of DNNs by Disentangling Non-Robust and Robust Components in Performance Metric

no code implementations • 6 Jun 2019 • Yujun Shi, Benben Liao, Guangyong Chen, Yun Liu, Ming-Ming Cheng, Jiashi Feng

Despite many previous works studying the reason behind such adversarial behavior, the relationship between the generalization performance and adversarial behavior of DNNs is still little understood.

Adversarial Robustness

Rethinking the Usage of Batch Normalization and Dropout in the Training of Deep Neural Networks

1 code implementation • 15 May 2019 • Guangyong Chen, Pengfei Chen, Yujun Shi, Chang-Yu Hsieh, Benben Liao, Shengyu Zhang

Our work is based on the idea that whitening the inputs of neural networks can achieve a fast convergence speed.
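Whitening here means transforming a batch so that its features are zero-mean with identity covariance. A standard ZCA-whitening sketch, illustrating the idea only (the paper's analysis concerns Batch Normalization and Dropout, not this exact transform):

```python
import numpy as np

def whiten(x, eps=1e-5):
    """ZCA-whiten a batch of shape (n, d): after the transform the
    empirical covariance of the features is (approximately) identity."""
    x = x - x.mean(axis=0, keepdims=True)          # zero-mean each feature
    cov = (x.T @ x) / (len(x) - 1)                 # empirical covariance
    eigvals, eigvecs = np.linalg.eigh(cov)
    # inverse square root of the covariance (eps guards tiny eigenvalues)
    w = eigvecs @ np.diag(1.0 / np.sqrt(eigvals + eps)) @ eigvecs.T
    return x @ w
```

Batch Normalization can be viewed as a cheap per-dimension approximation of this transform, normalizing variances without removing cross-feature correlations.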

Learning Pixel-wise Labeling from the Internet without Human Interaction

no code implementations • 19 May 2018 • Yun Liu, Yujun Shi, Jia-Wang Bian, Le Zhang, Ming-Ming Cheng, Jiashi Feng

Collecting sufficient annotated data is very expensive in many applications, especially for pixel-level prediction tasks such as semantic segmentation.

Segmentation • Semantic Segmentation
