no code implementations • 10 Dec 2023 • Maomao Li, Yu Li, Tianyu Yang, Yunfei Liu, Dongxu Yue, Zhihui Lin, Dong Xu
This paper presents a video inversion approach for zero-shot video editing, which aims to model the input video with a low-rank representation during the inversion process.
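The low-rank idea itself can be illustrated with a truncated SVD over stacked video frames; this is only a minimal sketch of low-rank approximation (the function name is hypothetical, and the paper applies the representation inside a diffusion inversion process, which is not shown here).

```python
import numpy as np

def low_rank_approx(frames, rank):
    """Truncated-SVD low-rank approximation of a frame matrix.

    frames: (T, D) matrix with one flattened frame per row.
    Returns the best rank-`rank` approximation in the Frobenius norm.
    """
    u, s, vt = np.linalg.svd(frames, full_matrices=False)
    # Keep only the top-`rank` singular triplets.
    return (u[:, :rank] * s[:rank]) @ vt[:rank]
```

A rank-1 input matrix is reconstructed exactly by a rank-1 approximation, which is an easy sanity check for the routine.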
2 code implementations • 23 Oct 2023 • Maomao Li, Ge Yuan, Cairong Wang, Zhian Liu, Yong Zhang, Yongwei Nie, Jue Wang, Dong Xu
Based on this disentanglement, face swapping can be simplified as style and mask swapping.
1 code implementation • 8 Jun 2023 • Ge Yuan, Maomao Li, Yong Zhang, Huicheng Zheng
To avoid potential artifacts and drive the distribution of the network output close to the natural one, we reverse the conventional pipeline: synthetic images serve as input while real faces provide reliable supervision during the training stage of face swapping.
1 code implementation • NeurIPS 2023 • Ge Yuan, Xiaodong Cun, Yong Zhang, Maomao Li, Chenyang Qi, Xintao Wang, Ying Shan, Huicheng Zheng
Empowered by the proposed celeb basis, the new identity in our customized model showcases a better concept combination ability than previous personalization methods.
no code implementations • CVPR 2023 • Zhian Liu, Maomao Li, Yong Zhang, Cairong Wang, Qi Zhang, Jue Wang, Yongwei Nie
We rethink face swapping from the perspective of fine-grained face editing, i.e., "editing for swapping" (E4S), and propose a framework that is based on the explicit disentanglement of the shape and texture of facial components.
no code implementations • 1 Sep 2022 • Yangtao Wang, Xi Shen, Yuan Yuan, Yuming Du, Maomao Li, Shell Xu Hu, James L Crowley, Dominique Vaufreydaz
This method also achieves competitive results for unsupervised video object segmentation on the DAVIS, SegTrack v2, and FBMS datasets.
Ranked #4 on Unsupervised Instance Segmentation on COCO val2017
1 code implementation • CVPR 2022 • Zhihui Lin, Tianyu Yang, Maomao Li, Ziyu Wang, Chun Yuan, Wenhao Jiang, Wei Liu
Matching-based methods, especially those based on space-time memory, are significantly ahead of other solutions in semi-supervised video object segmentation (VOS).
Semantic Segmentation • Semi-Supervised Video Object Segmentation +1
1 code implementation • CVPR 2022 • Shuangrui Ding, Maomao Li, Tianyu Yang, Rui Qian, Haohang Xu, Qingyi Chen, Jue Wang, Hongkai Xiong
To alleviate such bias, we propose Foreground-background Merging (FAME) to deliberately compose the moving foreground region of the selected video onto the static background of others.
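The merging step described above amounts to a mask-weighted blend of two clips; the sketch below assumes a foreground mask is already given (estimating it is the harder part and is not shown), and the function name is hypothetical.

```python
import numpy as np

def fame_merge(fg_video, bg_video, fg_mask):
    """Compose the moving foreground of one clip onto the static
    background of another (a sketch of the FAME merging step only).

    fg_video, bg_video: (T, H, W, 3) uint8 clips.
    fg_mask: (T, H, W, 1) soft foreground mask in [0, 1].
    """
    fg = fg_video.astype(np.float32)
    bg = bg_video.astype(np.float32)
    # Per-pixel convex combination: mask picks foreground, rest is background.
    merged = fg_mask * fg + (1.0 - fg_mask) * bg
    return merged.astype(np.uint8)

# Toy example: bright foreground clip, black background clip.
fg = np.full((4, 8, 8, 3), 200, dtype=np.uint8)
bg = np.zeros((4, 8, 8, 3), dtype=np.uint8)
mask = np.zeros((4, 8, 8, 1), dtype=np.float32)
mask[:, 2:6, 2:6, :] = 1.0  # pretend this square region is the actor
out = fame_merge(fg, bg, mask)
```

Inside the masked region the result takes the foreground values; everywhere else it keeps the background, which is exactly the bias-breaking composition the snippet above describes.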
2 code implementations • AAAI 2020 • Zhihui Lin, Maomao Li, Zhuobin Zheng, Yangyang Cheng, Chun Yuan
To extract spatial features with both global and local dependencies, we introduce the self-attention mechanism into ConvLSTM.
Ranked #23 on Video Prediction on Moving MNIST
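Applying self-attention to a spatial feature map, as described above, means flattening the spatial grid and letting every position attend to every other; this is a minimal sketch of that idea, with identity query/key/value projections for brevity (a real SA-ConvLSTM learns these as convolutions, and the function name here is hypothetical).

```python
import numpy as np

def spatial_self_attention(feat):
    """Self-attention over the spatial positions of one feature map.

    feat: (H, W, C) hidden state at a single time step.
    Returns a (H, W, C) map where each position aggregates
    global context from all positions.
    """
    h, w, c = feat.shape
    x = feat.reshape(h * w, c)                    # flatten spatial grid
    scores = x @ x.T / np.sqrt(c)                 # pairwise similarities
    scores -= scores.max(axis=1, keepdims=True)   # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=1, keepdims=True)       # row-wise softmax
    out = attn @ x                                # weighted global aggregation
    return out.reshape(h, w, c)
```

Because every position attends to the whole grid, the output captures global dependencies that a purely convolutional ConvLSTM cell, with its local receptive field, cannot reach in one step.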