no code implementations • 16 Apr 2024 • Yiqian Wu, Hao Xu, Xiangjun Tang, Xien Chen, Siyu Tang, Zhebin Zhang, Chen Li, Xiaogang Jin
Existing neural rendering-based text-to-3D-portrait generation methods typically make use of human geometry priors and diffusion models to obtain guidance.
1 code implementation • 27 Feb 2024 • Wei Xiang, Haoteng Yin, He Wang, Xiaogang Jin
Pedestrian trajectory prediction is a key technology in many applications, providing insights into human behavior and anticipating future human motions.
no code implementations • 24 Feb 2024 • Ziyi Yang, Xinyu Gao, Yangtian Sun, Yihua Huang, Xiaoyang Lyu, Wen Zhou, Shaohui Jiao, Xiaojuan Qi, Xiaogang Jin
The recent advancements in 3D Gaussian splatting (3D-GS) have not only facilitated real-time rendering through modern GPU rasterization pipelines but have also attained state-of-the-art rendering quality.
no code implementations • 2 Jan 2024 • Guying Lin, Lei Yang, Yuan Liu, Congyi Zhang, Junhui Hou, Xiaogang Jin, Taku Komura, John Keyser, Wenping Wang
Sampling against this intrinsic frequency following the Nyquist-Shannon sampling theorem allows us to determine an appropriate training sampling rate.
1 code implementation • 19 Oct 2023 • Ziyi Yang, Yanzhen Chen, Xinyu Gao, Yazhen Yuan, Yu Wu, Xiaowei Zhou, Xiaogang Jin
Implicit neural representation has opened up new possibilities for inverse rendering.
no code implementations • 6 Oct 2023 • Luyuan Wang, Yiqian Wu, Yong-Liang Yang, Chen Liu, Xiaogang Jin
In this paper, we present a novel photo-realistic portrait generation framework that can effectively mitigate the "uncanny valley" effect and improve the overall authenticity of rendered portraits.
1 code implementation • 22 Sep 2023 • Ziyi Yang, Xinyu Gao, Wen Zhou, Shaohui Jiao, Yuqing Zhang, Xiaogang Jin
Implicit neural representation has paved the way for new approaches to dynamic scene reconstruction and rendering.
1 code implementation • 1 Sep 2023 • Xiaoyu Pan, Bowen Zheng, Xinwei Jiang, Guanglong Xu, Xianli Gu, Jingxiang Li, Qilong Kou, He Wang, Tianjia Shao, Kun Zhou, Xiaogang Jin
Finally, we propose a training regime based on representation learning and data augmentation, training the model on masked data.
no code implementations • 9 Aug 2023 • Xinyu Gao, Ziyi Yang, Yunlu Zhao, Yuxiang Sun, Xiaogang Jin, Changqing Zou
Our work introduces a new surface representation, Neural Depth Fields (NeDF), that quickly determines the spatial relationship between objects by enabling direct intersection computation between rays and implicit surfaces.
no code implementations • 27 Jul 2023 • Yiqian Wu, Hao Xu, Xiangjun Tang, Hongbo Fu, Xiaogang Jin
We then propose 3DPortraitGAN, the first 3D-aware one-quarter headshot portrait generator that learns a canonical 3D avatar distribution from the 360°PHQ dataset with body pose self-learning.
1 code implementation • 21 Jun 2023 • Xiangjun Tang, Linjun Wu, He Wang, Bo Hu, Xu Gong, Yuchen Liao, Songnan Li, Qilong Kou, Xiaogang Jin
Styled online in-between motion generation has important applications in computer animation and games.
no code implementations • 24 Apr 2023 • Wanglong Lu, Xianta Jiang, Xiaogang Jin, Yong-Liang Yang, Minglun Gong, Tao Wang, Kaijie Shi, Hanli Zhao
Image inpainting is the task of filling in missing or masked regions of an image with semantically meaningful content.
no code implementations • ICCV 2023 • Yiqian Wu, Jing Zhang, Hongbo Fu, Xiaogang Jin
To better validate our pose-conditional 3D-aware generators, we develop a new FID measure to evaluate the 3D-level performance.
no code implementations • 5 May 2022 • Xiangjun Tang, Wenxin Sun, Yong-Liang Yang, Xiaogang Jin
In the second stage, we first reshape the reconstructed 3D face using a parametric reshaping model reflecting the weight change of the face, and then utilize the reshaped 3D face to guide the warping of video frames.
no code implementations • 5 May 2022 • Xiangjun Tang, He Wang, Bo Hu, Xu Gong, Ruifan Yi, Qilong Kou, Xiaogang Jin
Then, during generation, we design a transition model which is essentially a sampling strategy to sample from the learned manifold, based on the target frame and the aimed transition duration.
1 code implementation • 3 May 2022 • Xiaoyu Pan, Jiaming Mai, Xinwei Jiang, Dongxue Tang, Jingxiang Li, Tianjia Shao, Kun Zhou, Xiaogang Jin, Dinesh Manocha
We present a learning algorithm that uses bone-driven motion networks to predict the deformation of loose-fitting garment meshes at interactive rates.
1 code implementation • 13 Feb 2022 • Wanglong Lu, Hanli Zhao, Xianta Jiang, Xiaogang Jin, Yong-Liang Yang, Min Wang, Jiankai Lyu, Kaijie Shi
We introduce a novel attribute similarity metric to encourage networks to learn the style of facial attributes from the exemplar in a self-supervised way.
2 code implementations • CVPR 2022 • Yiqian Wu, Yong-Liang Yang, Xiaogang Jin
Removing hair from portrait images is challenging due to the complex occlusions between hair and face, as well as the lack of paired portrait data with/without hair.
no code implementations • 8 Jan 2021 • Nannan Wu, Qianwen Chao, Yanzhen Chen, Weiwei Xu, Chen Liu, Dinesh Manocha, Wenxin Sun, Yi Han, Xinran Yao, Xiaogang Jin
Given a query shape and pose of the virtual agent, we synthesize the resulting clothing deformation by blending the Taylor expansion results of nearby anchoring points.
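The blending step can be sketched as follows; the function name, inverse-distance weighting, and first-order expansion shown here are illustrative assumptions rather than the paper's exact scheme. Each anchoring point contributes a local Taylor estimate, and nearby anchors are weighted more heavily:

```python
def blend_taylor(query, anchors, values, grads, eps=1e-8):
    """Blend first-order Taylor expansions from nearby anchoring points.

    query:   query position, list of D floats
    anchors: K anchor positions, each a list of D floats
    values:  K function values at the anchors
    grads:   K gradients at the anchors, each a list of D floats
    Weights here are inverse-distance (an illustrative choice).
    """
    weights, estimates = [], []
    for a, v, g in zip(anchors, values, grads):
        diff = [q - ai for q, ai in zip(query, a)]
        dist = sum(d * d for d in diff) ** 0.5
        weights.append(1.0 / (dist + eps))
        # First-order Taylor estimate: f(a) + grad(a) . (query - a)
        estimates.append(v + sum(gi * di for gi, di in zip(g, diff)))
    total = sum(weights)
    return sum(w * e for w, e in zip(weights, estimates)) / total
```

For a function that is locally near-linear, every nearby expansion predicts almost the same value, so the blend is insensitive to the exact weighting.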
1 code implementation • 14 Nov 2019 • Bo Wang, Quan Chen, Min Zhou, Zhiqiang Zhang, Xiaogang Jin, Kun Gai
Features matter for salient object detection.
no code implementations • 19 May 2017 • Xingping Dong, Jianbing Shen, Dongming Wu, Kan Guo, Xiaogang Jin, Fatih Porikli
In this paper, we propose a new quadruplet deep network to examine the potential connections among the training instances, aiming to achieve a more powerful representation.