no code implementations • 31 Mar 2024 • Taekyung Ki, Dongchan Min, Gyeongsu Chae
In this paper, we present Export3D, a one-shot 3D-aware portrait animation method that is able to control the facial expression and camera view of a given portrait image.
1 code implementation • 14 Mar 2023 • Geumbyeol Hwang, Sunwon Hong, SeungHyun Lee, Sungwoo Park, Gyeongsu Chae
We enhance the efficiency of DisCoHead by integrating the dense motion estimator and the generator's encoder, which were originally separate modules.
no code implementations • 14 Mar 2023 • Jungjun Kim, Changjin Han, Gyuhyeon Nam, Gyeongsu Chae
Also, a handcrafted post-processing system is needed to address issues related to the tone of the characters.
1 code implementation • 22 Mar 2021 • Wai Ting Cheung, Gyeongsu Chae
State-of-the-art self-supervised image animation approaches warp the source image according to the motion of the driving video and recover the warping artifacts by inpainting.
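As a toy illustration of this warp-and-inpaint pipeline (not the paper's implementation), the sketch below warps a grayscale image by a dense backward flow and then fills the unsampled regions, standing in for the inpainting step. The function name, the mean-fill "inpainting", and the flow field are all hypothetical simplifications; real methods predict the flow from the driving frame and inpaint with a learned network.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp_with_flow(src, flow):
    """Warp a grayscale image by a dense backward flow.

    flow[..., 0] is the per-pixel row offset (dy), flow[..., 1] the
    column offset (dx). Pixels that sample outside the image become
    NaN and are then filled -- a crude stand-in for learned inpainting.
    """
    h, w = src.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    # Backward warp: output pixel (y, x) samples src at (y+dy, x+dx).
    coords = np.stack([ys + flow[..., 0], xs + flow[..., 1]])
    warped = map_coordinates(src, coords, order=1, cval=np.nan)
    mask = np.isnan(warped)          # warping artifacts (occluded/unsampled)
    warped[mask] = np.nanmean(warped)  # toy "inpainting": fill with image mean
    return warped, mask

# Shift content one pixel to the left: every output pixel samples dx = +1.
src = np.arange(16, dtype=float).reshape(4, 4)
flow = np.zeros((4, 4, 2))
flow[..., 1] = 1.0
out, mask = warp_with_flow(src, flow)
# The rightmost column had nothing to sample and gets "inpainted".
```

Self-supervised animation methods train the flow predictor and the inpainting network jointly by reconstructing frames of the same video, so no ground-truth motion labels are needed.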
no code implementations • ICCV 2021 • Patrick Kwon, Jaeseong You, Gyuhyeon Nam, Sungwoo Park, Gyeongsu Chae
A variety of effective face-swap and face-reenactment methods have been publicized in recent years, democratizing face synthesis technology to a great extent.
no code implementations • 9 Mar 2021 • Jaeseong You, Dalhyun Kim, Gyuhyeon Nam, Geumbyeol Hwang, Gyeongsu Chae
Several of the latest GAN-based vocoders show remarkable achievements, outperforming autoregressive and flow-based competitors in both qualitative and quantitative measures while synthesizing orders of magnitude faster.
no code implementations • 16 Feb 2021 • Jaeseong You, Gyuhyeon Nam, Dalhyun Kim, Gyeongsu Chae
We propose a novel architecture and improved training objectives for non-parallel voice conversion.
no code implementations • 1 Jan 2021 • Wai Ting Cheung, Gyeongsu Chae
Image animation generates a video of a source image following the motion of a driving video.