Search Results for author: Mingyi Shi

Found 7 papers, 2 papers with code

InterAct: Capture and Modelling of Realistic, Expressive and Interactive Activities between Two Persons in Daily Scenarios

no code implementations · 19 May 2024 · Yinghao Huang, Leo Ho, Dafei Qin, Mingyi Shi, Taku Komura

We address the problem of accurate capture and expressive modelling of interactive behaviors happening between two persons in daily scenarios.

Taming Diffusion Probabilistic Models for Character Control

1 code implementation · 23 Apr 2024 · Rui Chen, Mingyi Shi, Shaoli Huang, Ping Tan, Taku Komura, Xuelin Chen

We present a novel character control framework that effectively utilizes motion diffusion probabilistic models to generate high-quality and diverse character animations, responding in real-time to a variety of dynamic user-supplied control signals.

Computational Efficiency
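The framework above builds on motion diffusion probabilistic models conditioned on user control signals. As a rough illustration of that idea (not the paper's actual model), the sketch below shows a generic DDPM-style reverse process with a placeholder denoiser; the function name `sample_motion`, the hyperparameters, and the dummy network are all assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_motion(denoise_fn, control, shape, T=50):
    """Generic DDPM-style reverse process: start from Gaussian noise
    and iteratively denoise, conditioning each step on `control`."""
    betas = np.linspace(1e-4, 0.02, T)      # noise schedule
    alphas = 1.0 - betas
    alpha_bar = np.cumprod(alphas)
    x = rng.standard_normal(shape)          # x_T ~ N(0, I)
    for t in reversed(range(T)):
        eps = denoise_fn(x, t, control)     # predicted noise at step t
        # Mean of x_{t-1} given x_t and the predicted noise.
        x = (x - betas[t] / np.sqrt(1.0 - alpha_bar[t]) * eps) / np.sqrt(alphas[t])
        if t > 0:                           # add noise except at the final step
            x += np.sqrt(betas[t]) * rng.standard_normal(shape)
    return x

# Placeholder: a real system would use a trained neural denoiser here.
dummy = lambda x, t, c: 0.1 * x + 0.0 * c
pose = sample_motion(dummy, control=np.zeros((1, 3)), shape=(1, 3))
print(pose.shape)  # (1, 3)
```

In the paper's setting the denoiser would be a trained network and `control` would carry the real-time user signal; here both are stand-ins to show the sampling loop's structure.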

PhaseMP: Robust 3D Pose Estimation via Phase-conditioned Human Motion Prior

no code implementations · ICCV 2023 · Mingyi Shi, Sebastian Starke, Yuting Ye, Taku Komura, Jungdam Won

We present a novel motion prior, called PhaseMP, modeling a probability distribution on pose transitions conditioned by a frequency domain feature extracted from a periodic autoencoder.

3D Pose Estimation · Motion Estimation

MotioNet: 3D Human Motion Reconstruction from Monocular Video with Skeleton Consistency

no code implementations · 22 Jun 2020 · Mingyi Shi, Kfir Aberman, Andreas Aristidou, Taku Komura, Dani Lischinski, Daniel Cohen-Or, Baoquan Chen

We introduce MotioNet, a deep neural network that directly reconstructs the motion of a 3D human skeleton from monocular video. While previous methods rely on either rigging or inverse kinematics (IK) to associate a consistent skeleton with temporally coherent joint rotations, our method is the first data-driven approach that directly outputs a kinematic skeleton, which is a complete, commonly used motion representation.

Deep Video-Based Performance Cloning

no code implementations · 21 Aug 2018 · Kfir Aberman, Mingyi Shi, Jing Liao, Dani Lischinski, Baoquan Chen, Daniel Cohen-Or

After training a deep generative network using a reference video capturing the appearance and dynamics of a target actor, we are able to generate videos where this actor reenacts other performances.

Neural Best-Buddies: Sparse Cross-Domain Correspondence

2 code implementations · 10 May 2018 · Kfir Aberman, Jing Liao, Mingyi Shi, Dani Lischinski, Baoquan Chen, Daniel Cohen-Or

Correspondence between images is a fundamental problem in computer vision, with a variety of graphics applications.

Image Morphing
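The "best buddies" in the title refers to mutual nearest neighbors: a pair of points, one in each image's feature set, that are each other's closest match. A minimal sketch of that core test (on toy features, not the paper's hierarchical CNN-feature pipeline; the function name and toy arrays are illustrative):

```python
import numpy as np

def mutual_nearest_neighbors(feats_a, feats_b):
    """Return index pairs (i, j) where feats_a[i] and feats_b[j]
    are each other's nearest neighbor ("best buddies")."""
    # Pairwise squared Euclidean distances between the two feature sets.
    d = ((feats_a[:, None, :] - feats_b[None, :, :]) ** 2).sum(-1)
    nn_ab = d.argmin(axis=1)  # nearest neighbor in B for each row of A
    nn_ba = d.argmin(axis=0)  # nearest neighbor in A for each row of B
    # Keep only mutual pairs: i maps to j and j maps back to i.
    return [(i, j) for i, j in enumerate(nn_ab) if nn_ba[j] == i]

# Toy example with 2-D features.
A = np.array([[0.0, 0.0], [5.0, 5.0], [9.0, 9.0]])
B = np.array([[0.1, 0.0], [5.1, 5.0]])
print(mutual_nearest_neighbors(A, B))  # [(0, 0), (1, 1)]
```

Note that A's third point is dropped: its nearest neighbor in B does not point back to it, so the pair is not mutual, which is what makes the resulting correspondences sparse.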
