Search Results for author: Shiyuan Yang

Found 3 papers, 1 paper with code

Analogist: Out-of-the-box Visual In-Context Learning with Image Diffusion Model

no code implementations • 16 May 2024 • Zheng Gu, Shiyuan Yang, Jing Liao, Jing Huo, Yang Gao

For visual prompting, we propose a self-attention cloning (SAC) method to guide the fine-grained structural-level analogy between image examples.
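The listing gives no implementation details for SAC, so the following is only a minimal sketch of the general idea of cloning self-attention: attention weights are computed from an example image's features and then applied to the target's values, so the target output inherits the example's structural layout. All names (`self_attention`, `cloned_attention`) and shapes are hypothetical, not from the paper.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(q, k, v):
    # Standard scaled dot-product self-attention over token features.
    d = q.shape[-1]
    w = softmax(q @ k.T / np.sqrt(d))
    return w @ v, w

def cloned_attention(example_feats, target_v):
    # Hypothetical "cloning": derive attention weights from the
    # example image, then apply them to the target image's values,
    # transferring the example's spatial structure to the target.
    _, w = self_attention(example_feats, example_feats, example_feats)
    return w @ target_v

rng = np.random.default_rng(0)
ex = rng.normal(size=(16, 8))   # example-image tokens (16 tokens, dim 8)
tv = rng.normal(size=(16, 8))   # target-image values
out = cloned_attention(ex, tv)
print(out.shape)                # (16, 8)
```

In a diffusion U-Net this transfer would happen inside the attention layers at each denoising step; the toy version above only illustrates the weight-sharing mechanism.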

Direct-a-Video: Customized Video Generation with User-Directed Camera Movement and Object Motion

no code implementations • 5 Feb 2024 • Shiyuan Yang, Liang Hou, Haibin Huang, Chongyang Ma, Pengfei Wan, Di Zhang, Xiaodong Chen, Jing Liao

In practice, users often desire the ability to control object motion and camera movement independently for customized video creation.

Tasks: Object, Video Generation

Uni-paint: A Unified Framework for Multimodal Image Inpainting with Pretrained Diffusion Model

1 code implementation • 11 Oct 2023 • Shiyuan Yang, Xiaodong Chen, Jing Liao

Recently, text-to-image denoising diffusion probabilistic models (DDPMs) have demonstrated impressive image generation capabilities and have also been successfully applied to image inpainting.
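This snippet does not describe Uni-paint's actual algorithm, but a common way DDPMs are adapted to inpainting is masked blending: at each denoising step, the model's sample is kept inside the hole while the known pixels are replaced by an appropriately re-noised copy of the original. The toy loop below sketches only that blending idea; the "denoiser" is a stand-in (simple shrinkage), where a real system would run a trained noise-prediction network.

```python
import numpy as np

rng = np.random.default_rng(0)

def inpaint(image, mask, steps=50):
    """Toy masked-blending inpainting loop (illustrative only).

    image: array of known pixel values
    mask:  1.0 where pixels must be generated, 0.0 where known
    """
    x = rng.normal(size=image.shape)              # start from pure noise
    for t in range(steps, 0, -1):
        sigma = t / steps                         # toy noise level schedule
        x = x * (1.0 - 1.0 / steps)               # dummy "denoise" step
        # Re-noise the known pixels to the current noise level so both
        # regions are statistically compatible before blending.
        noised_known = image + sigma * rng.normal(size=image.shape)
        x = mask * x + (1.0 - mask) * noised_known
    return x

img = np.ones((8, 8))                 # known content: constant image
m = np.zeros((8, 8))
m[2:6, 2:6] = 1.0                     # square hole to fill
out = inpaint(img, m)
print(out.shape)                      # (8, 8)
```

After the final (low-noise) step, pixels outside the hole stay close to the known image, while the hole contains whatever the denoiser produced, conditioned only through the blending.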

Tasks: Image Denoising, Image Inpainting
