Search Results for author: Yuyang Yin

Found 4 papers, 0 papers with code

ClassDiffusion: More Aligned Personalization Tuning with Explicit Class Guidance

no code implementations • 27 May 2024 • Jiannan Huang, Jun Hao Liew, Hanshu Yan, Yuyang Yin, Yao Zhao, Yunchao Wei

Recent text-to-image customization works have proven successful in generating images of given concepts by fine-tuning diffusion models on a few examples.
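For orientation, personalization methods of this kind typically bind the new concept to a rare identifier token and fine-tune the denoising network with the standard diffusion loss on the handful of concept images. The sketch below illustrates that generic recipe with Hugging Face diffusers; it is not the ClassDiffusion method itself, and the checkpoint name, prompt token, and hyperparameters are placeholder assumptions.

```python
# Minimal sketch of few-shot personalization fine-tuning for a text-to-image
# diffusion model (generic DreamBooth-style recipe, NOT ClassDiffusion).
# The base checkpoint, "sks" token, and learning rate are illustrative assumptions.
import torch
from torch.nn.functional import mse_loss
from diffusers import AutoencoderKL, DDPMScheduler, UNet2DConditionModel
from transformers import CLIPTextModel, CLIPTokenizer

model_id = "runwayml/stable-diffusion-v1-5"  # assumed base checkpoint
tokenizer = CLIPTokenizer.from_pretrained(model_id, subfolder="tokenizer")
text_encoder = CLIPTextModel.from_pretrained(model_id, subfolder="text_encoder")
vae = AutoencoderKL.from_pretrained(model_id, subfolder="vae")
unet = UNet2DConditionModel.from_pretrained(model_id, subfolder="unet")
scheduler = DDPMScheduler.from_pretrained(model_id, subfolder="scheduler")

optimizer = torch.optim.AdamW(unet.parameters(), lr=5e-6)  # only the UNet is tuned here
prompt = "a photo of sks dog"  # rare identifier token binds the new concept

def training_step(pixel_values):
    """One denoising-loss step on a small batch of concept images (B, 3, 512, 512) in [-1, 1]."""
    with torch.no_grad():
        latents = vae.encode(pixel_values).latent_dist.sample() * vae.config.scaling_factor
        ids = tokenizer([prompt] * latents.shape[0], padding="max_length",
                        max_length=tokenizer.model_max_length, return_tensors="pt").input_ids
        text_emb = text_encoder(ids)[0]
    noise = torch.randn_like(latents)
    t = torch.randint(0, scheduler.config.num_train_timesteps, (latents.shape[0],))
    noisy_latents = scheduler.add_noise(latents, noise, t)
    pred = unet(noisy_latents, t, encoder_hidden_states=text_emb).sample
    loss = mse_loss(pred, noise)  # standard epsilon-prediction objective
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()
```

After a few hundred such steps on the concept images, the tuned UNet can be dropped back into the full pipeline to sample images of the concept from new prompts.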

Diffusion4D: Fast Spatial-temporal Consistent 4D Generation via Video Diffusion Models

no code implementations • 26 May 2024 • Hanwen Liang, Yuyang Yin, Dejia Xu, Hanxue Liang, Zhangyang Wang, Konstantinos N. Plataniotis, Yao Zhao, Yunchao Wei

Building on this foundation, we propose a strategy to migrate the temporal consistency in video diffusion models to the spatial-temporal consistency required for 4D generation.

4DGen: Grounded 4D Content Generation with Spatial-temporal Consistency

no code implementations • 28 Dec 2023 • Yuyang Yin, Dejia Xu, Zhangyang Wang, Yao Zhao, Yunchao Wei

Our pipeline facilitates conditional 4D generation, enabling users to specify geometry (3D assets) and motion (monocular videos), thus offering superior control over content creation.
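To make the conditioning concrete, the hypothetical sketch below mirrors the inputs described in the snippet above: the user supplies geometry as a 3D asset and motion as a monocular video. Names such as Grounded4DRequest and generate_4d are illustrative placeholders, not the 4DGen API.

```python
# Hypothetical interface sketch for grounded, conditional 4D generation.
# Only the input structure (geometry + motion) comes from the paper snippet;
# all names and paths here are assumptions.
from dataclasses import dataclass
from pathlib import Path

@dataclass
class Grounded4DRequest:
    geometry: Path       # static 3D asset, e.g. a mesh or point-cloud file
    motion: Path         # monocular video providing the target dynamics
    prompt: str = ""     # optional text description for extra guidance

def generate_4d(req: Grounded4DRequest) -> Path:
    """Placeholder driver: a real pipeline would lift the asset and video
    into a spatio-temporally consistent dynamic 3D representation."""
    raise NotImplementedError("stand-in for a 4D generation backend")

request = Grounded4DRequest(geometry=Path("assets/object.obj"),
                            motion=Path("videos/reference.mp4"))
```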

Task: Prompt Engineering

CLE Diffusion: Controllable Light Enhancement Diffusion Model

no code implementations • 13 Aug 2023 • Yuyang Yin, Dejia Xu, Chuangchuang Tan, Ping Liu, Yao Zhao, Yunchao Wei

Low-light enhancement has gained increasing importance with the rapid development of visual creation and editing.

Task: Low-Light Image Enhancement
