HumanGif: Single-View Human Diffusion with Generative Prior
Previous 3D human creation methods have made significant progress in synthesizing view-consistent and temporally aligned results from sparse-view images or monocular videos. However, it remains challenging to produce perceptually realistic, view-consistent, and temporally coherent human avatars from a single image, as limited information is available in the single-view input setting. Motivated by the success of 2D character animation, we propose HumanGif, a single-view human diffusion model with generative prior. Specifically, we formulate single-view-based 3D human novel view and pose synthesis as a single-view-conditioned human diffusion process, utilizing generative priors from foundational diffusion models to complement the missing information. To ensure fine-grained and consistent novel view and pose synthesis, we introduce a Human NeRF module in HumanGif to learn spatially aligned features from the input image, implicitly capturing the relative camera and human pose transformation. Furthermore, we introduce an image-level loss during optimization to bridge the gap between latent and image spaces in diffusion models. Extensive experiments on the RenderPeople, DNA-Rendering, THuman 2.1, and TikTok datasets demonstrate that HumanGif achieves the best perceptual performance, with better generalizability for novel view and pose synthesis.
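To make the image-level loss idea concrete, below is a minimal sketch (an assumption, not the authors' released code) of how a latent-space denoising objective can be combined with an image-level term: the predicted clean latent is recovered from the noise estimate, decoded back to pixel space, and compared against the ground-truth image. The names `unet`, `vae_decoder`, `alphas_cumprod`, and `lambda_img` are hypothetical placeholders for the conditional denoiser, the VAE decoder, the noise schedule, and the loss weight.

```python
# Hedged sketch: standard latent diffusion (epsilon-prediction) loss plus an
# image-level loss that decodes the estimated clean latent. All module and
# argument names are assumptions for illustration, not HumanGif's actual API.
import torch
import torch.nn.functional as F

def diffusion_losses(unet, vae_decoder, alphas_cumprod,
                     z0, cond, image_gt, lambda_img=0.1):
    """Return a combined latent denoising loss and image-level loss."""
    b = z0.shape[0]
    t = torch.randint(0, alphas_cumprod.shape[0], (b,), device=z0.device)
    noise = torch.randn_like(z0)

    # Forward diffusion: z_t = sqrt(a_bar) * z0 + sqrt(1 - a_bar) * noise
    a_bar = alphas_cumprod[t].view(b, 1, 1, 1)
    z_t = a_bar.sqrt() * z0 + (1.0 - a_bar).sqrt() * noise

    # Latent-space epsilon-prediction loss (the standard LDM objective),
    # conditioned on single-view features (e.g., Human NeRF features).
    eps_pred = unet(z_t, t, cond)
    loss_latent = F.mse_loss(eps_pred, noise)

    # Recover the predicted clean latent from the noise estimate, decode it,
    # and penalize in pixel space to bridge the latent/image gap.
    z0_pred = (z_t - (1.0 - a_bar).sqrt() * eps_pred) / a_bar.sqrt()
    image_pred = vae_decoder(z0_pred)
    loss_image = F.mse_loss(image_pred, image_gt)

    return loss_latent + lambda_img * loss_image
```

Decoding the estimated clean latent (rather than running the full reverse process) keeps the image-level supervision differentiable and cheap enough to apply at every training step.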
Shoukang Hu, Takuya Narihira, Kazumi Fukuda, Ryosuke Sawata, Takashi Shibuya, Yuki Mitsufuji
Computing Technology; Computer Technology
Shoukang Hu, Takuya Narihira, Kazumi Fukuda, Ryosuke Sawata, Takashi Shibuya, Yuki Mitsufuji. HumanGif: Single-View Human Diffusion with Generative Prior [EB/OL]. (2025-06-29) [2025-07-16]. https://arxiv.org/abs/2502.12080