GAS: Generative Avatar Synthesis from a Single Image
We present a unified and generalizable framework for synthesizing view-consistent and temporally coherent avatars from a single image. Existing diffusion-based methods often condition on sparse human templates (e.g., depth or normal maps), which leads to multi-view and temporal inconsistencies due to the mismatch between these signals and the true appearance of the subject. Our approach bridges this gap by combining regression-based 3D human reconstruction with the generative power of a video diffusion model. First, a generalizable NeRF reconstructs an initial 3D human, providing dense conditioning that keeps the synthesis faithful to the reference appearance and structure. The geometry and appearance derived from this NeRF then serve as input to a video diffusion model, an integration that enforces both multi-view and temporal consistency throughout the avatar's generation. Empirical results demonstrate the strong generalization ability of our method across diverse in-domain and out-of-domain in-the-wild datasets.
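The abstract describes a two-stage pipeline: a generalizable NeRF regresses a coarse 3D human from the reference image and renders per-frame appearance and geometry maps, which then condition a video diffusion model. Below is a minimal sketch of that data flow, assuming hypothetical module names (`GeneralizableNeRF`, `VideoDiffusionModel`) and tensor shapes; it illustrates the conditioning structure only, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

class GeneralizableNeRF(nn.Module):
    """Stage 1 (stub): regress a radiance field from one reference image
    and render appearance + geometry maps for each target camera/pose."""
    def __init__(self, feat_dim=64):
        super().__init__()
        self.encoder = nn.Conv2d(3, feat_dim, 3, padding=1)
        self.head = nn.Conv2d(feat_dim, 3 + 3, 3, padding=1)  # RGB + normals

    def forward(self, ref_image, target_cameras):
        # ref_image: (B, 3, H, W); target_cameras: (B, T, ...) unused in stub
        feats = self.encoder(ref_image)
        out = self.head(feats)                       # (B, 6, H, W)
        T = target_cameras.shape[1]
        out = out.unsqueeze(1).expand(-1, T, -1, -1, -1)
        appearance, geometry = out[:, :, :3], out[:, :, 3:]
        return appearance, geometry                  # per-frame condition maps

class VideoDiffusionModel(nn.Module):
    """Stage 2 (stub): denoise a video conditioned on the NeRF's appearance
    and geometry renderings, enforcing view/temporal consistency."""
    def __init__(self):
        super().__init__()
        self.denoiser = nn.Conv3d(3 + 6, 3, 3, padding=1)

    def forward(self, noisy_video, appearance, geometry):
        # all tensors: (B, T, C, H, W) -> stack conditions on the channel dim
        cond = torch.cat([noisy_video, appearance, geometry], dim=2)
        return self.denoiser(cond.transpose(1, 2)).transpose(1, 2)

# Data flow: one image in, a view- and time-consistent avatar video out.
B, T, H, W = 1, 8, 64, 64
ref = torch.randn(B, 3, H, W)
cams = torch.zeros(B, T, 12)                         # placeholder camera params
nerf, diffusion = GeneralizableNeRF(), VideoDiffusionModel()
appearance, geometry = nerf(ref, cams)               # stage 1: coarse render
noisy = torch.randn(B, T, 3, H, W)
denoised = diffusion(noisy, appearance, geometry)    # stage 2: refinement
print(denoised.shape)                                # (1, 8, 3, 64, 64)
```

The key design choice the sketch captures is that the diffusion model is conditioned on dense renderings from the reconstruction stage rather than on sparse templates such as depth or normal maps alone, which is what the abstract credits for the improved consistency.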
Yixing Lu, Qin Zhao, Bo Dai, Youngjoong Kwon, Fernando De la Torre, Junting Dong
Subjects: Computing Technology, Computer Technology
Yixing Lu, Qin Zhao, Bo Dai, Youngjoong Kwon, Fernando De la Torre, Junting Dong. GAS: Generative Avatar Synthesis from a Single Image [EB/OL]. (2025-08-03) [2025-08-19]. https://arxiv.org/abs/2502.06957