Bringing Objects to Life: training-free 4D generation from 3D objects through view consistent noise
Recent advancements in generative models have enabled the creation of dynamic 4D content (3D objects in motion) from text prompts, which holds potential for applications in virtual worlds, media, and gaming. Existing methods provide control over the appearance of generated content, including the ability to animate 3D objects. However, the dynamics they can generate are limited to the mesh datasets they were trained on, with no capability for growth or structural development. In this work, we introduce a training-free method for animating 3D objects by conditioning on textual prompts to guide 4D generation, enabling customized general scenes while maintaining the original object's identity. We first convert a 3D mesh into a static 4D Neural Radiance Field (NeRF) that preserves the object's visual attributes. Then, we animate the object using an Image-to-Video diffusion model driven by text. To improve motion realism, we introduce a view-consistent noising protocol that aligns object perspectives with the noising process to promote lifelike movement, and a masked Score Distillation Sampling (SDS) loss that leverages attention maps to focus optimization on relevant regions, better preserving the original object. We evaluate our method on two 3D object datasets in terms of temporal coherence, prompt adherence, and visual fidelity, and find that it outperforms a baseline based on multiview training, achieving better consistency with the textual prompt in hard scenarios.
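To make the masked SDS objective concrete, below is a minimal PyTorch-style sketch based only on the description above. It assumes the cross-attention maps have already been reduced to a per-pixel mask in [0, 1] and that the standard SDS gradient-injection trick is used; all names (masked_sds_loss, attention_mask, timestep_weight) are illustrative assumptions, not the paper's implementation.

import torch
import torch.nn.functional as F

def masked_sds_loss(rendered_latents, predicted_noise, injected_noise,
                    attention_mask, timestep_weight):
    # Score Distillation Sampling restricted by an attention-derived mask.
    # rendered_latents: latents of the rendered NeRF views (requires grad).
    # predicted_noise:  the video diffusion model's noise prediction.
    # injected_noise:   the noise actually added before denoising.
    # attention_mask:   per-pixel weights in [0, 1] from cross-attention maps.
    # timestep_weight:  the usual SDS weighting w(t).
    grad = timestep_weight * (predicted_noise - injected_noise) * attention_mask
    # Standard SDS trick: route the (detached) gradient through an MSE loss,
    # so d(loss)/d(rendered_latents) equals the masked gradient above.
    target = (rendered_latents - grad).detach()
    return 0.5 * F.mse_loss(rendered_latents, target, reduction="sum")

# Dummy usage with illustrative shapes (1 frame, 4 latent channels, 64x64):
latents = torch.randn(1, 4, 64, 64, requires_grad=True)
noise = torch.randn_like(latents)
noise_hat = torch.randn_like(latents)   # stand-in for the diffusion model output
mask = torch.rand_like(latents)         # stand-in for the attention mask
loss = masked_sds_loss(latents, noise_hat, noise, mask, timestep_weight=1.0)
loss.backward()

Masking the gradient rather than the loss keeps the update identical to plain SDS inside the attended region while leaving unrelated parts of the object untouched, which is consistent with the identity-preservation goal stated above.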
Ohad Rahamim, Ori Malca, Dvir Samuel, Gal Chechik
Information Science, Information Technology / Computing Technology, Computer Technology
Ohad Rahamim, Ori Malca, Dvir Samuel, Gal Chechik. Bringing Objects to Life: training-free 4D generation from 3D objects through view consistent noise [EB/OL]. (2024-12-29) [2025-08-02]. https://arxiv.org/abs/2412.20422.