
Efficient4D: Fast Dynamic 3D Object Generation from a Single-view Video

Source: arXiv
Abstract

Generating a dynamic 3D object from a single-view video is challenging due to the lack of 4D labeled data. An intuitive approach is to extend previous image-to-3D pipelines by transferring off-the-shelf image generation models such as score distillation sampling. However, this approach would be slow and expensive to scale, due to the need for back-propagating the information-limited supervision signals through a large pretrained model. To address this, we propose an efficient video-to-4D object generation framework called Efficient4D. It generates high-quality spacetime-consistent images under different camera views, and then uses them as labeled data to directly reconstruct the 4D content through a 4D Gaussian splatting model. Importantly, our method can achieve real-time rendering under continuous camera trajectories. To enable robust reconstruction under sparse views, we introduce an inconsistency-aware confidence-weighted loss design, along with a lightly weighted score distillation loss. Extensive experiments on both synthetic and real videos show that Efficient4D offers a remarkable 10-fold increase in speed compared to prior-art alternatives while preserving the quality of novel view synthesis. For example, Efficient4D takes only 10 minutes to model a dynamic object, vs. 120 minutes for the prior-art model Consistent4D.
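To make the confidence-weighted supervision concrete, the sketch below shows one plausible form of an inconsistency-aware reconstruction loss: pixels where the synthesized multi-view images disagree are down-weighted before comparing them to renders from the 4D Gaussian model. This is a minimal illustration, not the paper's implementation; the function names, the exponential confidence mapping, and the weights `beta` and `lambda_sds` are assumptions made for the example.

```python
import torch


def confidence_weighted_l1(rendered, generated, pixel_inconsistency, beta=10.0):
    """Hypothetical confidence-weighted reconstruction loss.

    rendered:            (B, 3, H, W) images rendered from the 4D Gaussian model
    generated:           (B, 3, H, W) pseudo-labels from the spacetime-consistent
                         image generator
    pixel_inconsistency: (B, 1, H, W) per-pixel disagreement across the generated
                         views/frames (how it is measured is not specified here)
    beta:                sharpness of the confidence mapping (illustrative value)
    """
    # Map inconsistency to a confidence in (0, 1]: pixels whose synthesized
    # supervision is unreliable contribute less to the loss.
    confidence = torch.exp(-beta * pixel_inconsistency)
    per_pixel_error = (rendered - generated).abs()
    return (confidence * per_pixel_error).mean()


def total_loss(rendered, generated, pixel_inconsistency, sds_loss, lambda_sds=0.01):
    # Combine the confidence-weighted reconstruction term with a lightly weighted
    # score-distillation term, as the abstract describes; lambda_sds is made up.
    return confidence_weighted_l1(rendered, generated, pixel_inconsistency) \
        + lambda_sds * sds_loss
```

Because the pseudo-labels are used directly as reconstruction targets rather than back-propagated through a large pretrained model, the score-distillation term can be kept small, which is consistent with the speedup the abstract reports.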

Zeyu Yang, Li Zhang, Zijie Pan, Xiatian Zhu

Subjects: Computing Technology; Computer Technology

Zeyu Yang, Li Zhang, Zijie Pan, Xiatian Zhu. Efficient4D: Fast Dynamic 3D Object Generation from a Single-view Video [EB/OL]. (2024-01-16) [2025-08-02]. https://arxiv.org/abs/2401.08742
