
Text-to-3D Generation by 2D Editing


Source: arXiv
Abstract

Distilling 3D representations from pretrained 2D diffusion models is essential for 3D creative applications across gaming, film, and interior design. Current SDS-based methods are hindered by inefficient information distillation from diffusion models, which prevents the creation of photorealistic 3D content. In this paper, we first reevaluate the SDS approach by analyzing its fundamental nature as a basic image editing process that commonly results in over-saturation, over-smoothing, and a lack of rich content and diversity, due to poor-quality single-step denoising. In light of this, we then propose a novel method called 3D Generation by Editing (GE3D). Each iteration of GE3D utilizes a 2D editing framework that combines a noising trajectory, which preserves the information of the input image, with a text-guided denoising trajectory. We optimize the process by aligning the latents across both trajectories. This approach fully exploits pretrained diffusion models to distill multi-granularity information through multiple denoising steps, resulting in photorealistic 3D outputs. Both theoretical and experimental results confirm the effectiveness of our approach, which not only advances 3D generation technology but also establishes a novel connection between 3D generation and 2D editing. This could potentially inspire further research in the field. Code and demos are released at https://jahnsonblack.github.io/GE3D/.
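The abstract describes the mechanism only at a high level; the toy sketch below is one way to read it, not the released GE3D code. All names (add_noise, toy_denoiser, ge3d_style_loss), the noise schedule, and the stand-in text-conditioned denoiser are illustrative assumptions. It shows how a noising trajectory of a rendered latent can be aligned, step by step, with a text-guided denoising trajectory to form a multi-step loss, in contrast to SDS's single-step signal.

```python
# Hypothetical sketch of a GE3D-style iteration (all names and schedules are
# illustrative assumptions, not the authors' implementation).
import torch

def add_noise(latent, eps, alpha_bar):
    """DDPM-style forward step: x_t = sqrt(a_bar) * x_0 + sqrt(1 - a_bar) * eps."""
    return alpha_bar.sqrt() * latent + (1.0 - alpha_bar).sqrt() * eps

def toy_denoiser(latent, alpha_bar, text_scale):
    """Stand-in for a pretrained text-conditioned diffusion denoiser."""
    target = text_scale * torch.ones_like(latent)   # pretend prompt-dependent target
    return latent + (1.0 - alpha_bar) * (target - latent)

def ge3d_style_loss(rendered_latent, text_scale, steps=4):
    """Align a noising trajectory with a text-guided denoising trajectory."""
    alpha_bars = torch.linspace(0.9, 0.3, steps)     # toy noise schedule
    eps = torch.randn_like(rendered_latent)

    # Noising trajectory: preserves the information of the rendered view.
    noising_traj = [add_noise(rendered_latent, eps, a) for a in alpha_bars]

    # Text-guided denoising trajectory: start from the noisiest latent and
    # denoise it back toward the prompt, one step per noise level.
    x = noising_traj[-1].detach()
    denoising_traj = []
    for a in alpha_bars.flip(0):
        x = toy_denoiser(x, a, text_scale)
        denoising_traj.append(x)
    denoising_traj.reverse()                          # match noising order

    # Multi-granularity alignment: compare the trajectories at every step,
    # so the gradient carries information from several denoising levels.
    loss = sum(torch.nn.functional.mse_loss(n, d.detach())
               for n, d in zip(noising_traj, denoising_traj)) / steps
    return loss

# Usage: here the rendered latent itself is optimized; in a full pipeline the
# gradient would instead flow through the renderer into NeRF/3DGS parameters.
rendered_latent = torch.zeros(1, 4, 8, 8, requires_grad=True)
loss = ge3d_style_loss(rendered_latent, text_scale=0.5)
loss.backward()
print(loss.item(), rendered_latent.grad.abs().mean().item())
```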

Lin Wang, Yuyang Wang, Yuli Tian, Yong Liao, Yonghui Wang, Haoran Li, Peng Yuan Zhou

Subjects: Computing Technology; Computer Technology

Lin Wang, Yuyang Wang, Yuli Tian, Yong Liao, Yonghui Wang, Haoran Li, Peng Yuan Zhou. Text-to-3D Generation by 2D Editing [EB/OL]. (2024-12-08) [2025-05-17]. https://arxiv.org/abs/2412.05929.
