BlenderFusion: 3D-Grounded Visual Editing and Generative Compositing
We present BlenderFusion, a generative visual compositing framework that synthesizes new scenes by recomposing objects, camera, and background. It follows a layering-editing-compositing pipeline: (i) segmenting and converting visual inputs into editable 3D entities (layering), (ii) editing them in Blender with 3D-grounded control (editing), and (iii) fusing them into a coherent scene using a generative compositor (compositing). Our generative compositor extends a pre-trained diffusion model to process both the original (source) and edited (target) scenes in parallel. It is fine-tuned on video frames with two key training strategies: (i) source masking, enabling flexible modifications like background replacement; (ii) simulated object jittering, facilitating disentangled control over objects and camera. BlenderFusion significantly outperforms prior methods in complex compositional scene editing tasks.
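The two training strategies mentioned above can be made concrete with a small sketch. The following is an illustrative example (not the authors' code): it shows one plausible way to realize source masking and simulated object jittering on conditioning frames during fine-tuning. All function and parameter names here are hypothetical; only PyTorch is assumed.

```python
# Illustrative sketch of the two training strategies from the abstract:
# (i) source masking and (ii) simulated object jittering.
# Names and thresholds are assumptions, not the paper's implementation.
import torch

def mask_source(source: torch.Tensor, obj_mask: torch.Tensor,
                p_mask_background: float = 0.5) -> torch.Tensor:
    """Randomly blank out the source background so the compositor cannot
    copy it verbatim, which supports edits such as background replacement."""
    if torch.rand(()) < p_mask_background:
        # keep only pixels inside the object mask; zero out the background
        return source * obj_mask
    return source

def jitter_objects(frame: torch.Tensor, obj_mask: torch.Tensor,
                   max_shift: int = 8) -> torch.Tensor:
    """Simulate small object motion by shifting the masked object region,
    helping to disentangle object placement from camera motion."""
    dy, dx = torch.randint(-max_shift, max_shift + 1, (2,)).tolist()
    shifted_obj = torch.roll(frame * obj_mask, shifts=(dy, dx), dims=(-2, -1))
    shifted_mask = torch.roll(obj_mask, shifts=(dy, dx), dims=(-2, -1))
    # paste the shifted object back over the unshifted background
    return frame * (1 - shifted_mask) + shifted_obj

if __name__ == "__main__":
    src = torch.rand(3, 64, 64)                 # source frame
    tgt = torch.rand(3, 64, 64)                 # target frame from the same video
    m = (torch.rand(1, 64, 64) > 0.7).float()   # toy object mask
    cond_src = mask_source(src, m)
    cond_tgt = jitter_objects(tgt, m)
    print(cond_src.shape, cond_tgt.shape)
```

In this reading, the masked source and the jittered target would serve as the parallel (source, target) conditioning pair fed to the dual-stream compositor; the actual conditioning format used by BlenderFusion may differ.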
Jiacheng Chen, Ramin Mehran, Xuhui Jia, Saining Xie, Sanghyun Woo
Subject areas: Computing Technology; Computer Technology
Jiacheng Chen, Ramin Mehran, Xuhui Jia, Saining Xie, Sanghyun Woo. BlenderFusion: 3D-Grounded Visual Editing and Generative Compositing [EB/OL]. (2025-06-26) [2025-07-16]. https://arxiv.org/abs/2506.17450