
CoT-lized Diffusion: Let's Reinforce T2I Generation Step-by-step

Source: arXiv

Abstract

Current text-to-image (T2I) generation models struggle to align spatial composition with the input text, especially in complex scenes. Even layout-based approaches yield suboptimal spatial control, as their generation process is decoupled from layout planning, making it difficult to refine the layout during synthesis. We present CoT-Diff, a framework that brings step-by-step chain-of-thought (CoT)-style reasoning into T2I generation by tightly integrating Multimodal Large Language Model (MLLM)-driven 3D layout planning with the diffusion process. CoT-Diff enables layout-aware reasoning inline within a single diffusion round: at each denoising step, the MLLM evaluates intermediate predictions, dynamically updates the 3D scene layout, and continuously guides the generation process. The updated layout is converted into semantic conditions and depth maps, which are fused into the diffusion model via a condition-aware attention mechanism, enabling precise spatial control and semantic injection. Experiments on 3D scene benchmarks show that CoT-Diff significantly improves spatial alignment and compositional fidelity, outperforming the state-of-the-art method by 34.7% in complex-scene spatial accuracy and validating the effectiveness of this entangled generation paradigm.
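
To make the control flow the abstract describes concrete, here is a minimal, hypothetical Python sketch of one CoT-Diff-style generation round. All class names, method signatures, and the toy tensor math (MLLMPlanner, LayoutRenderer, ConditionAwareDiffusion) are illustrative placeholders, not the authors' implementation.

```python
"""Structural sketch of the loop described in the abstract: at each denoising
step, an MLLM inspects the intermediate prediction and revises a 3D layout,
which is rendered into semantic and depth conditions for the backbone.
All components below are stubs standing in for what the paper describes."""
import numpy as np

class MLLMPlanner:
    # Placeholder for the MLLM that plans and revises the 3D scene layout.
    def initial_layout(self, prompt):
        # One 3D box per scene object; stubbed here as a single fixed box.
        return [{"label": "object", "center": (0.0, 0.0, 1.0), "size": (0.5, 0.5, 0.5)}]

    def revise(self, layout, preview):
        # In the paper the MLLM evaluates the intermediate prediction and
        # updates the layout; this stub just nudges the box position.
        for box in layout:
            x, y, z = box["center"]
            box["center"] = (x, y + 0.01, z)
        return layout

class LayoutRenderer:
    # Converts the 3D layout into the two conditions named in the abstract.
    def semantic_map(self, layout, hw=(32, 32)):
        return np.zeros(hw)                          # per-pixel semantics (stub)

    def depth_map(self, layout, hw=(32, 32)):
        return np.full(hw, layout[0]["center"][2])   # per-pixel depth (stub)

class ConditionAwareDiffusion:
    # Backbone whose attention layers consume the fused conditions
    # ("condition-aware attention"); stubbed as a weighted average.
    def sample_noise(self, hw=(32, 32)):
        return np.random.randn(*hw)

    def denoise_step(self, x_t, t, prompt, semantic_cond, depth_cond):
        guidance = 0.5 * semantic_cond + 0.5 * depth_cond
        return 0.9 * x_t + 0.1 * guidance            # toy update toward conditions

    def decode_preview(self, x_t):
        return x_t                                   # a real model decodes latents

def cot_diff_generate(prompt, steps=50):
    planner, renderer, model = MLLMPlanner(), LayoutRenderer(), ConditionAwareDiffusion()
    layout = planner.initial_layout(prompt)          # MLLM plans the scene up front
    x_t = model.sample_noise()
    for t in reversed(range(steps)):
        sem, depth = renderer.semantic_map(layout), renderer.depth_map(layout)
        x_t = model.denoise_step(x_t, t, prompt, sem, depth)
        # CoT-style step: evaluate the intermediate prediction and update
        # the 3D layout inline, within the same diffusion round.
        layout = planner.revise(layout, model.decode_preview(x_t))
    return model.decode_preview(x_t)

image = cot_diff_generate("a mug on a table, left of a lamp")
```

The key design point this sketch illustrates is the entanglement the abstract claims: layout planning sits inside the denoising loop rather than running once before it, so the conditions fed to the backbone can change between steps.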

Zheyuan Liu, Munan Ning, Qihui Zhang, Shuo Yang, Zhongrui Wang, Yiwei Yang, Xianzhe Xu, Yibing Song, Weihua Chen, Fan Wang, Li Yuan

Subjects: Computing Technology, Computer Technology

Zheyuan Liu, Munan Ning, Qihui Zhang, Shuo Yang, Zhongrui Wang, Yiwei Yang, Xianzhe Xu, Yibing Song, Weihua Chen, Fan Wang, Li Yuan. CoT-lized Diffusion: Let's Reinforce T2I Generation Step-by-step [EB/OL]. (2025-07-06) [2025-07-22]. https://arxiv.org/abs/2507.04451.
