LLMControl: Grounded Control of Text-to-Image Diffusion-based Synthesis with Multimodal LLMs
Recent spatial control methods for text-to-image (T2I) diffusion models have shown compelling results. However, these methods still fail to precisely follow the control conditions and generate the corresponding images, especially when the textual prompts contain multiple objects or describe complex spatial compositions. In this work, we present an LLM-guided framework called LLM_Control to address the challenges of controllable T2I generation. By improving grounding capabilities, LLM_Control accurately modulates pre-trained diffusion models, where visual conditions and textual prompts influence structure and appearance generation in a complementary way. We utilize a multimodal LLM as a global controller to arrange spatial layouts, augment semantic descriptions, and bind object attributes. The obtained control signals are injected into the denoising network to refocus and enhance attention maps according to novel sampling constraints. Extensive qualitative and quantitative experiments demonstrate that LLM_Control achieves competitive synthesis quality compared with other state-of-the-art methods across various pre-trained T2I models. Notably, LLM_Control handles challenging input conditions on which most existing methods fail.
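The abstract describes injecting LLM-derived layout signals into the denoising network to refocus cross-attention maps. The snippet below is a minimal, hypothetical sketch of that general idea (boosting a token's attention inside its assigned box and damping it elsewhere); the function name, tensor shapes, and the boost/suppress scheme are assumptions for illustration, not the paper's released implementation.

```python
# Hypothetical sketch: layout-guided cross-attention refocusing.
# Assumes attention weights shaped (heads, H*W, num_tokens) and boxes
# produced by an upstream multimodal-LLM layout planner.
import torch

def refocus_cross_attention(attn, token_boxes, latent_hw, boost=2.0, suppress=0.5):
    """Reweight cross-attention so each object token attends inside its box.

    attn:        (heads, H*W, num_tokens) softmax-ed cross-attention weights
    token_boxes: {token_index: (x0, y0, x1, y1)} boxes in [0, 1] coordinates
    latent_hw:   (H, W) spatial size of the latent feature map
    """
    H, W = latent_hw
    attn = attn.clone()
    ys = torch.linspace(0, 1, H).view(H, 1).expand(H, W)
    xs = torch.linspace(0, 1, W).view(1, W).expand(H, W)
    for tok, (x0, y0, x1, y1) in token_boxes.items():
        inside = ((xs >= x0) & (xs <= x1) & (ys >= y0) & (ys <= y1)).flatten()
        scale = torch.where(inside, torch.tensor(boost), torch.tensor(suppress))
        attn[:, :, tok] = attn[:, :, tok] * scale  # boost in-box, damp out-of-box
    # Renormalize over tokens so each spatial location's weights still sum to 1.
    return attn / attn.sum(dim=-1, keepdim=True).clamp_min(1e-8)

# Toy usage: 8 heads, a 16x16 latent grid, 77 text tokens; constrain token 5
# to the left half of the canvas.
attn = torch.softmax(torch.randn(8, 16 * 16, 77), dim=-1)
refocused = refocus_cross_attention(attn, {5: (0.0, 0.0, 0.5, 1.0)}, (16, 16))
print(refocused.shape)  # torch.Size([8, 256, 77])
```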
Jiaze Wang, Rui Chen, Haowang Cui
Computing Technology, Computer Technology
Jiaze Wang, Rui Chen, Haowang Cui. LLMControl: Grounded Control of Text-to-Image Diffusion-based Synthesis with Multimodal LLMs [EB/OL]. (2025-07-26) [2025-08-10]. https://arxiv.org/abs/2507.19939.