
ControlThinker: Unveiling Latent Semantics for Controllable Image Generation through Visual Reasoning

Source: arXiv
Abstract

The field of controllable image generation has seen significant advancements, with various architectures improving layout consistency between generated images and control signals. However, contemporary methods still struggle to bridge the semantic gap between semantically sparse input text prompts and the target images, often over-relying on low-level control signals to infer regional details. To address this challenge, we propose ControlThinker, a novel framework that employs a "comprehend-then-generate" paradigm. First, by incentivizing the visual reasoning capability of an MLLM, latent semantics are mined from control images to enrich text prompts. This enriched semantic understanding then seamlessly aids image generation without requiring additional complex modifications. To further tackle the uncertainty arising from the ambiguity of control images, we encourage broader exploration of reasoning trajectories and select the optimal one using a metric-based output reward model (ORM). Extensive experimental results demonstrate that ControlThinker effectively mitigates the semantic gap between raw text prompts and target images, resulting in improved visual quality and semantic consistency across a wide range of benchmarks. The code and models are available at https://github.com/Maplebb/ControlThinker.
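The best-of-N selection step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the candidate trajectories and the reward function here are hypothetical stand-ins for the MLLM's sampled reasoning outputs and the metric-based ORM.

```python
from typing import Callable, List


def select_best_trajectory(
    trajectories: List[str],
    reward_model: Callable[[str], float],
) -> str:
    """Score each candidate reasoning trajectory with an output reward
    model (ORM) and return the highest-scoring one (best-of-N)."""
    return max(trajectories, key=reward_model)


# Toy placeholder reward: prefer more detailed enriched prompts by word
# count. A real ORM would score semantic consistency with the control image.
def toy_reward(prompt: str) -> float:
    return float(len(prompt.split()))


# Hypothetical enriched-prompt candidates sampled from an MLLM.
candidates = [
    "a cat",
    "a fluffy orange cat sitting on a sunlit windowsill",
    "a cat on a windowsill",
]
best = select_best_trajectory(candidates, toy_reward)
```

With the toy reward, the most detailed candidate wins; in ControlThinker the ORM instead ranks trajectories by how well the resulting image would match the control signal and prompt semantics.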

Feng Han, Yang Jiao, Shaoxiang Chen, Junhao Xu, Jingjing Chen, Yu-Gang Jiang

Subject: Computing Technology, Computer Technology

Feng Han, Yang Jiao, Shaoxiang Chen, Junhao Xu, Jingjing Chen, Yu-Gang Jiang. ControlThinker: Unveiling Latent Semantics for Controllable Image Generation through Visual Reasoning [EB/OL]. (2025-06-04) [2025-06-22]. https://arxiv.org/abs/2506.03596.