国家预印本平台 (National Preprint Platform)

Unlocking Compositional Control: Self-Supervision for LVLM-Based Image Generation


Source: arXiv
English Abstract

This paper introduces Hierarchical Self-Supervised LVLM (Hi-SSLVLM), a novel generative model designed to significantly advance text-to-image synthesis, particularly for complex and compositionally challenging prompts. Traditional methods often grapple with the high cost of meticulously curated paired image-text datasets and struggle with precise control over fine-grained visual attributes and intricate spatial relationships. Our Hi-SSLVLM addresses these limitations through a unique two-stage self-supervised learning strategy. The first stage, Multi-Granularity Visual-Language Grounding, enables the Large Vision-Language Model (LVLM) backbone to autonomously generate and align hierarchical captions (global and local) to images, cultivating a deep internal semantic understanding without reliance on extensive human annotation. The second stage, Self-Refinement and Guided Image Generation, leverages this acquired knowledge by an Internal Compositional Planning (ICP) mechanism, where the LVLM first formulates detailed textual sub-prompts to guide the image generation process, complemented by a novel Semantic Consistency Loss for precise output alignment. Comprehensive experiments against leading baselines, including Janus-Pro-1B, Stable Diffusion XL 1.0, DeepFloyd IF v1.0, and ControlNet-XL, on multi-dimensional benchmarks such as Gemini-2.0-Flash and InternVL3-78B, demonstrate Hi-SSLVLM's superior performance across all fine-grained metrics. An in-depth ablation study confirms the critical role of each proposed component. Furthermore, human evaluations corroborate our quantitative findings, highlighting Hi-SSLVLM's enhanced fidelity to prompt, compositional accuracy, and overall aesthetic quality, marking a significant step towards more controllable and semantically consistent open-ended text-to-image generation.
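The abstract names a Semantic Consistency Loss that aligns generated images with the ICP sub-prompts, but does not give its exact form. A common choice for such alignment objectives is a cosine-similarity penalty between embeddings; the sketch below is a minimal, hypothetical illustration under that assumption (the function names, the embedding inputs, and the averaging over sub-prompts are illustrative, not the authors' published formulation):

```python
import math

def cosine_similarity(u, v):
    # Cosine similarity between two embedding vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def semantic_consistency_loss(image_embedding, subprompt_embeddings):
    """Hypothetical consistency loss: 1 minus the mean cosine similarity
    between a generated image's embedding and each sub-prompt's text
    embedding. Lower is better; 0 means perfect alignment with every
    sub-prompt."""
    sims = [cosine_similarity(image_embedding, t) for t in subprompt_embeddings]
    return 1.0 - sum(sims) / len(sims)
```

In this form, minimizing the loss pushes the image embedding toward the average direction of the sub-prompt embeddings, which matches the abstract's stated goal of "precise output alignment" with the planned sub-prompts.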

Fernando Gabriela Garcia, Spencer Burns, Ryan Shaw, Hunter Young

Subject: Computing Technology; Computer Technology

Fernando Gabriela Garcia, Spencer Burns, Ryan Shaw, Hunter Young. Unlocking Compositional Control: Self-Supervision for LVLM-Based Image Generation [EB/OL]. (2025-07-05) [2025-07-25]. https://arxiv.org/abs/2507.04151.
