
PointT2I: LLM-based text-to-image generation via keypoints


Source: arXiv
Abstract

Text-to-image (T2I) generation models have made significant advancements, producing high-quality images aligned with an input prompt. However, despite their ability to generate fine-grained images, T2I models still struggle to generate accurate images when the input prompt contains complex concepts, especially human poses. In this paper, we propose PointT2I, a framework that uses a large language model (LLM) to generate images that accurately correspond to the human pose described in the prompt. PointT2I consists of three components: keypoint generation, image generation, and a feedback system. The keypoint generation stage uses an LLM to directly produce keypoints corresponding to a human pose, based solely on the input prompt and without external references. The image generation stage then produces images conditioned on both the text prompt and the generated keypoints so that the target pose is accurately reflected. To refine the outputs of the preceding stages, we incorporate an LLM-based feedback system that assesses the semantic consistency between the generated content and the given prompt. Our framework is the first approach to leverage an LLM for keypoint-guided image generation without any fine-tuning, producing accurately pose-aligned images based solely on textual prompts.
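The abstract describes a three-stage pipeline: an LLM produces pose keypoints from the prompt, a T2I model generates an image conditioned on the prompt and those keypoints, and an LLM-based feedback system checks semantic consistency before accepting the result. A minimal sketch of how such a loop could be orchestrated is shown below; the callable signatures, the keypoint representation, and the retry-on-negative-feedback scheme are assumptions for illustration, not the paper's actual interfaces.

```python
# Hypothetical sketch of the three-stage PointT2I loop outlined in the abstract.
# All names and signatures here are illustrative assumptions.

from typing import Any, Callable

Keypoints = list[tuple[float, float]]  # assumed (x, y) joint coordinates

def point_t2i(
    prompt: str,
    keypoint_llm: Callable[[str], Keypoints],             # LLM: prompt -> pose keypoints
    image_model: Callable[[str, Keypoints], Any],         # T2I: prompt + keypoints -> image
    feedback_llm: Callable[[str, Keypoints, Any], bool],  # LLM: semantic-consistency check
    max_rounds: int = 3,
) -> Any:
    """Generate a pose-aligned image, retrying while the feedback system rejects it."""
    image = None
    for _ in range(max_rounds):
        keypoints = keypoint_llm(prompt)            # stage 1: keypoints from text only
        image = image_model(prompt, keypoints)      # stage 2: keypoint-guided generation
        if feedback_llm(prompt, keypoints, image):  # stage 3: LLM feedback on consistency
            break
    return image
```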

Taekyung Lee, Donggyu Lee, Myungjoo Kang

Computing Technology; Computer Technology

Taekyung Lee, Donggyu Lee, Myungjoo Kang. PointT2I: LLM-based text-to-image generation via keypoints[EB/OL]. (2025-06-02)[2025-07-23]. https://arxiv.org/abs/2506.01370.
