
Emergence of Text Readability in Vision Language Models


Source: arXiv
Abstract

We investigate how the ability to recognize textual content within images emerges during the training of Vision-Language Models (VLMs). Our analysis reveals a critical phenomenon: the ability to read textual information in a given image (text readability) emerges abruptly after substantial training iterations, in contrast to semantic content understanding, which develops gradually from the early stages of training. This delayed emergence may reflect how contrastive learning tends to initially prioritize general semantic understanding, with text-specific symbolic processing developing later. Interestingly, the ability to match images with rendered text develops even more slowly, indicating a deeper need for semantic integration. These findings highlight the need for tailored training strategies to accelerate robust text comprehension in VLMs, laying the groundwork for future research on optimizing multimodal learning.
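The two abilities contrasted in the abstract, gradual image-caption semantic matching versus late-emerging matching of images with rendered text, can be probed with a simple CLIP-style similarity check. The sketch below is an illustration only, not the paper's protocol: the checkpoint name, the `render_text` helper, and the `probe` function are assumptions chosen to make the example self-contained.

```python
# Minimal probe sketch (assumed setup, not the authors' evaluation code):
# (a) semantic matching  -> image embedding vs. caption text embedding
# (b) readability-style matching -> image embedding vs. an image of the
#     caption rendered as pixels, encoded by the same image encoder.
import torch
from PIL import Image, ImageDraw
from transformers import CLIPModel, CLIPProcessor

MODEL_NAME = "openai/clip-vit-base-patch32"  # stand-in public checkpoint
model = CLIPModel.from_pretrained(MODEL_NAME).eval()
processor = CLIPProcessor.from_pretrained(MODEL_NAME)


def render_text(text: str, size=(224, 224)) -> Image.Image:
    """Render a caption as black text on a white canvas."""
    canvas = Image.new("RGB", size, "white")
    ImageDraw.Draw(canvas).text((10, size[1] // 2), text, fill="black")
    return canvas


@torch.no_grad()
def probe(image: Image.Image, caption: str):
    # (a) image embedding vs. caption text embedding
    inputs = processor(text=[caption], images=image,
                       return_tensors="pt", padding=True)
    img_emb = model.get_image_features(pixel_values=inputs["pixel_values"])
    txt_emb = model.get_text_features(input_ids=inputs["input_ids"],
                                      attention_mask=inputs["attention_mask"])
    semantic_sim = torch.cosine_similarity(img_emb, txt_emb).item()

    # (b) image embedding vs. rendered-text image embedding
    rendered = processor(images=render_text(caption), return_tensors="pt")
    ren_emb = model.get_image_features(pixel_values=rendered["pixel_values"])
    readability_sim = torch.cosine_similarity(img_emb, ren_emb).item()
    return semantic_sim, readability_sim


# Example usage (hypothetical inputs):
# sem, read = probe(Image.open("photo.jpg"), "a red stop sign reading STOP")
```

Tracking the two similarity scores across training checkpoints would, under these assumptions, show the pattern the abstract describes: the semantic score rising early while the rendered-text score stays flat until much later in training.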

Jaeyoo Park, Sanghyuk Chun, Wonjae Kim, Sangdoo Yun, Bohyung Han

Subject: Computing Technology, Computer Science

Jaeyoo Park, Sanghyuk Chun, Wonjae Kim, Sangdoo Yun, Bohyung Han. Emergence of Text Readability in Vision Language Models [EB/OL]. (2025-06-24) [2025-07-16]. https://arxiv.org/abs/2506.19389.
