Doc-CoB: Enhancing Multi-Modal Document Understanding with Visual Chain-of-Boxes Reasoning
Multimodal large language models (MLLMs) have made significant progress in document understanding. However, the information-dense nature of document images still poses challenges, as most queries depend on only a few relevant regions, with the rest being redundant. Existing one-pass MLLMs process entire document images without considering query relevance, often failing to focus on critical regions and producing unfaithful responses. Inspired by the human coarse-to-fine reading pattern, we introduce Doc-CoB (Chain-of-Box), a simple yet effective mechanism that integrates human-style visual reasoning into MLLMs without modifying their architecture. Our method allows the model to autonomously select the set of regions (boxes) most relevant to the query, and then focus attention on them for further understanding. We first design a fully automatic pipeline, integrating a commercial MLLM with a layout analyzer, to generate 249k training samples with intermediate visual reasoning supervision. Then we incorporate two enabling tasks that improve box identification and box-query reasoning, which together enhance document understanding. Extensive experiments on seven benchmarks with four popular models show that Doc-CoB significantly improves performance, demonstrating its effectiveness and wide applicability. All code, data, and models will be released publicly.
Ye Mo, Zirui Shao, Kai Ye, Xianwei Mao, Bo Zhang, Hangdi Xing, Peng Ye, Gang Huang, Kehan Chen, Zhou Huan, Zixu Yan, Sheng Zhou
Computing Technology, Computer Technology
Ye Mo, Zirui Shao, Kai Ye, Xianwei Mao, Bo Zhang, Hangdi Xing, Peng Ye, Gang Huang, Kehan Chen, Zhou Huan, Zixu Yan, Sheng Zhou. Doc-CoB: Enhancing Multi-Modal Document Understanding with Visual Chain-of-Boxes Reasoning [EB/OL]. (2025-05-24) [2025-06-14]. https://arxiv.org/abs/2505.18603.
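The abstract describes a two-step, coarse-to-fine inference pattern: the model first selects the query-relevant boxes, then answers while focusing on those regions. The sketch below illustrates that flow only at a schematic level; the `query_mllm` helper, prompt wording, and box format are illustrative assumptions, not the authors' actual implementation or API.

```python
# Minimal sketch of a chain-of-boxes style inference flow, assuming a generic
# multimodal LLM callable. All names and prompts here are hypothetical.
from PIL import Image


def query_mllm(image: Image.Image, prompt: str) -> str:
    """Placeholder for a call to a multimodal LLM; swap in a real model."""
    raise NotImplementedError


def chain_of_boxes_answer(image_path: str, question: str) -> str:
    image = Image.open(image_path)

    # Step 1: ask the model for the regions (boxes) most relevant to the query.
    box_prompt = (
        f"Question: {question}\n"
        "List the bounding boxes (x1, y1, x2, y2) of the regions needed to answer."
    )
    box_text = query_mllm(image, box_prompt)  # e.g. "(120, 40, 480, 220)"

    # Step 2: answer the query conditioned on the selected boxes, so the model
    # attends to those regions rather than the whole information-dense page.
    answer_prompt = (
        f"Question: {question}\n"
        f"Relevant regions: {box_text}\n"
        "Answer using only the content inside these regions."
    )
    return query_mllm(image, answer_prompt)
```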