
When Does Divide and Conquer Work for Long Context LLM? A Noise Decomposition Framework


Source: arXiv
Abstract

We investigate the challenge of applying Large Language Models (LLMs) to long texts. We propose a theoretical framework that decomposes the failure modes of long-context tasks into three categories: cross-chunk dependence (task noise), confusion that grows with context size (model noise), and the imperfect integration of partial results (aggregator noise). Under this view, we analyze when multi-agent chunking is effective, i.e., dividing a long sequence into smaller chunks and aggregating the processed results of each chunk. Our experiments on tasks such as retrieval, question answering, and summarization confirm both the theoretical analysis and the conditions that favor multi-agent chunking. By exploring superlinear growth of model noise with input length, we also explain why, for large inputs, a weaker model configured with chunk-based processing can surpass a more advanced model like GPT-4o applied in a single shot. Overall, we present a principled framework for understanding long-context failures, and our results highlight a direct pathway to handling long contexts in LLMs with carefully managed chunking and aggregation strategies.
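The abstract describes a map-and-reduce style pipeline: split the long input into chunks, query the model on each chunk independently, then let an aggregator merge the partial answers. The following is a minimal sketch of that idea, not the authors' implementation; `call_llm`, the prompts, and the chunk size are hypothetical placeholders to be wired to an actual LLM client.

```python
# Sketch of divide-and-conquer chunking for a long-context question-answering task.
# Assumption: `call_llm` stands in for any chat-completion API; prompts and the
# 4000-character chunk size are illustrative choices, not values from the paper.

def call_llm(prompt: str) -> str:
    """Placeholder: send `prompt` to an LLM and return its text response."""
    raise NotImplementedError("connect this to your LLM client of choice")

def chunk_text(text: str, chunk_size: int = 4000) -> list[str]:
    """Split a long input into fixed-size character chunks."""
    return [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]

def divide_and_conquer(question: str, long_text: str) -> str:
    # Map step: each chunk is processed in isolation, so per-chunk model noise
    # stays small, at the cost of losing cross-chunk dependencies (task noise).
    partials = [
        call_llm(
            f"Context:\n{chunk}\n\nQuestion: {question}\n"
            "Answer using only this context:"
        )
        for chunk in chunk_text(long_text)
    ]
    # Reduce step: an aggregator merges the partial answers; its mistakes
    # correspond to the aggregator-noise term in the decomposition.
    joined = "\n".join(f"- {p}" for p in partials)
    return call_llm(
        f"Partial answers:\n{joined}\n\nQuestion: {question}\n"
        "Combine them into one final answer:"
    )
```

Under this sketch, the framework's trade-off is visible in the two steps: chunking suppresses the model noise that grows superlinearly with input length, while the split and merge steps introduce task noise and aggregator noise, which is why the approach pays off only under certain conditions.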

Zhen Xu, Shang Zhu, Jue Wang, Junlin Wang, Ben Athiwaratkun, Chi Wang, James Zou, Ce Zhang

Subjects: Computing Technology, Computer Technology

Zhen Xu, Shang Zhu, Jue Wang, Junlin Wang, Ben Athiwaratkun, Chi Wang, James Zou, Ce Zhang. When Does Divide and Conquer Work for Long Context LLM? A Noise Decomposition Framework [EB/OL]. (2025-06-19) [2025-06-30]. https://arxiv.org/abs/2506.16411.
