Log-Augmented Generation: Scaling Test-Time Reasoning with Reusable Computation

Source: arXiv
Abstract

While humans naturally learn and adapt from past experiences, large language models (LLMs) and their agentic counterparts struggle to retain reasoning from previous tasks and apply it in future contexts. To address this limitation, we propose a novel framework, log-augmented generation (LAG), that directly reuses prior computation and reasoning from past logs at test time to enhance the model's ability to learn from previous tasks and perform better on new, unseen challenges, all while keeping the system efficient and scalable. Specifically, our system represents task logs using key-value (KV) caches, encoding the full reasoning context of prior tasks while storing KV caches for only a selected subset of tokens. When a new task arises, LAG retrieves the KV values from relevant logs to augment generation. Our approach differs from reflection-based memory mechanisms by directly reusing prior reasoning and computations without requiring additional steps for knowledge extraction or distillation. Our method also goes beyond existing KV caching techniques, which primarily target efficiency gains rather than improving accuracy. Experiments on knowledge- and reasoning-intensive datasets demonstrate that our method significantly outperforms standard agentic systems that do not utilize logs, as well as existing solutions based on reflection and KV cache techniques.
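The abstract describes the mechanism only at a high level: encode a prior task's reasoning trace once, store the KV cache for a selected subset of its tokens, and let a new task attend to that cache at test time instead of re-encoding the log as text. Below is a minimal sketch of that idea using Hugging Face transformers; the gpt2 backbone, the "keep the last k tokens" selection rule, and the use of a single pre-chosen log (rather than retrieval over a log store) are illustrative assumptions, not the paper's actual policies.

```python
# Minimal sketch of log-augmented generation via KV-cache reuse.
# Assumptions (not from the paper): gpt2 backbone, last-k token
# selection, one pre-selected log instead of a retrieval step.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

# 1) Encode a prior task's reasoning trace once; keep its KV cache.
log_text = "Task: 17 * 24 = ? Reasoning: 17*24 = 17*20 + 17*4 = 340 + 68 = 408."
log_ids = tok(log_text, return_tensors="pt").input_ids
with torch.no_grad():
    pkv = model(log_ids, use_cache=True).past_key_values
if hasattr(pkv, "to_legacy_cache"):  # newer transformers return a Cache object
    pkv = pkv.to_legacy_cache()       # -> tuple of (key, value) per layer

# 2) Store KV entries for only a subset of tokens (here: the last k).
#    Shapes are (batch, heads, seq_len, head_dim); we slice seq_len.
#    Note: trimming shifts positions relative to the original encoding,
#    a rough edge this sketch deliberately ignores.
k = min(32, log_ids.shape[1])
trimmed = tuple((key[:, :, -k:, :], val[:, :, -k:, :]) for key, val in pkv)
try:  # newer transformers prefer a Cache object as input
    from transformers import DynamicCache
    cache_in = DynamicCache.from_legacy_cache(trimmed)
except ImportError:
    cache_in = trimmed

# 3) At test time, the new prompt attends to the cached log tokens
#    without re-encoding the log text; the attention mask must span
#    both the cached positions and the new prompt.
new_ids = tok("Task: 17 * 25 = ? Reasoning:", return_tensors="pt").input_ids
attn = torch.ones(1, k + new_ids.shape[1], dtype=torch.long)
with torch.no_grad():
    out = model(new_ids, past_key_values=cache_in,
                attention_mask=attn, use_cache=True)
next_id = out.logits[:, -1].argmax(dim=-1)
print(tok.decode(next_id))
```

A full system along the abstract's lines would maintain many such trimmed caches and, for each new task, retrieve the most relevant logs (e.g., by embedding similarity over the log text) before splicing their KV entries into generation; the selection and retrieval details are what the paper itself contributes.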

Peter Baile Chen, Yi Zhang, Dan Roth, Samuel Madden, Jacob Andreas, Michael Cafarella

Subjects: computing technology, computer technology

Peter Baile Chen, Yi Zhang, Dan Roth, Samuel Madden, Jacob Andreas, Michael Cafarella. Log-Augmented Generation: Scaling Test-Time Reasoning with Reusable Computation [EB/OL]. (2025-05-20) [2025-06-28]. https://arxiv.org/abs/2505.14398.
