Token-Budget-Aware LLM Reasoning
Reasoning is critical for large language models (LLMs) to excel in a wide range of tasks. While methods like Chain-of-Thought (CoT) reasoning enhance LLM performance by decomposing problems into intermediate steps, they also incur significant overhead in token usage, leading to increased costs. We find that the reasoning process of current LLMs is unnecessarily lengthy and can be compressed by including a reasonable token budget in the prompt, although the choice of token budget plays a crucial role in the actual compression effectiveness. We then propose a token-budget-aware LLM reasoning framework that dynamically adjusts the number of reasoning tokens based on the reasoning complexity of each problem. Experiments show that our method effectively reduces token costs in CoT reasoning with only a slight performance reduction, offering a practical solution to balance efficiency and accuracy in LLM reasoning. Code: https://github.com/GeniusHTX/TALE
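The core idea described in the abstract, adding an explicit token budget to a CoT prompt, can be sketched as follows. This is a minimal illustration based only on the abstract, not the authors' released TALE implementation; the exact prompt wording, the `budget_aware_prompt` and `answer_with_budget` helper names, and the model name are assumptions.

```python
# Minimal sketch of token-budget-aware CoT prompting.
# Assumption: this is illustrative and not the authors' TALE code;
# the prompt wording and helper functions are hypothetical.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def budget_aware_prompt(question: str, budget: int) -> str:
    # Append an explicit token budget to the usual CoT instruction,
    # asking the model to keep its reasoning within that budget.
    return (
        f"{question}\n"
        f"Let's think step by step and use less than {budget} tokens."
    )


def answer_with_budget(question: str, budget: int, model: str = "gpt-4o-mini") -> str:
    # Query a chat model with the budgeted prompt and return its reply.
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": budget_aware_prompt(question, budget)}],
    )
    return response.choices[0].message.content


# Example usage: a simple arithmetic question with a small reasoning budget.
# print(answer_with_budget("What is 12 * 17 + 5?", budget=50))
```

As the abstract notes, the budget value itself matters: a budget matched to the problem's reasoning complexity compresses the CoT output effectively, while a poorly chosen budget can hurt either efficiency or accuracy.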
Tingxu Han, Zhenting Wang, Chunrong Fang, Shiyu Zhao, Shiqing Ma, Zhenyu Chen
Computing Technology; Computer Technology
Tingxu Han, Zhenting Wang, Chunrong Fang, Shiyu Zhao, Shiqing Ma, Zhenyu Chen. Token-Budget-Aware LLM Reasoning [EB/OL]. (2024-12-24) [2025-07-20]. https://arxiv.org/abs/2412.18547.