
Structured Agent Distillation for Large Language Model


Source: arXiv
Abstract

Large language models (LLMs) exhibit strong capabilities as decision-making agents by interleaving reasoning and actions, as seen in ReAct-style frameworks. Yet, their practical deployment is constrained by high inference costs and large model sizes. We propose Structured Agent Distillation, a framework that compresses large LLM-based agents into smaller student models while preserving both reasoning fidelity and action consistency. Unlike standard token-level distillation, our method segments trajectories into {[REASON]} and {[ACT]} spans, applying segment-specific losses to align each component with the teacher's behavior. This structure-aware supervision enables compact agents to better replicate the teacher's decision process. Experiments on ALFWorld, HotPotQA-ReAct, and WebShop show that our approach consistently outperforms token-level and imitation learning baselines, achieving significant compression with minimal performance drop. Scaling and ablation results further highlight the importance of span-level alignment for efficient and deployable agents.
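To make the span-level supervision described in the abstract concrete, below is a minimal sketch, not the authors' released code, of a segment-specific distillation loss: trajectories are assumed to be pre-segmented into [REASON] and [ACT] token spans, and a separate KL-divergence term aligns student and teacher distributions on each span. The names (reason_mask, act_mask, lambda_reason, lambda_act, temperature) are illustrative assumptions, not identifiers from the paper.

```python
import torch
import torch.nn.functional as F

def span_distillation_loss(student_logits, teacher_logits,
                           reason_mask, act_mask,
                           lambda_reason=1.0, lambda_act=1.0,
                           temperature=2.0):
    """Sketch of a span-weighted KL distillation loss.

    student_logits, teacher_logits: (batch, seq_len, vocab)
    reason_mask, act_mask: (batch, seq_len) boolean masks marking
    tokens inside [REASON] and [ACT] segments, respectively.
    """
    # Token-level KL between teacher and student output distributions.
    s_logp = F.log_softmax(student_logits / temperature, dim=-1)
    t_prob = F.softmax(teacher_logits / temperature, dim=-1)
    kl_per_token = F.kl_div(s_logp, t_prob, reduction="none").sum(-1)  # (batch, seq_len)

    def masked_mean(values, mask):
        mask = mask.float()
        return (values * mask).sum() / mask.sum().clamp(min=1.0)

    # Separate losses over reasoning tokens and action tokens,
    # combined with segment-specific weights (assumed hyperparameters).
    loss_reason = masked_mean(kl_per_token, reason_mask)
    loss_act = masked_mean(kl_per_token, act_mask)
    return lambda_reason * loss_reason + lambda_act * loss_act
```

The key difference from standard token-level distillation is that the two spans are averaged and weighted independently, so reasoning fidelity and action consistency can be balanced explicitly rather than diluted into a single sequence-wide loss.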

Jun Liu, Zhenglun Kong, Peiyan Dong, Changdi Yang, Tianqi Li, Hao Tang, Geng Yuan, Wei Niu, Wenbin Zhang, Pu Zhao, Xue Lin, Dong Huang, Yanzhi Wang

Subject: Computing Technology, Computer Technology

Jun Liu, Zhenglun Kong, Peiyan Dong, Changdi Yang, Tianqi Li, Hao Tang, Geng Yuan, Wei Niu, Wenbin Zhang, Pu Zhao, Xue Lin, Dong Huang, Yanzhi Wang. Structured Agent Distillation for Large Language Model [EB/OL]. (2025-05-19) [2025-06-25]. https://arxiv.org/abs/2505.13820.
