
Self-Challenging Language Model Agents

Source: arXiv
Abstract

Large language models are quickly becoming the foundation for intelligent agents that are capable of using tools. However, training such agents is challenging because it requires human creation and annotation of a diverse set of tasks, tools, and evaluation criteria. In this paper, we propose the Self-Challenging framework for training an agent on high-quality tasks that it generates itself. The agent first plays the role of challenger and generates a task after interacting with the given tools. Each task takes the form of a novel general class of problems termed Code-as-Task, defined by an instruction, a verification function, and solution and failure cases that serve as tests, which makes it possible to filter for high-quality tasks only. The agent then takes on an executor role and trains on those tasks with reinforcement learning, using the evaluation feedback as a reward. Evaluation on two existing multi-turn tool-use agent benchmarks, M3ToolEval and TauBench, shows that the Self-Challenging framework achieves a more than two-fold improvement in Llama-3.1-8B-Instruct, despite using only self-generated training data.
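The filtering idea behind Code-as-Task is concrete enough to sketch. Below is a minimal Python illustration, assuming the structure described in the abstract: a generated task is kept only if its verification function accepts the known solution cases and rejects the known failure cases. The `CodeAsTask` container and `is_high_quality` filter are hypothetical names chosen for this sketch, not the paper's implementation.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class CodeAsTask:
    """Hypothetical container for one Code-as-Task instance: an instruction,
    a verification function, and solution/failure cases that act as tests."""
    instruction: str
    verify: Callable[[str], bool]  # returns True if a response solves the task
    solution_cases: List[str]      # responses that should pass verification
    failure_cases: List[str]       # responses that should fail verification

def is_high_quality(task: CodeAsTask) -> bool:
    """Keep a self-generated task only if its verifier accepts every known
    solution case and rejects every known failure case."""
    return (all(task.verify(s) for s in task.solution_cases)
            and not any(task.verify(f) for f in task.failure_cases))

# Example: a trivially verifiable task used to exercise the filter.
task = CodeAsTask(
    instruction="Reply with the sum of 2 and 3.",
    verify=lambda response: response.strip() == "5",
    solution_cases=["5"],
    failure_cases=["4", "six"],
)
assert is_high_quality(task)
```

Per the abstract, tasks that survive this kind of check form the training pool for the executor role, whose reinforcement-learning reward comes from the same verification signal.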

Yifei Zhou, Sergey Levine, Jason Weston, Xian Li, Sainbayar Sukhbaatar

Subjects: Computing Technology; Computer Technology

Yifei Zhou, Sergey Levine, Jason Weston, Xian Li, Sainbayar Sukhbaatar. Self-Challenging Language Model Agents [EB/OL]. (2025-06-02) [2025-07-23]. https://arxiv.org/abs/2506.01716.
