Rethinking Stateful Tool Use in Multi-Turn Dialogues: Benchmarks and Challenges
Existing benchmarks that assess Language Models (LMs) as Language Agents (LAs) for tool use primarily focus on stateless, single-turn interactions or partial evaluations, such as tool selection in a single turn, overlooking the inherently stateful nature of interactions in multi-turn applications. To fill this gap, we propose \texttt{DialogTool}, a multi-turn dialogue dataset with stateful tool interactions that covers the whole life cycle of tool use, across six key tasks in three stages: 1) \textit{tool creation}; 2) \textit{tool utilization}: tool awareness, tool selection, and tool execution; and 3) \textit{role-consistent response}: response generation and role play. Furthermore, we build \texttt{VirtualMobile} -- an embodied virtual mobile evaluation environment that simulates API calls and assesses the robustness of the created APIs\footnote{We use the terms tools and APIs interchangeably; there is no significant difference between them in this paper.}. Taking advantage of these artifacts, we conduct a comprehensive evaluation of 13 distinct open- and closed-source LLMs and provide detailed analysis at each stage, revealing that existing state-of-the-art LLMs still struggle to use tools effectively over long horizons.
Hongru Wang, Wenyu Huang, Yufei Wang, Yuanhao Xi, Jianqiao Lu, Huan Zhang, Nan Hu, Zeming Liu, Jeff Z. Pan, Kam-Fai Wong
Computing Technology; Computer Technology
Hongru Wang, Wenyu Huang, Yufei Wang, Yuanhao Xi, Jianqiao Lu, Huan Zhang, Nan Hu, Zeming Liu, Jeff Z. Pan, Kam-Fai Wong. Rethinking Stateful Tool Use in Multi-Turn Dialogues: Benchmarks and Challenges [EB/OL]. (2025-05-19) [2025-06-10]. https://arxiv.org/abs/2505.13328.