
Benchmarking Failures in Tool-Augmented Language Models


Source: arXiv
Abstract

The integration of tools has extended the capabilities of language models (LMs) beyond vanilla text generation to a wide range of scenarios. However, tool-augmented language models (TaLMs) often assume 'perfect' information access and tool availability, which may not hold in the real world. To systematically study TaLMs' imperfections, we introduce the FAIL-TALMS benchmark, featuring two major failures: under-specified user queries and non-available tools. FAIL-TALMS contains 1,749 examples using 906 tools across 21 categories, covering both single- and multi-tool usage. We evaluate top-performing proprietary and open-source models, and find that all current models except Claude struggle to recognize missing tools or information. Further, to study possible mitigations of these failures, we enable real-time human interaction through a method we name Ask-and-Help (AAH), which provides missing information or replaces non-functional tools. While AAH helps models solve tasks more correctly when queries are under-specified, it brings minimal benefit when complex tools are broken.
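
The Ask-and-Help setup lends itself to a simple control flow: before calling a tool, an agent checks whether the query specifies everything the tool needs and whether the tool is reachable, and otherwise defers to the human. Below is a minimal, hypothetical sketch of such a loop; every name in it (Tool, Task, ask_human, solve_with_aah) is invented for illustration and does not come from the paper or its released code.

```python
# Illustrative Ask-and-Help (AAH) loop for a tool-augmented agent.
# All interfaces here are hypothetical, not from the paper's code.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Tool:
    name: str
    run: Callable[[str], str]
    available: bool = True           # False simulates a non-available tool

@dataclass
class Task:
    query: str
    required_fields: list[str]       # information the tool needs
    provided_fields: dict[str, str]  # information the user actually gave
    tool: Tool

def ask_human(prompt: str) -> str:
    """Stand-in for real-time human interaction."""
    return input(f"[agent -> human] {prompt} ")

def solve_with_aah(task: Task) -> str:
    # Ask: detect an under-specified query and request the missing fields.
    for f in task.required_fields:
        if f not in task.provided_fields:
            task.provided_fields[f] = ask_human(f"Please provide '{f}':")
    # Help: if the tool is unavailable, let the human stand in for it.
    if not task.tool.available:
        return ask_human(
            f"Tool '{task.tool.name}' is broken; please give its output:")
    call = task.query + " | " + ", ".join(
        f"{k}={v}" for k, v in task.provided_fields.items())
    return task.tool.run(call)

if __name__ == "__main__":
    flights = Tool("flight_search", run=lambda args: f"results({args})")
    # Under-specified query: the departure date is never given.
    task = Task(query="book a flight to Tokyo",
                required_fields=["departure_date"],
                provided_fields={},
                tool=flights)
    print(solve_with_aah(task))
```

In this toy loop the 'Ask' step simply fills in missing fields, which mirrors the abstract's finding that AAH is most useful for under-specified queries; when a complex tool is broken, the human would have to reproduce the tool's entire output, which is where the abstract reports minimal benefit.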

Eduardo Treviño, Hugo Contant, Graham Neubig, Zora Zhiruo Wang, James Ngai

Computing technology, computer technology

Eduardo Treviño, Hugo Contant, Graham Neubig, Zora Zhiruo Wang, James Ngai. Benchmarking Failures in Tool-Augmented Language Models [EB/OL]. (2025-03-18) [2025-05-02]. https://arxiv.org/abs/2503.14227.
