
Mis-prompt: Benchmarking Large Language Models for Proactive Error Handling

Source: arXiv
Abstract

Large language models (LLMs) have demonstrated significant advancements in error handling. However, current error-handling work operates in a passive manner, relying on explicit error-handling instructions, which are usually unavailable in real-world scenarios. This paper identifies the resulting challenge: how to conduct proactive error handling without explicit error-handling instructions. To promote further research, this work introduces a new benchmark, termed Mis-prompt, consisting of four evaluation tasks, an error-category taxonomy, and a new evaluation dataset. Furthermore, this work analyzes current LLMs' performance on the benchmark. The experimental results reveal that current LLMs perform poorly at proactive error handling, and that supervised fine-tuning (SFT) on error-handling instances improves LLMs' proactive error-handling capabilities. The dataset will be publicly available.

Jiayi Zeng, Yizhe Feng, Mengliang He, Wenhui Lei, Wei Zhang, Zeming Liu, Xiaoming Shi, Aimin Zhou

Subjects: Computing Technology; Computer Technology

Jiayi Zeng, Yizhe Feng, Mengliang He, Wenhui Lei, Wei Zhang, Zeming Liu, Xiaoming Shi, Aimin Zhou. Mis-prompt: Benchmarking Large Language Models for Proactive Error Handling [EB/OL]. (2025-05-29) [2025-06-21]. https://arxiv.org/abs/2506.00064.
