
MTR-Bench: A Comprehensive Benchmark for Multi-Turn Reasoning Evaluation

Source: arXiv
Abstract

Recent advances in Large Language Models (LLMs) have shown promising results on complex reasoning tasks. However, current evaluations predominantly focus on single-turn reasoning scenarios, leaving interactive tasks largely unexplored. We attribute this to the absence of comprehensive datasets and scalable automatic evaluation protocols. To fill these gaps, we present MTR-Bench for evaluating LLMs' multi-turn reasoning. Comprising 4 classes, 40 tasks, and 3600 instances, MTR-Bench covers diverse reasoning capabilities, offers fine-grained difficulty levels, and requires multi-turn interaction with the environments. Moreover, MTR-Bench features a fully automated framework spanning both dataset construction and model evaluation, enabling scalable assessment without human intervention. Extensive experiments reveal that even cutting-edge reasoning models fall short on multi-turn, interactive reasoning tasks. Further analysis of these results yields valuable insights for future research on interactive AI systems.
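The abstract describes an evaluation protocol in which the model must interact with an environment over multiple turns and is scored automatically. As a minimal sketch of what such an environment-in-the-loop evaluation can look like, the Python snippet below runs a toy multi-turn episode (a number-guessing environment with a stubbed model call). All names here (`Environment`, `query_model`, `MAX_TURNS`) are illustrative assumptions, not the actual MTR-Bench implementation.

```python
# Hypothetical sketch of a multi-turn, environment-in-the-loop evaluation loop.
# Not MTR-Bench code; a toy environment and a stubbed "model" for illustration.
from dataclasses import dataclass, field

MAX_TURNS = 10  # assumed cap on interaction length


@dataclass
class Environment:
    """Toy environment: the model must find a hidden integer via feedback."""
    secret: int
    history: list = field(default_factory=list)

    def step(self, guess: int) -> tuple[str, bool]:
        """Return textual feedback and whether the episode is solved."""
        if guess == self.secret:
            return "correct", True
        return ("too low" if guess < self.secret else "too high"), False


def query_model(prompt: str) -> int:
    """Placeholder for an LLM call; here a trivial stub that bisects on feedback."""
    lo, hi = 0, 100
    for line in prompt.splitlines():
        if "too low" in line:
            lo = int(line.split()[1]) + 1
        elif "too high" in line:
            hi = int(line.split()[1]) - 1
    return (lo + hi) // 2


def evaluate(env: Environment) -> dict:
    """Run one multi-turn episode and score it automatically (no human judge)."""
    prompt = "Guess the hidden number between 0 and 100.\n"
    for turn in range(1, MAX_TURNS + 1):
        guess = query_model(prompt)
        feedback, solved = env.step(guess)
        prompt += f"guess {guess} -> {feedback}\n"
        if solved:
            return {"solved": True, "turns": turn}
    return {"solved": False, "turns": MAX_TURNS}


if __name__ == "__main__":
    print(evaluate(Environment(secret=42)))  # e.g. {'solved': True, 'turns': 7}
```

In a real benchmark of this kind, `query_model` would wrap an actual LLM API call and the environment would implement one of the benchmark's interactive tasks; the key point the sketch illustrates is that both the feedback and the final scoring are produced programmatically, so evaluation scales without human intervention.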

Xiaoyuan Li, Keqin Bao, Yubo Ma, Moxin Li, Wenjie Wang, Rui Men, Yichang Zhang, Fuli Feng, Dayiheng Liu, Junyang Lin

Subject: Computing Technology, Computer Technology

Xiaoyuan Li, Keqin Bao, Yubo Ma, Moxin Li, Wenjie Wang, Rui Men, Yichang Zhang, Fuli Feng, Dayiheng Liu, Junyang Lin. MTR-Bench: A Comprehensive Benchmark for Multi-Turn Reasoning Evaluation [EB/OL]. (2025-05-21) [2025-07-16]. https://arxiv.org/abs/2505.17123.
