SpeechR: A Benchmark for Speech Reasoning in Large Audio-Language Models
Large audio-language models (LALMs) have achieved near-human performance in sentence-level transcription and emotion recognition. However, existing evaluations focus mainly on surface-level perception, leaving models' capacity for contextual, inference-driven reasoning in speech-based scenarios largely unexamined. To address this gap, we introduce SpeechR, a unified benchmark for evaluating reasoning over speech in large audio-language models. SpeechR evaluates models along three key dimensions: factual retrieval, procedural inference, and normative judgment. It includes three distinct evaluation formats. The multiple-choice version measures answer-selection accuracy. The generative version assesses the coherence and logical consistency of reasoning chains. The acoustic-feature version investigates whether variations in stress and emotion affect reasoning performance. Evaluations of eleven state-of-the-art LALMs reveal that high transcription accuracy does not translate into strong reasoning capabilities. SpeechR establishes a structured benchmark for evaluating reasoning in spoken language, enabling more targeted analysis of model capabilities across diverse dialogue-based tasks.
Wanqi Yang, Yanda Li, Yunchao Wei, Meng Fang, Ling Chen
Computing technology, computer technology
Wanqi Yang, Yanda Li, Yunchao Wei, Meng Fang, Ling Chen. SpeechR: A Benchmark for Speech Reasoning in Large Audio-Language Models [EB/OL]. (2025-08-04) [2025-08-26]. https://arxiv.org/abs/2508.02018