mSTEB: Massively Multilingual Evaluation of LLMs on Speech and Text Tasks
Large language models (LLMs) have demonstrated impressive performance on a wide range of tasks, including in multimodal settings such as speech. However, their evaluation is often limited to English and a few high-resource languages; for low-resource languages, there is no standardized evaluation benchmark. In this paper, we address this gap by introducing mSTEB, a new benchmark for evaluating the performance of LLMs on a wide range of tasks, covering language identification, text classification, question answering, and translation, in both speech and text modalities. We evaluate leading LLMs such as Gemini 2.0 Flash and GPT-4o (Audio) as well as state-of-the-art open models such as Qwen 2 Audio and Gemma 3 27B. Our evaluation reveals a wide performance gap between high-resource and low-resource languages, especially for languages spoken in Africa and the Americas/Oceania. Our findings show that more investment is needed to address their under-representation in LLM coverage.
Luel Hagos Beyene, Vivek Verma, Min Ma, Jesujoba O. Alabi, Fabian David Schmidt, Joyce Nakatumba-Nabende, David Ifeoluwa Adelani
Linguistics; African languages; languages of the Americas; languages of Oceania; computing and computer technology
Luel Hagos Beyene, Vivek Verma, Min Ma, Jesujoba O. Alabi, Fabian David Schmidt, Joyce Nakatumba-Nabende, David Ifeoluwa Adelani. mSTEB: Massively Multilingual Evaluation of LLMs on Speech and Text Tasks [EB/OL]. (2025-06-25) [2025-07-25]. https://arxiv.org/abs/2506.08400.