
Beyond Classification: Towards Speech Emotion Reasoning with Multitask AudioLLMs

Source: arXiv
Abstract

Audio Large Language Models (AudioLLMs) have achieved strong results in semantic tasks like speech recognition and translation, but remain limited in modeling paralinguistic cues such as emotion. Existing approaches often treat emotion understanding as a classification problem, offering little insight into the underlying rationale behind predictions. In this work, we explore emotion reasoning, a strategy that leverages the generative capabilities of AudioLLMs to enhance emotion recognition by producing semantically aligned, evidence-grounded explanations. To support this in multitask AudioLLMs, we introduce a unified framework combining reasoning-augmented data supervision, dual-encoder architecture, and task-alternating training. This approach enables AudioLLMs to effectively learn different tasks while incorporating emotional reasoning. Experiments on IEMOCAP and MELD show that our approach not only improves emotion prediction accuracy but also enhances the coherence and evidential grounding of the generated responses.
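
The page shows only the abstract, so the exact architecture and training recipe are not given here. As a purely illustrative sketch of the two ingredients the abstract names, dual encoders and task-alternating training, the toy PyTorch snippet below fuses a semantic and a paralinguistic encoder and cycles batches between an ASR-style task and an emotion-reasoning task. All module names, dimensions, and the task list are hypothetical stand-ins, not the authors' implementation.

```python
import torch
import torch.nn as nn

class DualEncoderAudioLLM(nn.Module):
    """Toy dual-encoder model (hypothetical): one encoder for semantic
    content, one for paralinguistic cues; fused features drive a decoder."""
    def __init__(self, feat_dim=80, hidden=256, vocab=1000):
        super().__init__()
        self.semantic_enc = nn.GRU(feat_dim, hidden, batch_first=True)
        self.paraling_enc = nn.GRU(feat_dim, hidden, batch_first=True)
        self.fuse = nn.Linear(2 * hidden, hidden)
        self.decoder = nn.Linear(hidden, vocab)  # stand-in for an LLM decoder

    def forward(self, audio_feats):
        sem, _ = self.semantic_enc(audio_feats)   # semantic stream
        par, _ = self.paraling_enc(audio_feats)   # paralinguistic stream
        fused = torch.tanh(self.fuse(torch.cat([sem, par], dim=-1)))
        return self.decoder(fused)                # (batch, frames, vocab) logits

model = DualEncoderAudioLLM()
optim = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

tasks = ["asr", "emotion_reasoning"]  # hypothetical task mix
for step in range(4):
    task = tasks[step % len(tasks)]            # alternate tasks batch by batch
    feats = torch.randn(2, 50, 80)             # dummy batch: (batch, frames, mel bins)
    targets = torch.randint(0, 1000, (2, 50))  # dummy per-task token targets
    logits = model(feats)
    loss = loss_fn(logits.reshape(-1, 1000), targets.reshape(-1))
    optim.zero_grad()
    loss.backward()
    optim.step()
    print(f"step {step} task={task} loss={loss.item():.3f}")
```

In this sketch the alternation is a simple round-robin over the task list; the paper's actual scheduling, data supervision, and decoder are described in the full text at the arXiv link below.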

Geyu Lin, Zhuohan Liu, Wenyu Zhang, Yingxu He, Shuo Sun, Bin Wang, Xunlong Zou, Jeremy H. M. Wong, Qiongqiong Wang, Hardik B. Sailor, Nancy F. Chen, Ai Ti Aw

Subjects: Computing Technology; Computer Technology

Geyu Lin, Zhuohan Liu, Wenyu Zhang, Yingxu He, Shuo Sun, Bin Wang, Xunlong Zou, Jeremy H. M. Wong, Qiongqiong Wang, Hardik B. Sailor, Nancy F. Chen, Ai Ti Aw. Beyond Classification: Towards Speech Emotion Reasoning with Multitask AudioLLMs [EB/OL]. (2025-06-07) [2025-06-22]. https://arxiv.org/abs/2506.06820.
