National Preprint Platform (国家预印本平台)

Multi-Teacher Language-Aware Knowledge Distillation for Multilingual Speech Emotion Recognition


Source: arXiv
Abstract

Speech Emotion Recognition (SER) is crucial for improving human-computer interaction. Despite strides in monolingual SER, extending them to build a multilingual system remains challenging. Our goal is to train a single model capable of multilingual SER by distilling knowledge from multiple teacher models. To address this, we introduce a novel language-aware multi-teacher knowledge distillation method to advance SER in English, Finnish, and French. It leverages Wav2Vec2.0 as the foundation of monolingual teacher models and then distills their knowledge into a single multilingual student model. The student model demonstrates state-of-the-art performance, with a weighted recall of 72.9 on the English dataset and an unweighted recall of 63.4 on the Finnish dataset, surpassing fine-tuning and knowledge distillation baselines. Our method excels in improving recall for sad and neutral emotions, although it still faces challenges in recognizing anger and happiness.
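The core idea described above, distilling several monolingual teachers into one multilingual student, can be sketched minimally as follows. This is a hypothetical simplification, not the paper's exact objective: it assumes each utterance is distilled only from the teacher matching its language, using a standard temperature-softened KL-divergence loss; the function names, routing rule, and temperature are illustrative assumptions.

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-softened softmax over class logits."""
    z = np.asarray(logits, dtype=float) / T
    e = np.exp(z - z.max())
    return e / e.sum()

def kd_loss(student_logits, teacher_logits, T=2.0):
    """Standard KD loss: KL(teacher || student) on softened
    distributions, scaled by T^2 as in Hinton-style distillation."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return float(np.sum(p * (np.log(p) - np.log(q))) * T * T)

def language_aware_kd(student_logits, teacher_logits_by_lang, lang):
    """Hypothetical language-aware routing: distill each utterance
    only from the monolingual teacher of its own language."""
    return kd_loss(student_logits, teacher_logits_by_lang[lang])

# Toy usage with 4 emotion classes and three monolingual teachers.
teachers = {
    "en": np.array([2.0, 0.5, -1.0, 0.0]),
    "fi": np.array([0.1, 1.8, -0.5, 0.3]),
    "fr": np.array([-0.2, 0.4, 2.1, 0.0]),
}
student = np.array([1.5, 0.4, -0.8, 0.1])
loss = language_aware_kd(student, teachers, "en")
```

A full system would combine this distillation term with a supervised cross-entropy loss on the emotion labels and backpropagate through a Wav2Vec2.0-based student; the sketch only shows the per-utterance teacher-routing idea.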

Mehedi Hasan Bijoy, Dejan Porjazovski, Tamás Grósz, Mikko Kurimo

Language families: Indo-European; Uralic (Finno-Ugric)

Mehedi Hasan Bijoy, Dejan Porjazovski, Tamás Grósz, Mikko Kurimo. Multi-Teacher Language-Aware Knowledge Distillation for Multilingual Speech Emotion Recognition [EB/OL]. (2025-06-10) [2025-06-23]. https://arxiv.org/abs/2506.08717.
