Enhancing Speech Emotion Recognition with Graph-Based Multimodal Fusion and Prosodic Features for the Speech Emotion Recognition in Naturalistic Conditions Challenge at Interspeech 2025
Training speech emotion recognition (SER) models on natural, spontaneous speech is especially challenging due to the subtle expression of emotions and the unpredictable nature of real-world audio. In this paper, we present a robust system for the INTERSPEECH 2025 Speech Emotion Recognition in Naturalistic Conditions Challenge, focusing on categorical emotion recognition. Our method combines state-of-the-art audio models with text features enriched by prosodic and spectral cues. In particular, we investigate the effectiveness of Fundamental Frequency (F0) quantization and the use of a pretrained audio tagging model. We also employ an ensemble model to improve robustness. On the official test set, our system achieved a Macro F1-score of 39.79% (42.20% on the validation set). These results underscore the potential of our approach, and our analysis of fusion techniques confirms the effectiveness of Graph Attention Networks. Our source code is publicly available.
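To make the F0 quantization idea concrete, here is a minimal sketch of one plausible pipeline: extract F0 with pYIN and discretize it into log-spaced bins. The abstract does not specify the paper's actual extraction method, bin count, or frequency range, so the use of librosa, the 32 bins, the 50-600 Hz range, and the function name `quantized_f0` are all illustrative assumptions, not the authors' implementation.

```python
# Hypothetical F0 quantization sketch; parameters and library choice are
# assumptions, not the method described in the paper.
import numpy as np
import librosa

def quantized_f0(path, n_bins=32, fmin=50.0, fmax=600.0):
    """Extract F0 with pYIN and quantize it into discrete log-spaced bins."""
    y, sr = librosa.load(path, sr=16000)
    f0, voiced, _ = librosa.pyin(y, fmin=fmin, fmax=fmax, sr=sr)
    f0 = np.where(voiced, f0, fmin)  # map unvoiced frames to the floor bin
    # Log-spaced edges give roughly perceptual (semitone-like) spacing.
    edges = np.geomspace(fmin, fmax, n_bins + 1)
    tokens = np.clip(np.digitize(f0, edges) - 1, 0, n_bins - 1)
    return tokens  # one integer token per frame, usable as a discrete feature

tokens = quantized_f0("utterance.wav")
print(tokens[:20])
```

Similarly, the graph-based fusion could in principle be realized with a Graph Attention Network over modality nodes. The sketch below uses PyTorch Geometric's `GATConv` on a fully connected three-node graph (audio, text, prosody); the actual graph construction, node granularity, and dimensions used by the authors are not stated in the abstract, so everything here is an assumption.

```python
# Hypothetical GAT-based multimodal fusion sketch; the paper's actual graph
# structure and hyperparameters are assumptions here.
import torch
from torch_geometric.nn import GATConv

class GraphFusion(torch.nn.Module):
    def __init__(self, dim=256, heads=4):
        super().__init__()
        # concat=True (default) restores the full dim: heads * (dim // heads).
        self.gat = GATConv(dim, dim // heads, heads=heads)

    def forward(self, audio_emb, text_emb, prosody_emb):
        # One node per modality, fully connected (self-loops included).
        x = torch.stack([audio_emb, text_emb, prosody_emb])   # (3, dim)
        idx = torch.arange(3)
        edge_index = torch.cartesian_prod(idx, idx).t()       # (2, 9)
        return self.gat(x, edge_index).mean(dim=0)            # fused (dim,)

fusion = GraphFusion()
fused = fusion(torch.randn(256), torch.randn(256), torch.randn(256))
```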
Alef Iury Siqueira Ferreira, Lucas Rafael Gris, Alexandre Ferro Filho, Lucas Ólives, Daniel Ribeiro, Luiz Fernando, Fernanda Lustosa, Rodrigo Tanaka, Frederico Santos de Oliveira, Arlindo Galvão Filho
Computing and Computer Technology
Alef Iury Siqueira Ferreira, Lucas Rafael Gris, Alexandre Ferro Filho, Lucas Ólives, Daniel Ribeiro, Luiz Fernando, Fernanda Lustosa, Rodrigo Tanaka, Frederico Santos de Oliveira, Arlindo Galvão Filho. Enhancing Speech Emotion Recognition with Graph-Based Multimodal Fusion and Prosodic Features for the Speech Emotion Recognition in Naturalistic Conditions Challenge at Interspeech 2025 [EB/OL]. (2025-06-02) [2025-06-24]. https://arxiv.org/abs/2506.02088