
WhisQ: Cross-Modal Representation Learning for Text-to-Music MOS Prediction


Source: arXiv
Abstract

Mean Opinion Score (MOS) prediction for text-to-music systems requires evaluating both overall musical quality and text prompt alignment. This paper introduces WhisQ, a multimodal architecture that addresses this dual-assessment challenge through sequence-level co-attention and optimal transport regularization. WhisQ employs the pretrained Whisper Base model for temporal audio encoding and Qwen 3, a 0.6B Small Language Model (SLM), for text encoding, with both maintaining sequence structure for fine-grained cross-modal modeling. The architecture features specialized prediction pathways: Overall Music Quality (OMQ) is predicted from pooled audio embeddings, while Text Alignment (TA) leverages bidirectional sequence co-attention between audio and text. A Sinkhorn optimal transport loss further enforces semantic alignment in the shared embedding space. On the MusicEval Track-1 dataset, WhisQ achieves substantial improvements over the baseline: a 7% gain in Spearman correlation for OMQ and a 14% gain for TA. Ablation studies reveal that optimal transport regularization provides the largest performance gain (10% SRCC improvement), demonstrating the importance of explicit cross-modal alignment for text-to-music evaluation.
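
To make the optimal transport regularization concrete, below is a minimal sketch (not the authors' released code) of an entropy-regularized Sinkhorn loss between audio-frame and text-token embeddings in PyTorch. The function name, cosine cost matrix, uniform marginals, and hyperparameter values are illustrative assumptions, not details taken from the paper.

    # Hypothetical sketch of a Sinkhorn optimal-transport loss between
    # audio and text embedding sequences; details are assumptions.
    import torch

    def sinkhorn_ot_loss(audio_emb, text_emb, eps=0.1, n_iters=50):
        """Entropy-regularized OT cost between two embedding sequences.

        audio_emb: (Na, d) audio frame embeddings
        text_emb:  (Nt, d) text token embeddings
        """
        # Cosine cost matrix: low cost for semantically aligned pairs.
        a = torch.nn.functional.normalize(audio_emb, dim=-1)
        t = torch.nn.functional.normalize(text_emb, dim=-1)
        cost = 1.0 - a @ t.T                      # (Na, Nt)

        # Uniform marginals over audio frames and text tokens.
        mu = torch.full((cost.size(0),), 1.0 / cost.size(0), device=cost.device)
        nu = torch.full((cost.size(1),), 1.0 / cost.size(1), device=cost.device)

        # Sinkhorn iterations in log space for numerical stability.
        log_K = -cost / eps
        log_u = torch.zeros_like(mu)
        log_v = torch.zeros_like(nu)
        for _ in range(n_iters):
            log_u = torch.log(mu) - torch.logsumexp(log_K + log_v[None, :], dim=1)
            log_v = torch.log(nu) - torch.logsumexp(log_K + log_u[:, None], dim=0)

        # Transport plan and its total cost.
        plan = torch.exp(log_u[:, None] + log_K + log_v[None, :])
        return (plan * cost).sum()

    # Example: 150 audio frames and 20 text tokens in a shared 384-dim space.
    loss = sinkhorn_ot_loss(torch.randn(150, 384), torch.randn(20, 384))

In this reading, the loss pushes the audio and text sequences toward a low-cost coupling in the shared embedding space, which is the kind of explicit cross-modal alignment the ablation identifies as the largest single contributor.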

Jakaria Islam Emon, Kazi Tamanna Alam, Md. Abu Salek

Subject: Computing Technology, Computer Technology

Jakaria Islam Emon, Kazi Tamanna Alam, Md. Abu Salek. WhisQ: Cross-Modal Representation Learning for Text-to-Music MOS Prediction [EB/OL]. (2025-06-06) [2025-06-21]. https://arxiv.org/abs/2506.05899
