Towards Source Attribution of Singing Voice Deepfake with Multimodal Foundation Models
In this work, we introduce the task of singing voice deepfake source attribution (SVDSA). We hypothesize that multimodal foundation models (MMFMs) such as ImageBind and LanguageBind will be the most effective for SVDSA, as their cross-modality pre-training better equips them to capture subtle source-specific characteristics, such as the unique timbre, pitch manipulation, or synthesis artifacts of each singing voice deepfake source. Our experiments with MMFMs, speech foundation models, and music foundation models verify this hypothesis: MMFMs are the most effective for SVDSA. Furthermore, inspired by related research, we also explore the fusion of foundation models (FMs) for improved SVDSA. To this end, we propose COFFE, a novel framework that employs the Chernoff distance as a novel loss function for effective fusion of FMs. Through COFFE with a symphony of MMFMs, we attain the best performance in comparison to all individual FMs and baseline fusion methods.
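The abstract does not specify how COFFE computes the Chernoff distance between FM representations, so the following is only a minimal sketch of one plausible formulation: each model's batch of embeddings is modelled as a diagonal Gaussian, and the closed-form Chernoff alpha-divergence between the two Gaussians serves as the fusion loss (alpha = 0.5 reduces to the Bhattacharyya distance). The function name, the diagonal-Gaussian assumption, and the shared projected dimension are all illustrative assumptions, not the authors' implementation.

```python
# Sketch (not the authors' code): Chernoff-distance loss between two
# foundation-model embedding batches, each modelled as a diagonal Gaussian.
import torch


def chernoff_distance(x: torch.Tensor, y: torch.Tensor,
                      alpha: float = 0.5, eps: float = 1e-6) -> torch.Tensor:
    """Chernoff alpha-divergence between diagonal Gaussians fit to x and y.

    x, y: (batch, dim) embeddings from two FMs, assumed already projected
    to a shared dimension. alpha=0.5 gives the Bhattacharyya distance.
    """
    mu_x, var_x = x.mean(0), x.var(0) + eps   # per-dimension mean / variance
    mu_y, var_y = y.mean(0), y.var(0) + eps
    var_a = (1 - alpha) * var_x + alpha * var_y          # interpolated covariance
    maha = ((mu_x - mu_y) ** 2 / var_a).sum()            # Mahalanobis-style term
    logdet = (var_a.log().sum()
              - (1 - alpha) * var_x.log().sum()
              - alpha * var_y.log().sum())               # log-det ratio term
    return 0.5 * alpha * (1 - alpha) * maha + 0.5 * logdet


if __name__ == "__main__":
    emb_a = torch.randn(32, 128)   # e.g. projected ImageBind embeddings
    emb_b = torch.randn(32, 128)   # e.g. projected LanguageBind embeddings
    print(chernoff_distance(emb_a, emb_b).item())
```

In a fusion setup of this kind, such a term would typically be added to the task's classification loss to regularize how the two FM embedding spaces are aligned before or during fusion.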
Orchid Chetia Phukan, Girish, Mohd Mujtaba Akhtar, Swarup Ranjan Behera, Priyabrata Mallick, Pailla Balakrishna Reddy, Arun Balaji Buduru, Rajesh Sharma
Computing Technology; Computer Technology
Orchid Chetia Phukan, Girish, Mohd Mujtaba Akhtar, Swarup Ranjan Behera, Priyabrata Mallick, Pailla Balakrishna Reddy, Arun Balaji Buduru, Rajesh Sharma. Towards Source Attribution of Singing Voice Deepfake with Multimodal Foundation Models [EB/OL]. (2025-06-03) [2025-06-27]. https://arxiv.org/abs/2506.03364.