Source Verification for Speech Deepfakes
With the proliferation of speech deepfake generators, it becomes crucial not only to assess the authenticity of synthetic audio but also to trace its origin. While source attribution models attempt to address this challenge, they often struggle in open-set conditions against unseen generators. In this paper, we introduce the source verification task, which, inspired by speaker verification, determines whether a test track was produced using the same model as a set of reference signals. Our approach leverages embeddings from a classifier trained for source attribution, computing distance scores between tracks to assess whether they originate from the same source. We evaluate multiple models across diverse scenarios, analyzing the impact of speaker diversity, language mismatch, and post-processing operations. This work provides the first exploration of source verification, highlighting its potential and vulnerabilities, and offers insights for real-world forensic applications.
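The verification scheme the abstract describes, comparing a test track's embedding against reference embeddings via a distance score, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the embedding extractor, the use of a mean reference prototype, cosine similarity as the distance, and the threshold value are all assumptions.

```python
import numpy as np

def verify_source(test_emb: np.ndarray,
                  ref_embs: np.ndarray,
                  threshold: float = 0.5) -> bool:
    """Decide whether a test track was produced by the same generator
    as a set of reference tracks, by scoring embedding similarity.

    test_emb: (D,) embedding of the track under test
              (e.g. from a source-attribution classifier).
    ref_embs: (N, D) embeddings of the reference tracks.
    threshold: decision boundary on cosine similarity (assumed value).
    """
    # Collapse the reference set into a single source prototype.
    centroid = ref_embs.mean(axis=0)
    # Cosine similarity between the test embedding and the prototype.
    score = float(np.dot(test_emb, centroid) /
                  (np.linalg.norm(test_emb) * np.linalg.norm(centroid)))
    # Same-source decision: high similarity means same generator.
    return score >= threshold
```

In practice the threshold would be calibrated on held-out same-source and cross-source pairs, e.g. at the equal error rate.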
Viola Negroni, Davide Salvi, Paolo Bestagini, Stefano Tubaro
Computing technology, computer technology
Viola Negroni, Davide Salvi, Paolo Bestagini, Stefano Tubaro. Source Verification for Speech Deepfakes [EB/OL]. (2025-05-20) [2025-06-22]. https://arxiv.org/abs/2505.14188.