EnvSDD: Benchmarking Environmental Sound Deepfake Detection
Audio generation systems can now create highly realistic soundscapes that enhance media production but also pose potential risks. Several studies have examined deepfakes in speech or singing voice. However, environmental sounds have different characteristics, which may make detection methods developed for speech and singing deepfakes less effective for real-world sounds. In addition, existing datasets for environmental sound deepfake detection are limited in scale and audio types. To address this gap, we introduce EnvSDD, the first large-scale curated dataset designed for this task, consisting of 45.25 hours of real and 316.74 hours of fake audio. The test set includes diverse conditions, such as unseen generation models and unseen datasets, to evaluate generalization. We also propose an audio deepfake detection system based on a pre-trained audio foundation model. Results on EnvSDD show that our proposed system outperforms state-of-the-art systems from the speech and singing domains.
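The abstract describes a detection system built on a pre-trained audio foundation model, without specifying its architecture. A minimal illustrative sketch, assuming the common recipe of a frozen pretrained encoder followed by a lightweight real-vs-fake classification head (the encoder below is a random stand-in, not the paper's actual model):

```python
import torch
import torch.nn as nn

class EnvDeepfakeDetector(nn.Module):
    """Hypothetical sketch: frozen foundation-model encoder + linear head.

    `encoder` stands in for a pretrained audio foundation model; the paper's
    actual architecture and training details are not reproduced here.
    """
    def __init__(self, encoder: nn.Module, embed_dim: int):
        super().__init__()
        self.encoder = encoder
        for p in self.encoder.parameters():
            p.requires_grad = False          # keep the foundation model frozen
        self.head = nn.Linear(embed_dim, 2)  # logits for real vs. fake

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        emb = self.encoder(feats)   # (batch, frames, embed_dim)
        emb = emb.mean(dim=1)       # temporal average pooling to one embedding
        return self.head(emb)       # (batch, 2)

# Toy stand-in encoder; a real system would load pretrained weights instead.
encoder = nn.Sequential(nn.Linear(128, 256), nn.ReLU())
model = EnvDeepfakeDetector(encoder, embed_dim=256)
logits = model(torch.randn(4, 100, 128))  # 4 clips, 100 frames, 128 mel bins
print(logits.shape)  # torch.Size([4, 2])
```

Freezing the encoder and training only the head is one standard way to adapt a foundation model to a new downstream task with limited labeled data; whether EnvSDD's system freezes or fine-tunes the encoder is not stated in the abstract.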
Han Yin, Yang Xiao, Rohan Kumar Das, Jisheng Bai, Haohe Liu, Wenwu Wang, Mark D. Plumbley
Current developments in environmental science and technology
Han Yin, Yang Xiao, Rohan Kumar Das, Jisheng Bai, Haohe Liu, Wenwu Wang, Mark D. Plumbley. EnvSDD: Benchmarking Environmental Sound Deepfake Detection [EB/OL]. (2025-05-25) [2025-07-17]. https://arxiv.org/abs/2505.19203.