MLAAD: The Multi-Language Audio Anti-Spoofing Dataset
Text-to-Speech (TTS) technology offers notable benefits, such as providing a voice for individuals with speech impairments, but it also facilitates the creation of audio deepfakes and spoofing attacks. AI-based detection methods can help mitigate these risks; however, the performance of such models is inherently dependent on the quality and diversity of their training data. Presently, the available datasets are heavily skewed towards English and Chinese audio, which limits the global applicability of these anti-spoofing systems. To address this limitation, this paper presents the Multi-Language Audio Anti-Spoofing Dataset (MLAAD), version 7, created using 101 TTS models, comprising 52 different architectures, to generate 485.3 hours of synthetic voice in 40 different languages. We train and evaluate three state-of-the-art deepfake detection models with MLAAD and observe that it demonstrates superior performance over comparable datasets like InTheWild and Fake-Or-Real when used as a training resource. Moreover, compared to the renowned ASVspoof 2019 dataset, MLAAD proves to be a complementary resource. In tests across eight datasets, MLAAD and ASVspoof 2019 alternately outperformed each other, each excelling on four datasets. By publishing MLAAD and making a trained model accessible via an interactive webserver, we aim to democratize anti-spoofing technology, making it accessible beyond the realm of specialists, and contributing to global efforts against audio spoofing and deepfakes.
Piotr Syga, Konstantin Böttinger, Philip Sperl, Edresson Casanova, Eren Gölge, Thorsten Müller, Nicolas M. Müller, Piotr Kawa, Wei Herng Choong
Linguistics
Piotr Syga, Konstantin Böttinger, Philip Sperl, Edresson Casanova, Eren Gölge, Thorsten Müller, Nicolas M. Müller, Piotr Kawa, Wei Herng Choong. MLAAD: The Multi-Language Audio Anti-Spoofing Dataset [EB/OL]. (2025-07-11) [2025-07-23]. https://arxiv.org/abs/2401.09512.