National Preprint Platform

M3SD: Multi-modal, Multi-scenario and Multi-language Speaker Diarization Dataset


Source: arXiv
English Abstract

In the field of speaker diarization, progress is constrained by two problems: insufficient data resources and the poor generalization ability of deep learning models. To address these problems, we first propose an automated method for constructing speaker diarization datasets, which generates more accurate pseudo-labels for massive data by combining audio and video. Using this method, we have released the Multi-modal, Multi-scenario and Multi-language Speaker Diarization (M3SD) dataset. The dataset is derived from real online videos and is highly diverse. Our dataset and code have been open-sourced at https://huggingface.co/spaces/OldDragon/m3sd.
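The abstract describes refining audio-based diarization pseudo-labels with video information, but gives no implementation details. As a minimal hedged sketch of the general idea (not the authors' actual pipeline), one common audio-visual fusion strategy is to reassign each audio speech segment to the face track that overlaps it most in time, keeping the audio cluster label when no face track overlaps sufficiently. All function and variable names below are illustrative assumptions:

```python
# Hypothetical sketch: refine audio diarization pseudo-labels using
# video face tracks by temporal-overlap matching. This is NOT the
# M3SD pipeline; it only illustrates the audio-visual fusion concept.

def refine_pseudo_labels(audio_segments, face_tracks, min_overlap_ratio=0.5):
    """audio_segments: list of (start, end, cluster_id) from audio diarization.
    face_tracks: list of (start, end, face_id) from video face tracking.
    Returns segments relabeled with a face identity when the best-overlapping
    face track covers at least `min_overlap_ratio` of the segment."""
    refined = []
    for start, end, cluster in audio_segments:
        best_face, best_overlap = None, 0.0
        for f_start, f_end, face in face_tracks:
            # Duration of temporal intersection between segment and track.
            overlap = max(0.0, min(end, f_end) - max(start, f_start))
            if overlap > best_overlap:
                best_face, best_overlap = face, overlap
        duration = end - start
        if best_face is not None and best_overlap >= min_overlap_ratio * duration:
            refined.append((start, end, best_face))   # trust the visual identity
        else:
            refined.append((start, end, cluster))     # keep the audio cluster
    return refined


if __name__ == "__main__":
    audio = [(0.0, 2.0, "spk0"), (2.0, 4.0, "spk1")]
    faces = [(0.0, 2.1, "faceA")]  # face visible only during the first segment
    print(refine_pseudo_labels(audio, faces))
    # First segment is relabeled "faceA"; second keeps its audio label "spk1".
```

In practice such a scheme would also need voice-activity detection, face-voice association scores, and handling of off-screen speakers; this sketch only shows the overlap-based relabeling step.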

Shilong Wu

Subject: Computing Technology, Computer Technology

Shilong Wu. M3SD: Multi-modal, Multi-scenario and Multi-language Speaker Diarization Dataset [EB/OL]. (2025-06-28) [2025-07-16]. https://arxiv.org/abs/2506.14427.
