ASDA: Audio Spectrogram Differential Attention Mechanism for Self-Supervised Representation Learning
In recent audio self-supervised representation learning, the standard Transformer has emerged as the predominant architecture, yet its attention mechanism often allocates a portion of the attention weights to irrelevant information, potentially impairing the model's discriminative ability. To address this, we introduce a differential attention mechanism, which effectively mitigates ineffective attention allocation through the integration of dual-softmax operations and appropriately tuned differential coefficients. Experimental results demonstrate that our ASDA model achieves state-of-the-art (SOTA) performance across multiple benchmarks, including audio classification (49.0% mAP on AS-2M, 41.5% mAP on AS20K), keyword spotting (98.3% accuracy on SPC-2), and environmental sound classification (96.1% accuracy on ESC-50). These results highlight ASDA's effectiveness in audio tasks, paving the way for broader applications.
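The dual-softmax idea described above can be sketched as follows. This is a minimal, illustrative NumPy implementation of a differential attention map, not the paper's actual code: two independent query/key projections each produce a softmax attention map, and the second map, scaled by a differential coefficient λ, is subtracted from the first to cancel attention assigned to irrelevant positions. All names (`differential_attention`, the weight matrices, the fixed `lam`) are hypothetical; in practice λ is typically a tuned or learned parameter and the operation runs per head.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def differential_attention(x, Wq1, Wk1, Wq2, Wk2, Wv, lam=0.5):
    """Single-head differential attention sketch (hypothetical names).

    x: (T, D) sequence of T token embeddings.
    Two softmax maps are computed from separate projections; the
    second is scaled by the differential coefficient lam and
    subtracted, suppressing commonly (mis)attended positions.
    """
    d = Wq1.shape[1]
    a1 = softmax((x @ Wq1) @ (x @ Wk1).T / np.sqrt(d))  # first attention map
    a2 = softmax((x @ Wq2) @ (x @ Wk2).T / np.sqrt(d))  # second attention map
    attn = a1 - lam * a2                                # differential map
    return attn @ (x @ Wv)

# Toy usage on random data.
rng = np.random.default_rng(0)
T, D = 4, 8
x = rng.standard_normal((T, D))
Ws = [rng.standard_normal((D, D)) * 0.1 for _ in range(5)]
out = differential_attention(x, *Ws, lam=0.5)
print(out.shape)  # (4, 8)
```

Note that because each softmax map's rows sum to 1, each row of the differential map sums to 1 − λ; the published formulation additionally renormalizes and learns λ, details this sketch omits.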
Junyu Wang, Tianrui Wang, Meng Ge, Longbiao Wang, Jianwu Dang
Subjects: Information Science, Information Technology; Computing Technology, Computer Technology
Junyu Wang, Tianrui Wang, Meng Ge, Longbiao Wang, Jianwu Dang. ASDA: Audio Spectrogram Differential Attention Mechanism for Self-Supervised Representation Learning [EB/OL]. (2025-07-03) [2025-07-16]. https://arxiv.org/abs/2507.02666