Token Pruning in Audio Transformers: Optimizing Performance and Decoding Patch Importance
Vision Transformers (ViTs) have achieved state-of-the-art performance across various computer vision tasks, but their high computational cost remains a challenge. Token pruning has been proposed to reduce this cost by selectively removing less important tokens. While token pruning is effective in vision tasks, where non-object regions can simply be discarded, applying it to audio tasks presents unique challenges: distinguishing relevant from irrelevant regions in time-frequency representations is far less straightforward. In this study, we apply token pruning, for the first time, to ViT-based audio classification models operating on Mel-spectrograms and analyze the trade-offs between model performance and computational cost: TopK token pruning reduces the MAC operations of AudioMAE and AST by 30-40% with less than a 1% drop in classification accuracy. Our analysis reveals that while high-intensity tokens contribute significantly to model accuracy, low-intensity tokens remain important; in particular, they play a more critical role in general audio classification tasks than in speech-specific tasks.
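A minimal sketch of the TopK token-pruning idea described above, in PyTorch. The function name, the keep_ratio parameter, and the use of a generic per-token importance score (e.g., derived from CLS attention) are illustrative assumptions, not the paper's exact method; a keep ratio around 0.6-0.7 is chosen only to loosely correspond to the reported 30-40% MAC reduction.

```python
import torch

def topk_token_pruning(tokens: torch.Tensor, scores: torch.Tensor,
                       keep_ratio: float = 0.6):
    """Keep only the highest-scoring tokens (hypothetical helper).

    tokens: (B, N, D) patch embeddings, e.g., Mel-spectrogram patches
    scores: (B, N) per-token importance, e.g., mean CLS attention
    keep_ratio: fraction of tokens retained per example
    """
    B, N, D = tokens.shape
    k = max(1, int(N * keep_ratio))
    # Indices of the k most important tokens in each example
    topk_idx = scores.topk(k, dim=1).indices                  # (B, k)
    # Gather the surviving tokens; the remaining tokens are dropped,
    # so all later transformer layers process only k tokens
    pruned = tokens.gather(1, topk_idx.unsqueeze(-1).expand(B, k, D))
    return pruned, topk_idx
```

Because self-attention cost grows quadratically with the token count, pruning even a modest fraction of tokens in the early layers can yield the kind of MAC savings the abstract reports.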
Taehan Lee, Hyukjun Lee
Computing Technology; Computer Technology
Taehan Lee, Hyukjun Lee. Token Pruning in Audio Transformers: Optimizing Performance and Decoding Patch Importance [EB/OL]. (2025-04-02) [2025-07-02]. https://arxiv.org/abs/2504.01690.