Mettle: Meta-Token Learning for Memory-Efficient Audio-Visual Adaptation
We present \textbf{Met}a-\textbf{T}oken \textbf{Le}arning (Mettle), a simple and memory-efficient method for adapting large-scale pretrained transformer models to downstream audio-visual tasks. Instead of sequentially modifying the output feature distribution of the transformer backbone, Mettle uses a lightweight \textit{Layer-Centric Distillation (LCD)} module to distill, in parallel, the intact audio or visual features produced by each transformer layer into compact meta-tokens. This distillation balances pretrained knowledge preservation with task-specific adaptation. The resulting meta-tokens can be applied directly to classification tasks such as audio-visual event localization and audio-visual video parsing. To further support fine-grained segmentation tasks such as audio-visual segmentation, we introduce a \textit{Meta-Token Injection (MTI)} module, which uses the audio and visual meta-tokens distilled from the top transformer layer to guide feature adaptation in earlier layers. Extensive experiments on multiple audio-visual benchmarks demonstrate that our method significantly reduces memory usage and training time while maintaining parameter efficiency and competitive accuracy.
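As a rough illustration of the layer-centric idea, the PyTorch sketch below shows how a small set of learnable meta-tokens could cross-attend to one frozen layer's features to distill them into a compact summary. This is a minimal sketch of the distillation pattern described in the abstract, not the paper's actual implementation; all names (`LayerCentricDistill`, `num_meta`, etc.) and design details are assumptions.

```python
import torch
import torch.nn as nn

class LayerCentricDistill(nn.Module):
    """Hypothetical sketch of a Layer-Centric Distillation (LCD) module:
    learnable meta-tokens attend to the frozen features of one backbone
    layer, producing a compact, task-adapted summary of that layer."""

    def __init__(self, dim: int, num_meta: int = 4, num_heads: int = 8):
        super().__init__()
        # A small set of learnable meta-tokens dedicated to this layer.
        self.meta_tokens = nn.Parameter(torch.randn(1, num_meta, dim) * 0.02)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, layer_feats: torch.Tensor) -> torch.Tensor:
        # layer_feats: (B, N, D) intact features from one frozen layer.
        q = self.meta_tokens.expand(layer_feats.size(0), -1, -1)
        distilled, _ = self.attn(q, layer_feats, layer_feats)
        return self.norm(distilled)  # (B, num_meta, D) compact meta-tokens
```

Because each such module only reads a layer's intact output and never feeds anything back into the backbone, the backbone can run under `torch.no_grad()` and every layer can be distilled in parallel, which is consistent with the memory- and time-efficiency claims in the abstract.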
Jinxing Zhou, Zhihui Li, Yongqiang Yu, Yanghao Zhou, Ruohao Guo, Guangyao Li, Yuxin Mao, Mingfei Han, Xiaojun Chang, Meng Wang
Computing Technology; Computer Technology
Jinxing Zhou, Zhihui Li, Yongqiang Yu, Yanghao Zhou, Ruohao Guo, Guangyao Li, Yuxin Mao, Mingfei Han, Xiaojun Chang, Meng Wang. Mettle: Meta-Token Learning for Memory-Efficient Audio-Visual Adaptation [EB/OL]. (2025-06-29) [2025-07-17]. https://arxiv.org/abs/2506.23271