
Time-Frequency-Based Attention Cache Memory Model for Real-Time Speech Separation

Source: arXiv

Abstract

Existing causal speech separation models often underperform compared to non-causal models due to difficulties in retaining historical information. To address this, we propose the Time-Frequency Attention Cache Memory (TFACM) model, which effectively captures spatio-temporal relationships through an attention mechanism and a cache memory (CM) for historical information storage. In TFACM, an LSTM layer captures frequency-relative positions, while causal modeling is applied to the time dimension using local and global representations. The CM module stores past information, and the causal attention refinement (CAR) module further enhances time-based feature representations for finer granularity. Experimental results showed that TFACM achieved performance comparable to the SOTA TF-GridNet-Causal model, with significantly lower complexity and fewer trainable parameters. For more details, visit the project page: https://cslikai.cn/TFACM/.
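The core idea of a cache memory for causal separation can be illustrated with a minimal sketch: each incoming frame attends only to previously cached frames, then is written into a fixed-capacity cache. This is a simplified illustration in NumPy, not the paper's implementation; the `CacheMemory` class, its FIFO write rule, and the single-head dot-product attention are all assumptions for exposition.

```python
import numpy as np

class CacheMemory:
    """Hypothetical fixed-capacity FIFO cache of past frame features.
    The actual CM module's write/update rules are not specified here."""
    def __init__(self, capacity, dim):
        self.capacity = capacity
        self.frames = np.zeros((0, dim))  # no history at start

    def write(self, frame):
        # append the newest frame, keep at most `capacity` entries
        self.frames = np.vstack([self.frames, frame[None, :]])[-self.capacity:]

def causal_attention(query, cache):
    """Scaled dot-product attention over cached (past-only) frames,
    so the model never looks at future frames."""
    if cache.frames.shape[0] == 0:
        return query  # no history yet: pass the frame through
    scores = cache.frames @ query / np.sqrt(query.shape[0])
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ cache.frames

# Streaming loop: attend to history first, then store the current frame.
dim, rng = 8, np.random.default_rng(0)
cm = CacheMemory(capacity=16, dim=dim)
for _ in range(5):
    frame = rng.standard_normal(dim)
    context = causal_attention(frame, cm)  # uses past frames only
    cm.write(frame)
print(cm.frames.shape)  # → (5, 8)
```

The write-after-attend ordering is what makes the attention strictly causal: a frame's context vector depends only on frames that arrived before it.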

Runxuan Yang, Xiaolin Hu, Guo Chen, Kai Li

Subjects: Communications; Wireless Communications

Runxuan Yang, Xiaolin Hu, Guo Chen, Kai Li. Time-Frequency-Based Attention Cache Memory Model for Real-Time Speech Separation [EB/OL]. (2025-05-19) [2025-06-13]. https://arxiv.org/abs/2505.13094.
