
Study of Lightweight Transformer Architectures for Single-Channel Speech Enhancement

Source: arXiv
Abstract

In speech enhancement, achieving state-of-the-art (SotA) performance while adhering to the computational constraints of edge devices remains a formidable challenge. Networks integrating stacked temporal and spectral modelling effectively leverage improved architectures such as transformers; however, they inevitably incur substantial computational complexity and model-size expansion. Through systematic ablation analysis of transformer-based temporal and spectral modelling, we demonstrate that an architecture employing streamlined Frequency-Time-Frequency (FTF) stacked transformers efficiently learns global dependencies within a causal context, while avoiding considerable computational demands. Utilising discriminators during training further improves learning efficacy and enhancement without introducing additional complexity at inference. The proposed lightweight, causal, transformer-based architecture with adversarial training (LCT-GAN) yields SotA performance on instrumental metrics among contemporary lightweight models, but with far less overhead. Compared to DeepFilterNet2, LCT-GAN requires only 6% of the parameters, at similar complexity and performance. Against CCFNet+(Lite), LCT-GAN saves 9% in parameters and 10% in multiply-accumulate operations while yielding improved performance. Further, LCT-GAN even outperforms more complex, common baseline models on widely used test datasets.
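The following is a minimal sketch of the Frequency-Time-Frequency (FTF) stacking idea described in the abstract: a frequency transformer, a causally masked time transformer, and a second frequency transformer applied to spectrogram-like features. It is not the authors' LCT-GAN implementation; the tensor layout (batch, frames, bins, channels), layer sizes, and masking strategy are illustrative assumptions.

import torch
import torch.nn as nn

def causal_mask(n: int) -> torch.Tensor:
    # Upper-triangular additive mask: each frame attends only to current and past frames.
    return torch.triu(torch.full((n, n), float("-inf")), diagonal=1)

class FTFBlock(nn.Module):
    # One Frequency-Time-Frequency stack over features shaped (batch, time, freq, channels).
    # d_model and n_heads are illustrative choices, not values from the paper.
    def __init__(self, d_model: int = 64, n_heads: int = 4):
        super().__init__()
        def layer():
            return nn.TransformerEncoderLayer(
                d_model, n_heads, dim_feedforward=2 * d_model, batch_first=True)
        self.freq_in, self.time, self.freq_out = layer(), layer(), layer()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, t, f, c = x.shape
        # Frequency transformer: attention across frequency bins, one time frame at a time.
        x = self.freq_in(x.reshape(b * t, f, c)).reshape(b, t, f, c)
        # Time transformer: causally masked attention across frames, per frequency bin.
        xt = x.permute(0, 2, 1, 3).reshape(b * f, t, c)
        xt = self.time(xt, src_mask=causal_mask(t).to(x.device))
        x = xt.reshape(b, f, t, c).permute(0, 2, 1, 3)
        # Second frequency transformer closes the F-T-F stack.
        return self.freq_out(x.reshape(b * t, f, c)).reshape(b, t, f, c)

if __name__ == "__main__":
    spec_feats = torch.randn(2, 100, 65, 64)   # (batch, frames, bins, channels)
    print(FTFBlock()(spec_feats).shape)        # torch.Size([2, 100, 65, 64])

Keeping the time-axis attention causal, as sketched above, is what allows frame-by-frame (streaming) processing, while the two frequency transformers model global spectral dependencies within each frame.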

Haixin Zhao, Nilesh Madhu

Subject: Computing Technology; Computer Technology

Haixin Zhao, Nilesh Madhu. Study of Lightweight Transformer Architectures for Single-Channel Speech Enhancement [EB/OL]. (2025-05-27) [2025-07-19]. https://arxiv.org/abs/2505.21057.