Diffusion Buffer: Online Diffusion-based Speech Enhancement with Sub-Second Latency
Diffusion models are a class of generative models that have recently been applied to speech enhancement with remarkable success, but they are computationally expensive at inference time, making them impractical for processing streaming data in real time. In this work, we adapt a sliding-window diffusion framework to the speech enhancement task. Our approach progressively corrupts speech signals through time, assigning more noise to frames closer to the present within a buffer. The method outputs denoised frames with a delay proportional to the chosen buffer size, enabling a trade-off between performance and latency. Empirical results demonstrate that our method outperforms standard diffusion models and runs efficiently on a GPU, achieving an input-output latency on the order of 0.3 to 1 second. This marks the first practical diffusion-based solution for online speech enhancement.
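To make the buffering idea concrete, the following is a minimal sketch of a sliding-window diffusion buffer; it is an illustration of the data flow described in the abstract, not the authors' implementation. The buffer size `B`, the feature dimension `F`, the linear noise schedule `sigmas`, and the placeholder `denoise_step` are all assumptions introduced here for clarity.

```python
import numpy as np

# Conceptual sketch (not the paper's code): a buffer of B frames where
# frames nearest the present carry the most diffusion noise. Each input
# hop applies one denoising step to the whole buffer, so a frame is fully
# denoised after B hops, giving an input-output latency proportional to B.

B = 8                       # buffer size (assumed); latency grows with B
F = 256                     # feature dimension per frame (assumed)
# Monotone noise schedule: sigmas[0] (oldest slot, about to be emitted) is
# nearly clean, sigmas[B-1] (newest slot) is fully noisy. The exact
# schedule here is an assumption.
sigmas = np.linspace(0.01, 1.0, B)

def denoise_step(buffer, sigmas):
    """Placeholder for one reverse-diffusion step of a trained score model.

    A real system would call the network once per hop on the entire
    buffer; here we merely attenuate toward zero to show the data flow."""
    return buffer * (1.0 - 0.5 * sigmas[:, None])

buffer = np.zeros((B, F))
for t in range(100):                       # streaming loop over input hops
    new_frame = np.random.randn(F)         # incoming noisy-speech frame
    buffer = np.roll(buffer, -1, axis=0)   # slide window: free the last slot
    buffer[-1] = new_frame + sigmas[-1] * np.random.randn(F)
    buffer = denoise_step(buffer, sigmas)  # one network call per hop
    output_frame = buffer[0]               # emitted ~B hops after arrival
```

Because only one network call is made per hop regardless of `B`, shrinking the buffer lowers latency at the cost of fewer denoising steps per frame, which is the performance-latency trade-off the abstract describes.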
Bunlong Lay, Rostislav Makarov, Timo Gerkmann
Computing Technology, Computer Technology
Bunlong Lay, Rostislav Makarov, Timo Gerkmann. Diffusion Buffer: Online Diffusion-based Speech Enhancement with Sub-Second Latency [EB/OL]. (2025-06-03) [2025-07-01]. https://arxiv.org/abs/2506.02908.