Model as Loss: A Self-Consistent Training Paradigm

Source: arXiv
Abstract

Conventional methods for speech enhancement rely on handcrafted loss functions (e.g., time or frequency domain losses) or deep feature losses (e.g., using WavLM or wav2vec), which often fail to capture subtle signal properties essential for optimal performance. To address this, we propose Model as Loss, a novel training paradigm that utilizes the encoder from the same model as a loss function to guide the training. The Model as Loss paradigm leverages the encoder's task-specific feature space, optimizing the decoder to produce output consistent with perceptual and task-relevant characteristics of the clean signal. By using the encoder's learned features as a loss function, this framework enforces self-consistency between the clean reference speech and the enhanced model output. Our approach outperforms pre-trained deep feature losses on standard speech enhancement benchmarks, offering better perceptual quality and robust generalization to both in-domain and out-of-domain datasets.
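Below is a minimal PyTorch sketch of the idea described in the abstract, written under my own assumptions rather than taken from the paper: a toy Conv1d encoder-decoder enhancer, a frozen snapshot of its own encoder used as the feature space for the loss, and an L1 feature distance. The layer sizes, the snapshot scheme, and the distance function are all illustrative choices.

```python
# Sketch of "Model as Loss": measure enhanced-vs-clean distance in the
# model's own encoder feature space instead of a handcrafted or
# pre-trained deep feature loss. Assumptions are noted in comments.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F


class Enhancer(nn.Module):
    """Toy encoder-decoder speech enhancer operating on raw waveforms."""

    def __init__(self, channels: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(1, channels, kernel_size=16, stride=8, padding=4),
            nn.ReLU(),
            nn.Conv1d(channels, channels, kernel_size=8, stride=4, padding=2),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose1d(channels, channels, kernel_size=8, stride=4, padding=2),
            nn.ReLU(),
            nn.ConvTranspose1d(channels, 1, kernel_size=16, stride=8, padding=4),
        )

    def forward(self, noisy: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(noisy))


def make_loss_encoder(model: Enhancer) -> nn.Module:
    """Snapshot the model's own encoder and freeze it as the loss network.

    (Assumption: whether and how often this snapshot is refreshed is a
    design choice not specified here.)
    """
    loss_encoder = copy.deepcopy(model.encoder).eval()
    for p in loss_encoder.parameters():
        p.requires_grad_(False)
    return loss_encoder


def model_as_loss(loss_encoder: nn.Module,
                  enhanced: torch.Tensor,
                  clean: torch.Tensor) -> torch.Tensor:
    """Self-consistency loss: distance between enhanced output and clean
    reference, measured in the model's own encoder feature space."""
    return F.l1_loss(loss_encoder(enhanced), loss_encoder(clean))


if __name__ == "__main__":
    model = Enhancer()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_encoder = make_loss_encoder(model)

    # Random tensors stand in for batches of 1-second, 16 kHz waveforms.
    noisy = torch.randn(4, 1, 16000)
    clean = torch.randn(4, 1, 16000)

    enhanced = model(noisy)
    loss = model_as_loss(loss_encoder, enhanced, clean)
    loss.backward()   # gradients reach encoder and decoder via `enhanced`
    optimizer.step()
    print(f"model-as-loss value: {loss.item():.4f}")
```

In this sketch the gradients still update the live encoder and decoder, because they flow back through the enhanced waveform; the frozen snapshot only defines the feature space in which the distance is computed, which is one way to keep the loss from collapsing to trivial features.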

Saisamarth Rajesh Phaye, Milos Cernak, Andrew Harper

Subject: Computing technology; computer technology

Saisamarth Rajesh Phaye, Milos Cernak, Andrew Harper. Model as Loss: A Self-Consistent Training Paradigm [EB/OL]. (2025-05-27) [2025-06-06]. https://arxiv.org/abs/2505.21156.