Aligning Generative Speech Enhancement with Human Preferences via Direct Preference Optimization

Source: arXiv
English Abstract

This work investigates speech enhancement (SE) from the perspective of language models (LMs). We propose a novel method that leverages Direct Preference Optimization (DPO) to improve the perceptual quality of enhanced speech. Using UTMOS, a neural MOS prediction model, as a proxy for human ratings, our approach guides optimization toward perceptually preferred outputs. This differs from existing LM-based SE methods that focus on maximizing the likelihood of clean speech tokens, which may misalign with human perception and degrade quality despite low prediction error. Experiments on the 2020 Deep Noise Suppression Challenge test sets demonstrate that applying DPO to a pretrained LM-based SE model yields consistent improvements across various speech quality metrics, with relative gains of up to 56%. To our knowledge, this is the first application of DPO to SE and the first to incorporate proxy perceptual feedback into LM-based SE training, pointing to a promising direction for perceptually aligned SE.
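The abstract does not spell out the training objective, so the following is only a minimal, generic sketch of a DPO loss in which preference pairs are assumed to be ranked by a MOS proxy such as UTMOS; the function names, shapes, and hyperparameters are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only: a generic DPO objective where "preferred" vs.
# "dispreferred" enhanced-speech token sequences are assumed to be chosen by a
# MOS proxy (e.g., offline UTMOS scores). Names here are assumptions, not the
# paper's code.
import torch
import torch.nn.functional as F


def dpo_loss(policy_logp_w, policy_logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """Standard DPO loss (Rafailov et al., 2023).

    Each argument is the summed log-probability of a full token sequence:
      *_w -- the sequence the MOS proxy rated higher (preferred)
      *_l -- the sequence the MOS proxy rated lower (dispreferred)
    under either the trainable policy LM or the frozen reference LM
    (here assumed to be the pretrained LM-based SE model).
    """
    # Log-ratio of policy to reference for each candidate.
    log_ratio_w = policy_logp_w - ref_logp_w
    log_ratio_l = policy_logp_l - ref_logp_l
    # Push the policy to widen the margin between preferred and dispreferred.
    return -F.logsigmoid(beta * (log_ratio_w - log_ratio_l)).mean()


# Toy usage with random per-sequence log-probabilities for a batch of 4 pairs.
if __name__ == "__main__":
    torch.manual_seed(0)
    policy_w, policy_l = torch.randn(4), torch.randn(4)
    ref_w, ref_l = torch.randn(4), torch.randn(4)
    print(dpo_loss(policy_w, policy_l, ref_w, ref_l))
```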

Haoyang Li, Nana Hou, Yuchen Hu, Jixun Yao, Sabato Marco Siniscalchi, Eng Siong Chng

Computing technology; computer technology

Haoyang Li, Nana Hou, Yuchen Hu, Jixun Yao, Sabato Marco Siniscalchi, Eng Siong Chng. Aligning Generative Speech Enhancement with Human Preferences via Direct Preference Optimization[EB/OL]. (2025-07-14)[2025-08-02]. https://arxiv.org/abs/2507.09929.
