
Vision-Integrated High-Quality Neural Speech Coding

Source: arXiv
Abstract

This paper proposes a novel vision-integrated neural speech codec (VNSC), which aims to enhance speech coding quality by leveraging visual modality information. In VNSC, the image analysis-synthesis module extracts visual features from lip images, while the feature fusion module facilitates interaction between the image analysis-synthesis module and the speech coding module, transmitting visual information to assist the speech coding process. Depending on whether visual information is available during the inference stage, the feature fusion module integrates visual features into the speech coding module using either explicit integration or implicit distillation strategies. Experimental results confirm that integrating visual information effectively improves the quality of the decoded speech and enhances the noise robustness of the neural speech codec, without increasing the bitrate.
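The abstract mentions two strategies for feeding visual features into the speech coding module: explicit integration (visual features are fused with the speech latents when lip images are available at inference) and implicit distillation (an audio-only path is trained to mimic the vision-fused path). The sketch below is purely illustrative, not the paper's actual architecture: the fusion by concatenation-plus-projection, the MSE distillation loss, and all dimensions are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def explicit_fusion(speech_latent, visual_feat, w):
    """Illustrative explicit integration: concatenate per-frame speech and
    visual (lip-image) features along the channel axis, then project back
    to the speech latent dimension. Shapes and fusion rule are assumed."""
    fused = np.concatenate([speech_latent, visual_feat], axis=-1)  # (T, Ds+Dv)
    return fused @ w                                               # (T, Ds)

def distillation_loss(student_latent, teacher_latent):
    """Illustrative implicit distillation objective: MSE pushing the
    audio-only (student) latents toward the vision-fused (teacher) latents,
    so no visual input is needed at inference time."""
    return float(np.mean((student_latent - teacher_latent) ** 2))

# Hypothetical sizes: T frames, speech latent dim Ds, visual feature dim Dv.
T, Ds, Dv = 50, 64, 32
speech = rng.standard_normal((T, Ds))
visual = rng.standard_normal((T, Dv))
w = rng.standard_normal((Ds + Dv, Ds)) * 0.01

teacher = explicit_fusion(speech, visual, w)
print(teacher.shape)  # (50, 64): fused latents keep the speech latent shape
```

Because the fused latents keep the same shape as the speech latents, neither strategy changes what is quantized and transmitted, which is consistent with the abstract's claim that the bitrate does not increase.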

Yao Guo, Yang Ai, Rui-Chen Zheng, Hui-Peng Du, Xiao-Hang Jiang, Zhen-Hua Ling

Subject: Communications; Wireless Communications

Yao Guo, Yang Ai, Rui-Chen Zheng, Hui-Peng Du, Xiao-Hang Jiang, Zhen-Hua Ling. Vision-Integrated High-Quality Neural Speech Coding [EB/OL]. (2025-05-29) [2025-06-27]. https://arxiv.org/abs/2505.23379.
