A Study on Speech Assessment with Visual Cues
Non-intrusive assessment of speech quality and intelligibility is essential when clean reference signals are unavailable. In this work, we propose a multimodal framework that integrates audio features and visual cues to predict PESQ and STOI scores. It employs a dual-branch architecture, where spectral features are extracted using STFT, and visual embeddings are obtained via a visual encoder. These features are then fused and processed by a CNN-BLSTM with attention, followed by multi-task learning to simultaneously predict PESQ and STOI. Evaluations on the LRS3-TED dataset, augmented with noise from the DEMAND corpus, show that our model outperforms the audio-only baseline. Under seen noise conditions, it improves LCC by 9.61% (0.8397->0.9205) for PESQ and 11.47% (0.7403->0.8253) for STOI. These results highlight the effectiveness of incorporating visual cues in enhancing the accuracy of non-intrusive speech assessment.
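The abstract only outlines the architecture, so the following is a minimal PyTorch sketch of a dual-branch, multi-task design of the kind described: an STFT-based audio branch, a projection of precomputed visual-encoder embeddings, concatenation fusion, a BLSTM with self-attention over time, and separate PESQ/STOI heads. All layer sizes, the fusion strategy, and the alignment of audio and visual frames are assumptions for illustration, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class MultimodalAssessor(nn.Module):
    """Sketch of a dual-branch CNN-BLSTM-attention model with multi-task heads."""
    def __init__(self, n_fft=512, visual_dim=512, hidden=128):
        super().__init__()
        self.n_fft = n_fft
        # Audio branch: small CNN over the STFT magnitude spectrogram.
        self.audio_cnn = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((None, 1)),  # pool frequency axis, keep time
        )
        # Visual branch: project per-frame embeddings from an external visual encoder.
        self.visual_proj = nn.Linear(visual_dim, 32)
        # Fusion + temporal modeling: BLSTM over concatenated features, then self-attention.
        self.blstm = nn.LSTM(64, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.MultiheadAttention(2 * hidden, num_heads=4, batch_first=True)
        # Multi-task heads: one utterance-level score each for PESQ and STOI.
        self.pesq_head = nn.Linear(2 * hidden, 1)
        self.stoi_head = nn.Linear(2 * hidden, 1)

    def forward(self, waveform, visual_emb):
        # waveform: (B, samples); visual_emb: (B, T_v, visual_dim)
        window = torch.hann_window(self.n_fft, device=waveform.device)
        spec = torch.stft(waveform, n_fft=self.n_fft, hop_length=self.n_fft // 2,
                          window=window, return_complex=True).abs()  # (B, F, T)
        a = self.audio_cnn(spec.unsqueeze(1).transpose(2, 3))        # (B, 32, T, 1)
        a = a.squeeze(-1).transpose(1, 2)                            # (B, T, 32)
        v = self.visual_proj(visual_emb)                             # (B, T_v, 32)
        t = min(a.size(1), v.size(1))                                # crude temporal alignment
        fused = torch.cat([a[:, :t], v[:, :t]], dim=-1)              # (B, t, 64)
        h, _ = self.blstm(fused)                                     # (B, t, 2*hidden)
        h, _ = self.attn(h, h, h)                                    # self-attention over time
        h = h.mean(dim=1)                                            # utterance-level pooling
        return self.pesq_head(h).squeeze(-1), self.stoi_head(h).squeeze(-1)
```

In practice the video frame rate (e.g., 25 fps) differs from the STFT frame rate, so the visual embeddings would need to be upsampled or otherwise time-aligned before fusion; the sketch simply truncates both streams to the shorter sequence. The two heads would typically be trained jointly with a weighted sum of per-task regression losses (e.g., MSE on PESQ and STOI targets).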
Shafique Ahmed, Ryandhimas E. Zezario, Nasir Saleem, Amir Hussain, Hsin-Min Wang, Yu Tsao
Shafique Ahmed, Ryandhimas E. Zezario, Nasir Saleem, Amir Hussain, Hsin-Min Wang, Yu Tsao. A Study on Speech Assessment with Visual Cues [EB/OL]. (2025-06-11) [2025-06-27]. https://arxiv.org/abs/2506.09549