Evaluating Speech-to-Text x LLM x Text-to-Speech Combinations for AI Interview Systems
Voice-based conversational AI systems increasingly rely on cascaded architectures combining speech-to-text (STT), large language models (LLMs), and text-to-speech (TTS) components. However, systematic evaluation of different component combinations in production settings remains understudied. We present a large-scale empirical comparison of STT x LLM x TTS stacks using data from over 300,000 AI-conducted job interviews. We develop an automated evaluation framework using LLM-as-a-Judge to assess conversational quality, technical accuracy, and skill assessment capabilities. Our analysis of four production configurations reveals that Google STT paired with GPT-4.1 significantly outperforms alternatives in both conversational and technical quality metrics. Surprisingly, we find that objective quality metrics correlate weakly with user satisfaction scores, suggesting that user experience in voice-based AI systems depends on factors beyond technical performance. Our findings provide practical guidance for selecting components in multimodal conversational AI systems and contribute a validated evaluation methodology for voice-based interactions.
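The LLM-as-a-Judge framework described above rates each STT x LLM x TTS configuration along three axes: conversational quality, technical accuracy, and skill assessment. The paper does not publish its scoring code, so the following is only a minimal sketch of how per-configuration judge scores might be aggregated; the `JudgeScore` type, the dimension names, and the 1-5 rating scale are illustrative assumptions, not the authors' implementation.

```python
from dataclasses import dataclass
from statistics import mean

# Hypothetical rubric dimensions mirroring the paper's three evaluation axes.
RUBRIC = ("conversational_quality", "technical_accuracy", "skill_assessment")


@dataclass
class JudgeScore:
    """One judge-model rating of a single interview transcript."""
    config: str    # e.g. "google-stt + gpt-4.1 + <tts>" (illustrative label)
    scores: dict   # rubric dimension -> assumed 1-5 rating from the judge

def aggregate(judgments):
    """Average each rubric dimension per STT x LLM x TTS configuration."""
    by_config = {}
    for j in judgments:
        by_config.setdefault(j.config, []).append(j.scores)
    return {
        cfg: {dim: mean(s[dim] for s in score_list) for dim in RUBRIC}
        for cfg, score_list in by_config.items()
    }
```

Averaging per configuration rather than per interview is one plausible design choice here; a production comparison at the paper's scale (300,000+ interviews) would also need confidence intervals or significance tests before declaring one stack the winner.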
Seyed Shahabeddin Mousavi, Nima Yazdani, Ali Ansari, Aruj Mahajan, Amirhossein Afsharrad
Communications; Wireless Communications; Computing Technology; Computer Technology; Electronic Technology Applications
Seyed Shahabeddin Mousavi, Nima Yazdani, Ali Ansari, Aruj Mahajan, Amirhossein Afsharrad. Evaluating Speech-to-Text x LLM x Text-to-Speech Combinations for AI Interview Systems [EB/OL]. (2025-07-15) [2025-08-10]. https://arxiv.org/abs/2507.16835.