AI-Based Speaking Assistant: Supporting Non-Native Speakers' Speaking in Real-Time Multilingual Communication
Non-native speakers (NNSs) often face speaking challenges in real-time multilingual communication, such as struggling to articulate their thoughts. To address this issue, we developed an AI-based speaking assistant (AISA) that provides speaking references for NNSs based on their input queries, task background, and conversation history. To explore NNSs' interaction with AISA and its impact on their speaking during real-time multilingual communication, we conducted a mixed-method study involving a within-subject experiment and follow-up interviews. In the experiment, two native speakers (NSs) and one NNS formed a team (31 teams in total) and completed two collaborative tasks: one with access to AISA and one without. Overall, our study revealed four types of AISA input patterns among NNSs, each reflecting different levels of effort and language preferences. Although AISA did not improve NNSs' speaking competence, follow-up interviews revealed that it helped improve the logical flow and depth of their speech. Moreover, the additional multitasking introduced by AISA, such as entering queries and reviewing system output, potentially elevated NNSs' workload and anxiety. Based on these observations, we discuss the pros and cons of implementing tools to assist NNSs in real-time multilingual communication and offer design recommendations.
Peinuan Qin, Zicheng Zhu, Naomi Yamashita, Yitian Yang, Keita Suga, Yi-Chieh Lee
Computational Linguistics, Computer Science
Peinuan Qin, Zicheng Zhu, Naomi Yamashita, Yitian Yang, Keita Suga, Yi-Chieh Lee. AI-Based Speaking Assistant: Supporting Non-Native Speakers' Speaking in Real-Time Multilingual Communication [EB/OL]. (2025-05-02) [2025-06-19]. https://arxiv.org/abs/2505.01678