
Evaluating Code-Mixing in LLMs Across 18 Languages

Source: arXiv
English Abstract

Code-mixing, the practice of switching between languages within a conversation, presents unique challenges for traditional natural language processing. Existing benchmarks, such as LinCE and GLUECoS, are limited by narrow language pairings and tasks, failing to adequately evaluate the code-mixing capabilities of large language models (LLMs). Despite the significance of code-mixing for multilingual users, research on LLMs in this context remains limited. Additionally, current methods for generating code-mixed data are underdeveloped. In this paper, we conduct a comprehensive evaluation of LLMs' performance on code-mixed data across 18 languages from seven language families. We also propose a novel approach for generating synthetic code-mixed texts by combining word substitution with GPT-4 prompting. Our analysis reveals consistent underperformance of LLMs on code-mixed datasets involving multiple language families. We suggest that improvements in training data size, model scale, and few-shot learning could enhance their performance.
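As a rough illustration of the general idea described above (dictionary-based word substitution followed by LLM prompting), the sketch below is a hypothetical example and not the authors' pipeline; the lexicon, function names, and prompt wording are all invented for demonstration.

# Minimal sketch (assumption, not the paper's method): substitute a fraction of
# words using a toy English-Hindi lexicon, then build a prompt asking an LLM
# such as GPT-4 to smooth the raw substitution into natural code-mixed text.
import random

# Hypothetical bilingual lexicon (English -> romanized Hindi); a real pipeline
# would use an aligned dictionary or word-level translation.
LEXICON = {
    "food": "khana",
    "really": "bahut",
    "good": "accha",
    "today": "aaj",
}

def substitute_words(sentence: str, ratio: float = 0.5, seed: int = 0) -> str:
    """Replace a fraction of translatable words with their other-language forms."""
    rng = random.Random(seed)
    out = []
    for tok in sentence.split():
        key = tok.lower().strip(".,!?")
        if key in LEXICON and rng.random() < ratio:
            out.append(LEXICON[key])
        else:
            out.append(tok)
    return " ".join(out)

def build_fluency_prompt(mixed: str) -> str:
    """Prompt an LLM to rewrite the raw substitution as natural code-mixed text."""
    return (
        "Rewrite the following sentence as natural Hindi-English code-mixed text, "
        "keeping the same meaning:\n" + mixed
    )

if __name__ == "__main__":
    raw = substitute_words("The food today is really good.")
    print(raw)                       # raw word-substituted sentence
    print(build_fluency_prompt(raw)) # prompt that would be sent to the LLM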

Yilun Yang, Yekun Chai

Linguistics

Yilun Yang, Yekun Chai. Evaluating Code-Mixing in LLMs Across 18 Languages [EB/OL]. (2025-07-24) [2025-08-10]. https://arxiv.org/abs/2507.18791.
