Assessing Thai Dialect Performance in LLMs with Automatic Benchmarks and Human Evaluation
Large language models (LLMs) show promising results in various NLP tasks. Despite these successes, the robustness and consistency of LLMs in underrepresented languages remain largely unexplored, especially concerning local dialects. Existing benchmarks also focus on main dialects, neglecting LLMs' abilities on local dialect texts. In this paper, we introduce a Thai local dialect benchmark covering Northern (Lanna), Northeastern (Isan), and Southern (Dambro) Thai, evaluating LLMs on five NLP tasks: summarization, question answering, translation, conversation, and food-related tasks. Furthermore, we propose a human evaluation guideline and metric for Thai local dialects to assess generation fluency and dialect-specific accuracy. Results show that LLM performance declines significantly in local Thai dialects compared to standard Thai, with only proprietary models like GPT-4o and Gemini2 demonstrating some fluency in these dialects.
Peerat Limkonchotiwat, Kanruethai Masuk, Surapon Nonesung, Chalermpun Mai-On, Sarana Nutanong, Wuttikorn Ponwitayarat, Potsawee Manakul
Austroasiatic (Austro-Asiatic) language family
Peerat Limkonchotiwat, Kanruethai Masuk, Surapon Nonesung, Chalermpun Mai-On, Sarana Nutanong, Wuttikorn Ponwitayarat, Potsawee Manakul. Assessing Thai Dialect Performance in LLMs with Automatic Benchmarks and Human Evaluation [EB/OL]. (2025-04-08) [2025-04-26]. https://arxiv.org/abs/2504.05898.