Evaluating Large Language Models on Multiword Expressions in Multilingual and Code-Switched Contexts
Multiword expressions, characterised by non-compositional meanings and syntactic irregularities, are an example of nuanced language. These expressions can be used literally or idiomatically, leading to significant changes in meaning. While large language models have demonstrated strong performance across many tasks, their ability to handle such linguistic subtleties remains uncertain. This study therefore evaluates how state-of-the-art language models process the ambiguity of potentially idiomatic multiword expressions, particularly in less frequent contexts, where models are less likely to rely on memorisation. By evaluating models in Portuguese and Galician in addition to English, and by using a novel code-switched dataset and a novel task, we find that large language models, despite their strengths, struggle with nuanced language. In particular, the latest models, including GPT-4, fail to outperform the xlm-roBERTa-base baselines on both detection and semantic tasks, with especially poor performance on the novel tasks we introduce, despite their similarity to existing tasks. Overall, our results demonstrate that multiword expressions, especially ambiguous ones, continue to pose a challenge to models.
Frances Laureano De Leon, Harish Tayyar Madabushi, Mark G. Lee
Linguistics; Indo-European languages
Frances Laureano De Leon, Harish Tayyar Madabushi, Mark G. Lee. Evaluating Large Language Models on Multiword Expressions in Multilingual and Code-Switched Contexts [EB/OL]. (2025-04-10) [2025-06-29]. https://arxiv.org/abs/2504.20051.