National Preprint Platform

Beyond Memorization: Assessing Semantic Generalization in Large Language Models Using Phrasal Constructions


Source: arXiv

English Abstract

The web-scale of pretraining data has created an important evaluation challenge: to disentangle linguistic competence on cases well-represented in pretraining data from generalization to out-of-domain language, specifically the dynamic, real-world instances less common in pretraining data. To this end, we construct a diagnostic evaluation to systematically assess natural language understanding in LLMs by leveraging Construction Grammar (CxG). CxG provides a psycholinguistically grounded framework for testing generalization, as it explicitly links syntactic forms to abstract, non-lexical meanings. Our novel inference evaluation dataset consists of English phrasal constructions, for which speakers are known to be able to abstract over commonplace instantiations in order to understand and produce creative instantiations. Our evaluation dataset uses CxG to evaluate two central questions: first, if models can 'understand' the semantics of sentences for instances that are likely to appear in pretraining data less often, but are intuitive and easy for people to understand. Second, if LLMs can deploy the appropriate constructional semantics given constructions that are syntactically identical but with divergent meanings. Our results demonstrate that state-of-the-art models, including GPT-o1, exhibit a performance drop of over 40% on our second task, revealing a failure to generalize over syntactically identical forms to arrive at distinct constructional meanings in the way humans do. We make our novel dataset and associated experimental data, including prompts and model responses, publicly available.

Wesley Scivetti, Melissa Torgbi, Austin Blodgett, Mollie Shichman, Taylor Hudson, Claire Bonial, Harish Tayyar Madabushi

Linguistics

Wesley Scivetti, Melissa Torgbi, Austin Blodgett, Mollie Shichman, Taylor Hudson, Claire Bonial, Harish Tayyar Madabushi. Beyond Memorization: Assessing Semantic Generalization in Large Language Models Using Phrasal Constructions [EB/OL]. (2025-08-13) [2025-08-31]. https://arxiv.org/abs/2501.04661.
