National Preprint Platform

Understanding by Understanding Not: Modeling Negation in Language Models

Source: arXiv
Abstract

Negation is a core construction in natural language. Despite being very successful on many tasks, state-of-the-art pre-trained language models often handle negation incorrectly. To improve language models in this regard, we propose to augment the language modeling objective with an unlikelihood objective that is based on negated generic sentences from a raw text corpus. By training BERT with the resulting combined objective we reduce the mean top-1 error rate to 4% on the negated LAMA dataset. We also see some improvements on the negated NLI benchmarks.
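The combined objective described above pairs the usual likelihood term with an unlikelihood term that pushes probability mass away from completions that a negated sentence rules out. A minimal NumPy sketch of that idea is below; the function name `combined_loss`, the `alpha` weighting, and the per-example `is_negated` flag are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def combined_loss(logits, targets, is_negated, alpha=1.0):
    """Hypothetical combined objective over a batch of masked-token predictions.

    logits:     (batch, vocab) unnormalized scores
    targets:    (batch,) index of the target token for each example
    is_negated: (batch,) bool, True if the sentence is a negated variant
    alpha:      assumed weight on the unlikelihood term
    """
    # Softmax over the vocabulary (shifted for numerical stability).
    z = logits - logits.max(axis=1, keepdims=True)
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    p_target = probs[np.arange(len(targets)), targets]

    # Likelihood term for ordinary sentences: -log p(target).
    ll = -np.log(np.clip(p_target, 1e-6, None))
    # Unlikelihood term for negated sentences: -log(1 - p(target)),
    # which grows as the model keeps assigning the now-wrong token.
    ul = -np.log(np.clip(1.0 - p_target, 1e-6, None))

    neg = is_negated.astype(float)
    return ((1.0 - neg) * ll + alpha * neg * ul).mean()
```

For a negated example, the loss rises the more probability the model places on the original completion, so gradient descent actively suppresses it rather than merely failing to reward it.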

R Devon Hjelm, Dzmitry Bahdanau, Aaron Courville, Alessandro Sordoni, Arian Hosseini, Siva Reddy

Linguistics

R Devon Hjelm, Dzmitry Bahdanau, Aaron Courville, Alessandro Sordoni, Arian Hosseini, Siva Reddy. Understanding by Understanding Not: Modeling Negation in Language Models [EB/OL]. (2021-05-07) [2025-08-02]. https://arxiv.org/abs/2105.03519.
