国家预印本平台 (National Preprint Platform)

中文命名实体识别

Chinese Named Entity Recognition

Abstract

Chinese named entity recognition research currently suffers from inadequate and incomplete semantic feature extraction. BERT (Bidirectional Encoder Representations from Transformers) has shown striking improvements on a variety of related NLP tasks, and successive variants have been proposed to further improve the performance of pre-trained language models. In this paper, we revisit Chinese pre-trained language models to examine their effectiveness in a non-English language. We fine-tune the RoBERTa model, and experimental results show good performance on many NLP tasks.
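The paper itself includes no code, but the fine-tuning approach it describes typically treats Chinese NER as character-level sequence labeling with BIO tags, which the model predicts and a decoder then converts into entity spans. The sketch below illustrates only that last, model-independent step; the function name, tag set, and example sentence are illustrative assumptions, not taken from the paper.

```python
def bio_to_spans(chars, tags):
    """Decode character-level BIO tags into (entity_type, start, end, text) spans.

    `chars` is a list of characters, `tags` a parallel list of strings such as
    "B-PER", "I-PER", "O". `end` is exclusive. This is an illustrative helper,
    not code from the paper.
    """
    spans = []
    start, etype = None, None
    for i, tag in enumerate(tags):
        if tag.startswith("B-"):
            # A new entity begins; close any entity still open.
            if start is not None:
                spans.append((etype, start, i, "".join(chars[start:i])))
            start, etype = i, tag[2:]
        elif tag.startswith("I-") and start is not None and tag[2:] == etype:
            # Continuation of the current entity.
            continue
        else:
            # "O" tag or an inconsistent "I-" tag ends the current entity.
            if start is not None:
                spans.append((etype, start, i, "".join(chars[start:i])))
            start, etype = None, None
    if start is not None:
        # Close an entity that runs to the end of the sentence.
        spans.append((etype, start, len(tags), "".join(chars[start:])))
    return spans

# Example: "王小明在北京" ("Wang Xiaoming is in Beijing")
chars = list("王小明在北京")
tags = ["B-PER", "I-PER", "I-PER", "O", "B-LOC", "I-LOC"]
print(bio_to_spans(chars, tags))
# → [('PER', 0, 3, '王小明'), ('LOC', 4, 6, '北京')]
```

In a full pipeline, the per-character tags would come from the classification head of the fine-tuned pre-trained model rather than being hard-coded as above.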

10.12074/202401.00105V1

Computing and Computer Technology

Named entity recognition; pre-trained models; fine-tuning

中文命名实体识别[EB/OL]. (2024-01-07)[2025-08-02]. https://chinaxiv.org/abs/202401.00105.
