Positional Attention for Efficient BERT-Based Named Entity Recognition
This paper presents a framework for Named Entity Recognition (NER) that leverages the Bidirectional Encoder Representations from Transformers (BERT) model. NER is a fundamental natural language processing (NLP) task with broad applicability in downstream applications. While BERT has established itself as a state-of-the-art model for entity recognition, fine-tuning it from scratch for each new application is computationally expensive and time-consuming. To address this, we propose a cost-efficient approach that integrates a positional attention mechanism into the entity recognition process and enables effective customization using pre-trained parameters. The framework is evaluated on a Kaggle dataset derived from the Groningen Meaning Bank corpus and achieves strong performance with fewer training epochs. This work offers a practical solution for reducing the training cost of BERT-based NER systems while maintaining high accuracy.
Mo Sun, Siheng Xiong, Yuankai Cai, Bowen Zuo
Computing Technology; Computer Technology
Mo Sun, Siheng Xiong, Yuankai Cai, Bowen Zuo. Positional Attention for Efficient BERT-Based Named Entity Recognition [EB/OL]. (2025-05-03) [2025-06-03]. https://arxiv.org/abs/2505.01868.