DPDANet: An Improved DPCNN Model for Text Classification with Dense Connections and a Self-Attention Mechanism
Abstract
[Objective] To meet the demand for efficient sentiment analysis of large-scale review data, this study proposes the DPDANet model to improve text classification performance. [Methods] The BERT-based DPDANet incorporates dense connections and a self-attention mechanism. By refining the inter-layer connection strategy of the DPCNN architecture, it enhances feature propagation and information reuse, thereby exploiting shallow features more efficiently while reducing computational complexity. [Results] Comparative experiments were conducted between DPDANet and eight BERT-based models: TextCNN, CNN-LSTM, DPCNN, DPCNN-BiGRU, Transformer, XLSTM, BERT, and DPDBNet. On four text classification datasets, DPDANet achieved accuracy scores of 0.6679, 0.9307, 0.9278, and 0.6242, improvements of 6.47%, 1.32%, 0.72%, and 3.52%, respectively, over the baseline DPCNN model. [Limitations] The model still shows limited generalization in scenarios involving extremely short texts and imbalanced multi-class distributions. [Conclusions] DPDANet demonstrates superior performance and efficiency across a range of text classification tasks, indicating strong potential for practical application.
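Since the abstract describes the architecture only at a high level (BERT embeddings feeding dense-connected DPCNN-style blocks topped with self-attention), the following is a minimal, hypothetical PyTorch sketch of that pipeline. The module names (DenseConvBlock, DPDANetSketch), the 1x1 fusion convolution, and all hyperparameters are illustrative assumptions, not the authors' published implementation.

```python
import torch
import torch.nn as nn


class DenseConvBlock(nn.Module):
    """One DPCNN-style region block with a dense (concatenative) shortcut.

    Hypothetical reconstruction: the abstract states only that DPDANet
    refines DPCNN's inter-layer connections to improve feature reuse.
    """

    def __init__(self, channels: int):
        super().__init__()
        self.convs = nn.Sequential(
            nn.ReLU(),
            nn.Conv1d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv1d(channels, channels, kernel_size=3, padding=1),
        )
        # 1x1 conv fuses the concatenated [input, output] back to `channels`
        self.fuse = nn.Conv1d(2 * channels, channels, kernel_size=1)
        self.pool = nn.MaxPool1d(kernel_size=3, stride=2, padding=1)

    def forward(self, x):  # x: (batch, channels, seq_len)
        out = self.convs(x)
        out = self.fuse(torch.cat([x, out], dim=1))  # dense connection
        return self.pool(out)  # downsample, as in DPCNN


class DPDANetSketch(nn.Module):
    """Minimal sketch of the pipeline the abstract implies:
    BERT embeddings -> dense-connected conv blocks -> self-attention -> classifier.
    All hyperparameters here are assumed for illustration.
    """

    def __init__(self, hidden=768, channels=256, num_blocks=3, num_classes=2):
        super().__init__()
        self.region = nn.Conv1d(hidden, channels, kernel_size=3, padding=1)
        self.blocks = nn.ModuleList(DenseConvBlock(channels) for _ in range(num_blocks))
        self.attn = nn.MultiheadAttention(channels, num_heads=4, batch_first=True)
        self.cls = nn.Linear(channels, num_classes)

    def forward(self, bert_hidden):  # (batch, seq_len, hidden) from BERT's last layer
        x = self.region(bert_hidden.transpose(1, 2))
        for block in self.blocks:
            x = block(x)
        x = x.transpose(1, 2)          # (batch, seq_len', channels)
        x, _ = self.attn(x, x, x)      # self-attention over pooled features
        return self.cls(x.mean(dim=1))  # mean-pool, then classify


# Usage with dummy BERT outputs: batch of 8, 128 tokens, 768-dim hidden states.
logits = DPDANetSketch()(torch.randn(8, 128, 768))
```

Under these assumptions, the dense shortcut concatenates rather than adds the block input, so shallow features stay directly visible to later layers, which is one plausible reading of the abstract's claim of improved feature propagation and reuse.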