
Fine-Tuning Large Language Models for Scientific Text Classification: A Comparative Study

Source: arXiv

Abstract

The exponential growth of online textual content across diverse domains has necessitated advanced methods for automated text classification. Large Language Models (LLMs) based on transformer architectures have shown significant success in this area, particularly in natural language processing (NLP) tasks. However, general-purpose LLMs often struggle with domain-specific content, such as scientific texts, due to unique challenges like specialized vocabulary and imbalanced data. In this study, we fine-tune four state-of-the-art LLMs (BERT, SciBERT, BioBERT, and BlueBERT) on three datasets derived from the WoS-46985 dataset to evaluate their performance in scientific text classification. Our experiments reveal that domain-specific models, particularly SciBERT, consistently outperform general-purpose models in both abstract-based and keyword-based classification tasks. Additionally, we compare our results with those reported in the literature for deep learning models, further highlighting the advantages of LLMs, especially when applied to specific domains. The findings emphasize the importance of domain-specific adaptations for LLMs to enhance their effectiveness in specialized text classification tasks.
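The abstract does not describe the training setup in detail; the following is a minimal sketch of how such a fine-tuning run might look with the Hugging Face transformers library. The model checkpoint is SciBERT (one of the four compared models), but the dataset files, label count, and hyperparameters below are illustrative placeholders, not the authors' actual configuration.

# Minimal, illustrative sketch of fine-tuning SciBERT for scientific text
# classification with Hugging Face transformers. The dataset files, label
# count, and hyperparameters are placeholders, not the paper's configuration.
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

MODEL_NAME = "allenai/scibert_scivocab_uncased"  # one of the four compared models
NUM_LABELS = 7  # placeholder: number of subject categories in the corpus

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(
    MODEL_NAME, num_labels=NUM_LABELS
)

# Hypothetical CSV files with a "text" column (abstract or keywords)
# and an integer "label" column.
dataset = load_dataset(
    "csv", data_files={"train": "train.csv", "test": "test.csv"}
)

def tokenize(batch):
    # Truncate/pad each text to a fixed length so examples can be batched.
    return tokenizer(
        batch["text"], truncation=True, padding="max_length", max_length=256
    )

dataset = dataset.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="scibert-text-classification",
    num_train_epochs=3,
    per_device_train_batch_size=16,
    learning_rate=2e-5,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["test"],
)
trainer.train()
print(trainer.evaluate())  # reports evaluation loss and any configured metrics

The same loop would apply to the other checkpoints (BERT, BioBERT, BlueBERT) by swapping MODEL_NAME, which is presumably how the models are compared on a common split.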

Gábor Kertész, Zhyar Rzgar K Rostam

Subjects: Natural science research methods; Information science and information technology

Gábor Kertész, Zhyar Rzgar K Rostam. Fine-Tuning Large Language Models for Scientific Text Classification: A Comparative Study[EB/OL]. (2024-11-27)[2025-08-02]. https://arxiv.org/abs/2412.00098.