
GatorTron: A Large Language Model for Clinical Natural Language Processing


Source: medRxiv
Abstract

Objective: To develop a large pretrained clinical language model from scratch using the transformer architecture, and to systematically examine how transformer models of different sizes help 5 clinical natural language processing (NLP) tasks at different linguistic levels.

Methods: We created a large corpus with >90 billion words from clinical narratives (>82 billion words), scientific literature (6 billion words), and general English text (2.5 billion words). We developed GatorTron models from scratch using the BERT architecture at different sizes, including 345 million, 3.9 billion, and 8.9 billion parameters. We compared GatorTron with three existing transformer models in the clinical and biomedical domain on 5 clinical NLP tasks, including clinical concept extraction, relation extraction, semantic textual similarity, natural language inference, and medical question answering, to examine how large transformer models help clinical NLP at different linguistic levels.

Results and Conclusion: GatorTron scaled up transformer-based clinical language models to 8.9 billion parameters and achieved state-of-the-art performance on 5 clinical NLP tasks at different linguistic levels, targeting various healthcare information documented in unstructured electronic health records (EHRs). The proposed GatorTron models performed remarkably better on more complex clinical NLP tasks such as natural language inference (9.6% and 7.5% improvements) and question answering (9.5% and 7.77% improvements) than existing smaller clinical transformer models (i.e., BioBERT and ClinicalBERT), demonstrating the potential of large transformer-based clinical models for advanced medical artificial intelligence (AI) applications such as question answering.
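
For readers who want to experiment with the released checkpoints, the following minimal sketch shows how a BERT-style clinical encoder such as GatorTron might be loaded and framed as a token-classification model for clinical concept extraction (the first of the 5 tasks above). It uses the Hugging Face transformers API; the model identifier UFNLP/gatortron-base, the 7-label BIO scheme, and the example sentence are illustrative assumptions rather than details from the paper, and the classification head remains untrained until fine-tuned on annotated clinical notes.

```python
# Minimal sketch (not the authors' training code): load a BERT-style clinical
# encoder and attach a token-classification head for clinical concept extraction.
# "UFNLP/gatortron-base" is an assumed public checkpoint ID; the label scheme
# (BIO tags for problem/treatment/test plus "O" = 7 labels) is illustrative.
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

model_id = "UFNLP/gatortron-base"  # assumption: released 345M-parameter checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForTokenClassification.from_pretrained(model_id, num_labels=7)

text = "Patient denies chest pain but reports shortness of breath."
inputs = tokenizer(text, return_tensors="pt")

# The classification head here is randomly initialized; in practice it would be
# fine-tuned on annotated clinical notes (e.g., an i2b2-style NER corpus) before
# the predictions below become meaningful.
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, seq_len, num_labels)

pred_ids = logits.argmax(dim=-1).squeeze(0)
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"].squeeze(0))
for token, label_id in zip(tokens, pred_ids.tolist()):
    print(f"{token}\t{label_id}")
```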

Shin Hoo Chang, Smith Kaleb E, Compas Colin, Flores Mona G, Zhang Ying, Hogan William R, Bian Jiang, Wu Yonghui, Magoc Tanja, Lipori Gloria, Shenkman Elizabeth A, PourNejatian Nima, Parisien Christopher, Harle Christopher A, Yang Xi, Mitchell Duane A, Martin Cheryl

NVIDIA; NVIDIA; NVIDIA; NVIDIA; Research Computing, University of Florida; Department of Health Outcomes and Biomedical Informatics, College of Medicine, University of Florida; Department of Health Outcomes and Biomedical Informatics, College of Medicine, University of Florida || Cancer Informatics and eHealth core, University of Florida Health Cancer Center; Department of Health Outcomes and Biomedical Informatics, College of Medicine, University of Florida || Cancer Informatics and eHealth core, University of Florida Health Cancer Center; Integrated Data Repository Research Services, University of Florida; Integrated Data Repository Research Services, University of Florida || University of Florida Health and Shands Hospital; Department of Health Outcomes and Biomedical Informatics, College of Medicine, University of Florida; NVIDIA; NVIDIA; Department of Health Outcomes and Biomedical Informatics, College of Medicine, University of Florida || Integrated Data Repository Research Services, University of Florida; Department of Health Outcomes and Biomedical Informatics, College of Medicine, University of Florida || Cancer Informatics and eHealth core, University of Florida Health Cancer Center; Lillian S. Wells Department of Neurosurgery, UF Clinical and Translational Science Institute, University of Florida; NVIDIA

10.1101/2022.02.27.22271257

Medical research methods; Linguistics; Clinical medicine

Natural Language Processing; Transformer Model; Deep Learning; Electronic Health Records

Shin Hoo Chang, Smith Kaleb E, Compas Colin, Flores Mona G, Zhang Ying, Hogan William R, Bian Jiang, Wu Yonghui, Magoc Tanja, Lipori Gloria, Shenkman Elizabeth A, PourNejatian Nima, Parisien Christopher, Harle Christopher A, Yang Xi, Mitchell Duane A, Martin Cheryl. GatorTron: A Large Language Model for Clinical Natural Language Processing [EB/OL]. (2025-03-28) [2025-06-13]. https://www.medrxiv.org/content/10.1101/2022.02.27.22271257.
