
Combating Hallucinations: Application Strategies of Large Language Models in Vertical Domains - A Case Study in the Field of Traditional Chinese Medicine Knowledge Q&A

Abstract

[Objective] This paper takes the Traditional Chinese Medicine (TCM) knowledge Q&A domain as a case study to analyze how unstructured knowledge, exemplified by knowledge base resources, and structured knowledge, exemplified by knowledge graph resources, differ in helping large language models combat hallucinations, and on this basis to explore strategies for improving the hallucination resistance of large language models in vertical domains. [Methods] Experiments were designed that combine external knowledge with prompt engineering to compare the prompting effects of knowledge base resources and knowledge graph resources in the TCM Q&A domain, and to examine the advantages of the dynamic triplet strategy and the integrated fine-tuning strategy for optimizing large language models against hallucinations. [Results] The experimental results show that, compared with prompts built from unstructured knowledge base content, prompts built from structured knowledge graph content perform better in accuracy, recall, and F1 score, exceeding the knowledge base prompts by 1.9%, 2.42%, and 2.2% and reaching 71.44%, 60.76%, and 65.31%, respectively. Further analysis of optimization strategies shows that the dynamic triplet strategy combined with fine-tuning is most effective against hallucinations, achieving accuracy, recall, and F1 scores of 72.47%, 65.87%, and 68.62%, respectively. [Limitations] The study covers a single domain: it has been tested only on TCM Q&A, and its generalizability still needs to be validated across a broader range of research fields. [Conclusions] This study demonstrates that, in the TCM domain, structured knowledge from knowledge graphs outperforms traditional unstructured knowledge in reducing hallucinations and improving the accuracy of model responses, revealing the key role of structured knowledge in enhancing model comprehension. The combined use of fine-tuning strategies and knowledge resources offers an effective path to performance improvement for large language models. The paper provides theoretical grounding and methodological support for integrating external knowledge into large language models to improve knowledge services.
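
To make the method described in the abstract concrete, the sketch below illustrates the general idea of knowledge-augmented prompting in a TCM Q&A setting: for a given question, either unstructured passages from a knowledge base or structured (head, relation, tail) triples from a knowledge graph are retrieved and placed into the prompt ahead of the question, and a dynamic-triple variant would additionally rank and filter the triples per question. This is only a minimal illustration, not the authors' implementation: the retrieval functions, the example facts about Huanglian, the prompt wording, and the truncation that stands in for dynamic triple selection are all hypothetical.

```python
# Minimal sketch of knowledge-augmented prompting for TCM Q&A.
# Not the paper's implementation: the retrieval functions, the example
# facts about Huanglian (Coptis), and the prompt wording are placeholders.

from typing import List, Tuple


def retrieve_passages(question: str) -> List[str]:
    """Hypothetical unstructured retrieval from a TCM knowledge base."""
    return [
        "Huanglian (Coptis rhizome) is bitter and cold; it clears heat, "
        "dries dampness, drains fire, and relieves toxicity."
    ]


def retrieve_triples(question: str) -> List[Tuple[str, str, str]]:
    """Hypothetical structured retrieval from a TCM knowledge graph."""
    return [
        ("Huanglian", "effect", "clears heat and dries dampness"),
        ("Huanglian", "effect", "drains fire and relieves toxicity"),
        ("Huanglian", "nature and flavor", "bitter, cold"),
    ]


def build_prompt_unstructured(question: str) -> str:
    """Knowledge-base prompting: paste retrieved passages before the question."""
    context = "\n".join(retrieve_passages(question))
    return (
        "Answer the question using only the reference material below; "
        "do not invent facts.\n"
        f"Reference material:\n{context}\n"
        f"Question: {question}\nAnswer:"
    )


def build_prompt_structured(question: str, max_triples: int = 3) -> str:
    """Knowledge-graph prompting: serialize (head, relation, tail) triples.

    A dynamic-triple variant would rank and filter triples per question;
    truncation to max_triples stands in for that step here.
    """
    triples = retrieve_triples(question)[:max_triples]
    facts = "\n".join(f"({h}, {r}, {t})" for h, r, t in triples)
    return (
        "Answer the question using only the knowledge triples below; "
        "do not invent facts.\n"
        f"Knowledge triples:\n{facts}\n"
        f"Question: {question}\nAnswer:"
    )


if __name__ == "__main__":
    q = "What are the main effects of Huanglian?"
    print(build_prompt_unstructured(q))
    print("---")
    print(build_prompt_structured(q))
```

In a setup like the one the abstract describes, the two kinds of prompts would be sent to the same large language model and the resulting answers scored against reference answers to obtain the accuracy, recall, and F1 figures reported above.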

曹智勋、陈静

Scientific Communication and Knowledge Dissemination

Combating hallucinations; Large language models; Prompt engineering; Knowledge graphs; Traditional Chinese Medicine Q&A

曹智勋, 陈静. Combating Hallucinations: Application Strategies of Large Language Models in Vertical Domains - A Case Study in the Field of Traditional Chinese Medicine Knowledge Q&A [EB/OL]. (2024-11-19) [2024-11-21]. https://chinaxiv.org/abs/202411.00205.
