
LLM Inference Enhanced by External Knowledge: A Survey


Source: arXiv
Abstract

Recent advancements in large language models (LLMs) have enhanced natural-language reasoning. However, their limited parametric memory and susceptibility to hallucination present persistent challenges for tasks requiring accurate, context-based inference. To overcome these limitations, an increasing number of studies have proposed leveraging external knowledge to enhance LLMs. This study offers a systematic exploration of strategies for using external knowledge to enhance LLMs, beginning with a taxonomy that categorizes external knowledge into unstructured and structured data. We then focus on structured knowledge, presenting distinct taxonomies for tables and knowledge graphs (KGs), detailing their integration paradigms with LLMs, and reviewing representative methods. Our comparative analysis further highlights the trade-offs among interpretability, scalability, and performance, providing insights for developing trustworthy and generalizable knowledge-enhanced LLMs.
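The survey's core paradigm is retrieving relevant external facts (e.g., knowledge-graph triples or table rows), linearizing them into text, and supplying them as context for LLM inference. The sketch below illustrates that general pattern only; the toy triples, the naive lexical retriever, and the `call_llm` stub are illustrative assumptions rather than any specific method reviewed in the paper.

```python
# Minimal sketch of knowledge-augmented LLM inference: retrieve structured facts,
# linearize them into text, and prepend them to the prompt before generation.
# The toy KG, retriever, and `call_llm` stub are placeholders for illustration.

from typing import List, Tuple

Triple = Tuple[str, str, str]  # (subject, relation, object)

KG: List[Triple] = [
    ("Paris", "capital_of", "France"),
    ("Berlin", "capital_of", "Germany"),
]

def retrieve(question: str, kg: List[Triple], k: int = 2) -> List[Triple]:
    """Naive lexical retrieval: keep triples whose subject or object appears in the question."""
    hits = [t for t in kg if t[0].lower() in question.lower() or t[2].lower() in question.lower()]
    return hits[:k]

def build_prompt(question: str, facts: List[Triple]) -> str:
    """Linearize structured facts into sentences and prepend them as context."""
    context = "\n".join(f"{s} {r.replace('_', ' ')} {o}." for s, r, o in facts)
    return f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"

def call_llm(prompt: str) -> str:
    """Placeholder for any LLM inference call; swap in a real client here."""
    return f"[LLM output for a prompt of {len(prompt)} characters]"

if __name__ == "__main__":
    q = "What country is Paris the capital of?"
    print(call_llm(build_prompt(q, retrieve(q, KG))))
```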

Yu-Hsuan Lin, Qian-Hui Chen, Yi-Jie Cheng, Jia-Ren Zhang, Yi-Hung Liu, Liang-Yu Hsia, Yun-Nung Chen

Subject: computing technology, computer science

Yu-Hsuan Lin, Qian-Hui Chen, Yi-Jie Cheng, Jia-Ren Zhang, Yi-Hung Liu, Liang-Yu Hsia, Yun-Nung Chen. LLM Inference Enhanced by External Knowledge: A Survey [EB/OL]. (2025-05-30) [2025-07-02]. https://arxiv.org/abs/2505.24377
