Knowledge-Aware Self-Correction in Language Models via Structured Memory Graphs
Large Language Models (LLMs) are powerful yet prone to generating factual errors, commonly referred to as hallucinations. We present a lightweight, interpretable framework for knowledge-aware self-correction of LLM outputs using structured memory graphs based on RDF triples. Without retraining or fine-tuning, our method post-processes model outputs and corrects factual inconsistencies via external semantic memory. We demonstrate the approach using DistilGPT-2 and show promising results on simple factual prompts.
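The abstract describes post-processing model outputs against an external structured memory of RDF-style triples. A minimal sketch of that idea might look like the following; all names and data here are hypothetical illustrations, not the paper's actual implementation:

```python
# Hypothetical sketch: store facts as (subject, predicate) -> object triples
# and post-process a model's claim against them. Illustrative only.

MEMORY_GRAPH = {
    ("Paris", "capital_of"): "France",
    ("Mount Everest", "located_in"): "Nepal",
}

def correct_claim(subject: str, predicate: str, claimed_object: str) -> str:
    """Return the claimed object if it matches memory, else the stored fact."""
    stored = MEMORY_GRAPH.get((subject, predicate))
    if stored is not None and stored != claimed_object:
        # The model's claim contradicts semantic memory: substitute the fact.
        return stored
    return claimed_object

# A hallucinated object is replaced; a consistent one passes through unchanged.
print(correct_claim("Paris", "capital_of", "Italy"))
print(correct_claim("Mount Everest", "located_in", "Nepal"))
```

In the paper's setting, the triples would come from an external knowledge source and the (subject, predicate, object) structure would be extracted from the LLM's generated text before this lookup-and-replace step.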
Swayamjit Saha
Computing technology; computer technology
Swayamjit Saha. Knowledge-Aware Self-Correction in Language Models via Structured Memory Graphs [EB/OL]. (2025-07-07) [2025-07-23]. https://arxiv.org/abs/2507.04625.