How do language models learn facts? Dynamics, curricula and hallucinations
Large language models accumulate vast knowledge during pre-training, yet the dynamics governing this acquisition remain poorly understood. This work investigates the learning dynamics of language models on a synthetic factual recall task, uncovering three key findings: First, language models learn in three phases, exhibiting a performance plateau before acquiring precise factual knowledge. Mechanistically, this plateau coincides with the formation of attention-based circuits that support recall. Second, the training data distribution significantly impacts learning dynamics, as imbalanced distributions lead to shorter plateaus. Finally, hallucinations emerge simultaneously with knowledge, and integrating new knowledge into the model through fine-tuning is challenging, as doing so quickly corrupts the model's existing parametric memories. Our results emphasize the importance of data distribution in knowledge acquisition and suggest novel data scheduling strategies to accelerate neural network training.
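The abstract refers to a synthetic factual recall task but does not describe its construction. A minimal sketch of such a task might look like the following; the attribute types, name vocabulary, and prompt templates are illustrative assumptions, not the authors' actual setup.

```python
import random

# Illustrative vocabularies for a synthetic factual recall task.
# These specific attributes and templates are assumptions for illustration only.
FIRST_NAMES = ["Alia", "Boris", "Chen", "Dara", "Emil", "Farah"]
LAST_NAMES = ["Okafor", "Petrov", "Silva", "Tanaka", "Umar", "Varga"]
CITIES = ["Lagos", "Oslo", "Quito", "Riga", "Seoul", "Tunis"]
JOBS = ["architect", "botanist", "chemist", "dentist", "engineer"]


def make_individuals(n: int, seed: int = 0) -> list[dict]:
    """Sample n fictional individuals, each paired with random attributes."""
    rng = random.Random(seed)
    return [
        {
            "name": f"{rng.choice(FIRST_NAMES)} {rng.choice(LAST_NAMES)} #{i}",
            "city": rng.choice(CITIES),
            "job": rng.choice(JOBS),
            "year": rng.randint(1950, 2000),
        }
        for i in range(n)
    ]


def to_training_text(person: dict) -> str:
    """Render one individual's facts as a short training document."""
    return (
        f"{person['name']} was born in {person['year']} in {person['city']} "
        f"and works as a {person['job']}."
    )


def to_recall_prompt(person: dict, attribute: str) -> tuple[str, str]:
    """Build a (prompt, target) pair that probes recall of one attribute."""
    prompts = {
        "city": f"{person['name']} was born in the city of",
        "job": f"{person['name']} works as a",
        "year": f"{person['name']} was born in the year",
    }
    return prompts[attribute], str(person[attribute])


if __name__ == "__main__":
    for person in make_individuals(3):
        print(to_training_text(person))
        print(to_recall_prompt(person, "city"))
```

To probe the effect of imbalanced data distributions mentioned in the abstract, individuals would presumably be sampled with non-uniform (for example, Zipfian) frequencies rather than the uniform sampling sketched here.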
Nicolas Zucchet, Jörg Bornschein, Stephanie Chan, Andrew Lampinen, Razvan Pascanu, Soham De
Computing technology, computer science
Nicolas Zucchet, Jörg Bornschein, Stephanie Chan, Andrew Lampinen, Razvan Pascanu, Soham De. How do language models learn facts? Dynamics, curricula and hallucinations [EB/OL]. (2025-07-24) [2025-08-16]. https://arxiv.org/abs/2503.21676