
Mitigating Spurious Correlations in LLMs via Causality-Aware Post-Training

Source: arXiv
English Abstract

While large language models (LLMs) have demonstrated remarkable capabilities in language modeling, recent studies reveal that they often fail on out-of-distribution (OOD) samples due to spurious correlations acquired during pre-training. Here, we aim to mitigate such spurious correlations through causality-aware post-training (CAPT). By decomposing a biased prediction into two unbiased steps, known as event estimation and event intervention, we reduce LLMs' pre-training biases without incurring additional fine-tuning biases, thus enhancing the model's generalization ability. Experiments on the formal causal inference benchmark CLadder and the logical reasoning dataset PrOntoQA show that 3B-scale language models fine-tuned with CAPT can outperform both traditional SFT and larger LLMs on in-distribution (ID) and OOD tasks using only 100 ID fine-tuning samples, demonstrating the effectiveness and sample efficiency of CAPT.
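The abstract describes CAPT only at this high level, so the toy sketch below is purely illustrative: it is not the paper's implementation, and every event name, context name, and probability is an invented assumption. It shows, in plain Python, what a two-step decomposition in the spirit of "event estimation then event intervention" could look like: first estimate which latent event an input expresses, then predict the label while averaging over a do-style uniform context instead of the spurious context distribution seen in pre-training.

# Toy numeric illustration (assumed, not from the paper) of splitting a biased
# prediction p(y | x) into two steps:
#   1. event estimation:   p(event | x)
#   2. event intervention: predict y from the event while marginalizing over an
#      interventional (uniform) context rather than the biased training context.
# All distributions below are made-up numbers for demonstration only.

# Step 1: event estimation for a single input x.
p_event_given_x = {"A": 0.9, "B": 0.1}

# Spurious context distribution from pre-training (biased) ...
p_context_biased = {"formal": 0.95, "casual": 0.05}
# ... versus the interventional context used in step 2.
p_context_do = {"formal": 0.5, "casual": 0.5}

# Conditional label probabilities p(y = 1 | event, context).
p_y_given_event_context = {
    ("A", "formal"): 0.8, ("A", "casual"): 0.7,
    ("B", "formal"): 0.4, ("B", "casual"): 0.3,
}

def predict(p_context):
    """Marginalize over events and contexts to obtain p(y = 1 | x)."""
    return sum(
        p_e * p_c * p_y_given_event_context[(e, c)]
        for e, p_e in p_event_given_x.items()
        for c, p_c in p_context.items()
    )

print(f"biased prediction:     {predict(p_context_biased):.3f}")
print(f"intervened prediction: {predict(p_context_do):.3f}")

Running the sketch shows the two predictions diverging (0.755 vs. 0.710 with these made-up numbers), which is the kind of gap a spurious context can introduce; the actual CAPT procedure should be taken from the paper itself.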

Shurui Gui, Shuiwang Ji

Computing Technology, Computer Technology

Shurui Gui, Shuiwang Ji. Mitigating Spurious Correlations in LLMs via Causality-Aware Post-Training [EB/OL]. (2025-06-11) [2025-07-16]. https://arxiv.org/abs/2506.09433.
