Influence Functions for Preference Dataset Pruning
Language models are commonly fine-tuned via reinforcement learning to alter their behavior or elicit new capabilities. The datasets used for these purposes, and particularly human preference datasets, are often noisy. The relatively small size of post-training datasets, combined with parameter-efficient fine-tuning methods, enables the use of influence-function approximations to detect and prune training examples that are harmful to performance on a validation set. In this work, we adapt the TL;DR dataset for reward model training to demonstrate how conjugate-gradient-approximated influence functions can be used to filter datasets. In our experiments, influence-function filtering yields a small retraining accuracy uplift of 1.5% after removing 10% of training examples. We also show that gradient similarity outperforms influence functions for detecting helpful training examples. This suggests that local curvature is important for detecting harmful training examples, but less so for identifying helpful ones.
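As a sketch of the technique the abstract describes (not code from the paper): the influence of a training example z on validation loss is I(z) = -∇L_val(θ)ᵀ H⁻¹ ∇L(z, θ), where H is the training-loss Hessian. Since inverting H is infeasible at scale, the inverse-Hessian-vector product can be approximated with conjugate gradient, which needs only Hessian-vector products. The toy example below illustrates this on a small logistic-regression model; all names and hyperparameters are illustrative, and a positive score flags an example whose removal is expected to lower validation loss.

```python
import numpy as np

def conjugate_gradient(hvp, b, iters=50, tol=1e-10):
    """Solve H x = b using only Hessian-vector products (hvp)."""
    x = np.zeros_like(b)
    r = b - hvp(x)
    p = r.copy()
    rs = r @ r
    for _ in range(iters):
        Hp = hvp(p)
        alpha = rs / (p @ Hp)
        x += alpha * p
        r -= alpha * Hp
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad(w, X, y, lam=1e-2):
    """Mean gradient of L2-regularized logistic loss."""
    return X.T @ (sigmoid(X @ w) - y) / len(y) + lam * w

def hvp(w, X, v, lam=1e-2):
    """Hessian-vector product of the training loss, without forming H."""
    s = sigmoid(X @ w)
    return X.T @ (s * (1 - s) * (X @ v)) / len(X) + lam * v

# Synthetic training data (stand-in for a preference dataset).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X @ rng.normal(size=5) > 0).astype(float)

# Crude gradient-descent fit.
w = np.zeros(5)
for _ in range(500):
    w -= 0.5 * grad(w, X, y)

# Validation gradient, then s = H^{-1} g_val via conjugate gradient.
X_val, y_val = X[:20], y[:20]          # illustrative validation split
g_val = grad(w, X_val, y_val, lam=0.0)
s = conjugate_gradient(lambda v: hvp(w, X, v), g_val)

def per_example_grad(w, x, yi, lam=1e-2):
    return x * (sigmoid(x @ w) - yi) + lam * w

# I(z) = -grad(z)·s; large positive scores mark harmful candidates.
influences = np.array([-per_example_grad(w, X[i], y[i]) @ s
                       for i in range(len(X))])
harmful = np.argsort(influences)[-20:]  # prune top 10% by influence
```

The key design point is that conjugate gradient touches the Hessian only through `hvp`, which is why the approach scales to fine-tuned models where H is far too large to materialize.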
Daniel Fein, Gabriela Aranguiz-Dias
Subjects: Computing Technology, Computer Science
Daniel Fein, Gabriela Aranguiz-Dias. Influence Functions for Preference Dataset Pruning [EB/OL]. (2025-07-18) [2025-08-10]. https://arxiv.org/abs/2507.14344.