Daunce: Data Attribution through Uncertainty Estimation
Training data attribution (TDA) methods aim to identify which training examples most influence a model's predictions on specific test data. By quantifying these influences, TDA supports critical applications such as data debugging, curation, and valuation. Gradient-based TDA methods rely on per-example gradients and second-order information, which limits their applicability at scale. While recent random-projection-based methods improve scalability, they often suffer from degraded attribution accuracy. Motivated by connections between uncertainty and influence functions, we introduce Daunce, a simple yet effective data attribution approach based on uncertainty estimation. Our method fine-tunes a collection of perturbed models and uses the covariance of per-example losses across these models as the attribution score. Daunce scales to large language models (LLMs) and achieves more accurate attribution than existing TDA methods. We validate Daunce on tasks ranging from vision tasks to LLM fine-tuning, and further demonstrate its compatibility with black-box model access. Applied to OpenAI's GPT models, our method achieves, to our knowledge, the first instance of data attribution on proprietary LLMs.
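The abstract describes the attribution score as the covariance of per-example losses across a collection of perturbed, fine-tuned models. The following is a minimal sketch of that scoring step only (not the authors' reference implementation); it assumes the perturbed models have already been fine-tuned and their per-example losses recorded, and all array names are illustrative.

```python
import numpy as np

def daunce_scores(train_losses: np.ndarray, test_losses: np.ndarray) -> np.ndarray:
    """Covariance-based attribution sketch.

    train_losses: shape (K, N), loss of each of K perturbed models on N training examples.
    test_losses:  shape (K,),   loss of each perturbed model on a single test example.
    Returns one score per training example: the empirical covariance, across the K
    perturbed models, between that example's loss and the test example's loss.
    """
    train_centered = train_losses - train_losses.mean(axis=0, keepdims=True)
    test_centered = test_losses - test_losses.mean()
    return train_centered.T @ test_centered / (train_losses.shape[0] - 1)

# Toy usage with random stand-in losses: K=8 perturbed models, N=5 training examples.
rng = np.random.default_rng(0)
scores = daunce_scores(rng.normal(size=(8, 5)), rng.normal(size=8))
print(scores)  # higher covariance -> stronger attributed influence on the test example
```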
Xingyuan Pan, Chenlu Ye, Joseph Melkonian, Jiaqi W. Ma, Tong Zhang
Computing Technology; Computer Technology
Xingyuan Pan, Chenlu Ye, Joseph Melkonian, Jiaqi W. Ma, Tong Zhang. Daunce: Data Attribution through Uncertainty Estimation [EB/OL]. (2025-05-29) [2025-06-14]. https://arxiv.org/abs/2505.23223.