DATE-LM: Benchmarking Data Attribution Evaluation for Large Language Models

Source: arXiv

Abstract

Data attribution methods quantify the influence of training data on model outputs and are becoming increasingly relevant for a wide range of LLM research and applications, including dataset curation, model interpretability, and data valuation. However, there remain critical gaps in systematic LLM-centric evaluation of data attribution methods. To this end, we introduce DATE-LM (Data Attribution Evaluation in Language Models), a unified benchmark for evaluating data attribution methods through real-world LLM applications. DATE-LM measures attribution quality through three key tasks -- training data selection, toxicity/bias filtering, and factual attribution. Our benchmark is designed for ease of use, enabling researchers to configure and run large-scale evaluations across diverse tasks and LLM architectures. Furthermore, we use DATE-LM to conduct a large-scale evaluation of existing data attribution methods. Our findings show that no single method dominates across all tasks, data attribution methods have trade-offs with simpler baselines, and method performance is sensitive to task-specific evaluation design. Finally, we release a public leaderboard for quick comparison of methods and to facilitate community engagement. We hope DATE-LM serves as a foundation for future data attribution research in LLMs.
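To make the notion of "influence of training data on model outputs" concrete, the sketch below computes a simple gradient dot-product influence score (in the spirit of TracIn-style attribution) between one training example and one test example for a HuggingFace-style causal LM. This is a generic, minimal illustration, not DATE-LM's own implementation; the function names and the assumption that the model returns a `.loss` field are hypothetical conveniences.

```python
import torch

def per_example_grad(model, input_ids, labels):
    """Return the flattened loss gradient for a single example.

    Assumes a HuggingFace-style causal LM whose forward pass
    returns an output object with a `.loss` attribute.
    """
    model.zero_grad()
    loss = model(input_ids=input_ids, labels=labels).loss
    loss.backward()
    return torch.cat(
        [p.grad.flatten() for p in model.parameters() if p.grad is not None]
    )

def influence_score(model, train_example, test_example):
    """Gradient dot-product influence of a training example on a test example.

    A large positive score suggests the training example pushes the model
    toward lower loss on the test example; attribution benchmarks like
    DATE-LM evaluate how well such scores rank training data in practice.
    """
    g_train = per_example_grad(model, *train_example)
    g_test = per_example_grad(model, *test_example)
    return torch.dot(g_train, g_test).item()
```

In practice, methods evaluated on benchmarks of this kind replace the raw dot product with scalable approximations (e.g., low-rank projections or influence-function estimates), since storing full per-example gradients is infeasible at LLM scale.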

Cathy Jiao, Yijun Pan, Emily Xiao, Daisy Sheng, Niket Jain, Hanzhang Zhao, Ishita Dasgupta, Jiaqi W. Ma, Chenyan Xiong

Subject: Computing Technology; Computer Technology

Cathy Jiao, Yijun Pan, Emily Xiao, Daisy Sheng, Niket Jain, Hanzhang Zhao, Ishita Dasgupta, Jiaqi W. Ma, Chenyan Xiong. DATE-LM: Benchmarking Data Attribution Evaluation for Large Language Models [EB/OL]. (2025-07-12) [2025-07-22]. https://arxiv.org/abs/2507.09424.
