
Data Leakage and Redundancy in the LIT-PCBA Benchmark

Source: arXiv
Abstract

LIT-PCBA is widely used to benchmark virtual screening models, but our audit reveals that it is fundamentally compromised. We find extensive data leakage and molecular redundancy across its splits, including 2D-identical ligands within and across partitions, pervasive analog overlap, and low-diversity query sets. In ALDH1 alone, for instance, 323 active training -- validation analog pairs occur at ECFP4 Tanimoto similarity $\geq 0.6$; across all targets, 2,491 2D-identical inactives appear in both training and validation, with very few corresponding actives. These overlaps allow models to succeed through scaffold memorization rather than generalization, inflating enrichment factors and AUROC scores. These flaws are not incidental -- they are so severe that a trivial memorization-based baseline with no learnable parameters can exploit them to match or exceed the reported performance of state-of-the-art deep learning and 3D-similarity models. As a result, nearly all published results on LIT-PCBA are undermined. Even models evaluated in "zero-shot" mode are affected by analog leakage into the query set, weakening claims of generalization. In its current form, the benchmark does not measure a model's ability to recover novel chemotypes and should not be taken as evidence of methodological progress. All code, data, and baseline implementations are available at: https://github.com/sievestack/LIT-PCBA-audit
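To make the two key notions in the abstract concrete, here is a minimal sketch (not the authors' audit code, which is linked in the repository above): an "analog pair" is a train/validation ligand pair whose ECFP4 fingerprints have Tanimoto similarity ≥ 0.6, and the memorization baseline scores each query by its maximum similarity to any training active. In practice ECFP4 fingerprints would be computed with a cheminformatics toolkit such as RDKit; here, for illustration only, fingerprints are represented as plain Python sets of bit indices.

```python
def tanimoto(fp_a: set, fp_b: set) -> float:
    """Tanimoto (Jaccard) similarity between two fingerprint bit-index sets."""
    if not fp_a and not fp_b:
        return 0.0
    inter = len(fp_a & fp_b)
    return inter / (len(fp_a) + len(fp_b) - inter)

def analog_pairs(train_fps, valid_fps, threshold=0.6):
    """Count train/validation pairs at or above the similarity threshold,
    mirroring the >= 0.6 ECFP4 Tanimoto criterion used in the audit."""
    return sum(
        1
        for fp_t in train_fps
        for fp_v in valid_fps
        if tanimoto(fp_t, fp_v) >= threshold
    )

def memorization_score(query_fp, train_active_fps):
    """Trivial baseline with no learnable parameters: score a query by its
    maximum Tanimoto similarity to any training-set active."""
    return max((tanimoto(query_fp, fp) for fp in train_active_fps), default=0.0)

# Toy fingerprints: the first train/valid pair overlaps heavily (4/6 ≈ 0.667),
# so it is flagged as an analog pair; the second train fingerprint is disjoint.
train = [{1, 2, 3, 4, 5}, {10, 11, 12}]
valid = [{1, 2, 3, 4, 6}]
print(analog_pairs(train, valid))          # → 1
print(memorization_score(valid[0], train)) # high score via the leaked analog
```

If analogs of validation actives leak into training, this parameter-free nearest-neighbor lookup alone yields high enrichment, which is why the abstract argues that memorization, not generalization, can explain reported benchmark performance.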

Amber Huang, Ian Scott Knight, Slava Naprienko

Subject areas: biological science research methods; biological science research techniques

Amber Huang, Ian Scott Knight, Slava Naprienko. Data Leakage and Redundancy in the LIT-PCBA Benchmark [EB/OL]. (2025-08-07) [2025-08-11]. https://arxiv.org/abs/2507.21404.
