Data-driven Error Estimation: Upper Bounding Multiple Errors without Class Complexity as Input
Constructing confidence intervals that are simultaneously valid across a class of estimates is central for tasks such as multiple mean estimation, bounding generalization error in machine learning, and adaptive experimental design. We frame this as an "error estimation problem," where the goal is to determine a high-probability upper bound on the maximum error for a class of estimates. We propose an entirely data-driven approach that derives such bounds for both finite and infinite class settings, naturally adapting to a potentially unknown correlation structure of random errors. Notably, our method does not require class complexity as an input, overcoming a major limitation of existing approaches such as union bounding and bounds based on Talagrand's inequality. In this paper, we present our simple yet general solution and demonstrate its flexibility through applications ranging from constructing multiple simultaneously valid confidence intervals to optimizing exploration in contextual bandit algorithms.
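The abstract cites union bounding as a standard baseline for simultaneously valid confidence intervals. As a point of comparison (not the paper's method), the union-bound approach can be sketched as follows: split the failure probability across the K estimates so all intervals hold jointly. This is a minimal sketch assuming [0, 1]-bounded data and Hoeffding's inequality; the function name and interface are illustrative, not from the paper.

```python
import numpy as np

def union_bound_cis(samples, delta=0.05):
    """Simultaneous confidence intervals for K means of [0, 1]-valued data.

    Baseline union-bound (Bonferroni) construction: each Hoeffding interval
    is built at per-estimate level delta / K, so all K intervals hold
    jointly with probability at least 1 - delta.
    """
    K = len(samples)
    cis = []
    for x in samples:
        x = np.asarray(x, dtype=float)
        n = len(x)
        # Two-sided Hoeffding half-width at level delta / K:
        # sqrt(log(2K / delta) / (2n))
        half = np.sqrt(np.log(2 * K / delta) / (2 * n))
        m = x.mean()
        cis.append((m - half, m + half))
    return cis
```

Note the drawback the paper targets: the half-width grows with log K and ignores any correlation among the errors, whereas the proposed data-driven bound adapts to the (possibly unknown) correlation structure without taking class complexity as input.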
Sanath Kumar Krishnamurthy, Susan Athey, Anna Lyubarskaja, Emma Brunskill
Subject: Computing Technology, Computer Technology
Sanath Kumar Krishnamurthy, Susan Athey, Anna Lyubarskaja, Emma Brunskill. Data-driven Error Estimation: Upper Bounding Multiple Errors without Class Complexity as Input [EB/OL]. (2024-05-07) [2025-06-12]. https://arxiv.org/abs/2405.04636.