Theoretical guarantees for neural estimators in parametric statistics
Neural estimators are simulation-based estimators for the parameters of a family of statistical models that build a direct mapping from the sample to the parameter vector. They benefit from the versatility of available network architectures and the efficient training methods developed in the field of deep learning. Neural estimators are amortized in the sense that, once trained, they can be applied to any new data set at almost no computational cost. While many papers have demonstrated very good performance of these methods in simulation studies and real-world applications, so far no statistical guarantees have been available to support these observations theoretically. In this work, we study the risk of neural estimators by decomposing it into several terms that can be analyzed separately. We formulate easy-to-check assumptions ensuring that each term converges to zero, and we verify them for popular applications of neural estimators. Our results provide a general recipe for deriving theoretical guarantees for broader classes of architectures and estimation problems as well.
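To make the construction concrete, the following is a minimal sketch of a neural estimator for a toy problem: estimating the mean and log-scale of a Gaussian from n i.i.d. observations. The DeepSets-style permutation-invariant architecture, the uniform prior over parameters, and the squared-error training loss are illustrative assumptions, not the specific construction analyzed in the paper.

```python
# Minimal sketch of a neural (amortized) estimator for a toy Gaussian model.
# Assumptions: DeepSets-style network, uniform prior, squared-error loss;
# these are illustrative choices, not the authors' construction.
import torch
import torch.nn as nn

n, batch_size, steps = 50, 256, 2000

# Permutation-invariant map from a sample of size n to a parameter estimate:
# per-observation embedding phi, mean pooling, then a readout network rho.
phi = nn.Sequential(nn.Linear(1, 64), nn.ReLU(), nn.Linear(64, 64))
rho = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 2))

def estimator(x):                         # x: (batch, n)
    h = phi(x.unsqueeze(-1)).mean(dim=1)  # pool over the n observations
    return rho(h)                         # (batch, 2): estimates of (mu, log sigma)

opt = torch.optim.Adam(list(phi.parameters()) + list(rho.parameters()), lr=1e-3)

for step in range(steps):
    # Simulation-based training: draw parameters from a (hypothetical) prior,
    # then simulate data from the model under those parameters.
    mu = torch.empty(batch_size, 1).uniform_(-3.0, 3.0)
    log_sigma = torch.empty(batch_size, 1).uniform_(-1.0, 1.0)
    x = mu + log_sigma.exp() * torch.randn(batch_size, n)
    # Empirical risk: squared error between network output and true parameters.
    loss = ((estimator(x) - torch.cat([mu, log_sigma], dim=1)) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# Amortization: once trained, estimating parameters for a new data set
# is a single forward pass with essentially no computational cost.
x_new = 1.5 + 0.8 * torch.randn(1, n)
print(estimator(x_new))  # approximate (mu, log sigma) for x_new
```

The mean pooling makes the mapping invariant to the ordering of the i.i.d. observations, and the final forward pass illustrates the amortization property described in the abstract.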
Almut Rödder, Manuel Hentschel, Sebastian Engelke
Computing Technology; Computer Technology
Almut Rödder, Manuel Hentschel, Sebastian Engelke. Theoretical guarantees for neural estimators in parametric statistics [EB/OL]. (2025-06-23) [2025-07-20]. https://arxiv.org/abs/2506.18508.