
Interpolation-Based Gradient-Error Bounds for Use in Derivative-Free Optimization of Noisy Functions

Source: arXiv
Abstract

In this paper, we analyze the accuracy of gradient estimates obtained by linear interpolation when the underlying function is subject to bounded measurement noise. The total gradient error is decomposed into a deterministic component arising from the interpolation (finite-difference) approximation, and a stochastic component due to noise. Various upper bounds for both error components are derived and compared through several illustrative examples. Our comparative study reveals that strict deterministic bounds, including those commonly used in derivative-free optimization (DFO), tend to be overly conservative. To address this, we propose approximate gradient error bounds that aim to upper bound the gradient error norm more realistically, without the excessive conservatism of classical bounds. Finally, drawing inspiration from dual real-time optimization strategies, we present a DFO scheme based on sequential programming, where the approximate gradient error bounds are enforced as constraints within the optimization problem.
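The decomposition described above can be illustrated with a minimal numerical sketch (not the paper's method; all function names and constants below are assumptions for illustration). A forward-difference (linear-interpolation) gradient estimate of a noisy quadratic is compared against the classical conservative bound used in DFO, in which the total error splits into a deterministic term proportional to the step size h and a stochastic term proportional to eps/h, where eps bounds the measurement noise:

```python
import numpy as np

# Sketch: forward-difference gradient of a noisy quadratic, compared with the
# classical deterministic-plus-stochastic bound
#   ||g - grad f|| <= (L * sqrt(n) / 2) * h + 2 * sqrt(n) * eps / h,
# where L is the Lipschitz constant of the gradient and |noise| <= eps.

rng = np.random.default_rng(0)
n = 3
A = np.diag([1.0, 2.0, 3.0])   # f(x) = x^T A x, so grad f = 2Ax, Hessian = 2A
eps = 1e-4                     # bound on the (uniform) measurement noise

def f_noisy(x):
    return x @ A @ x + rng.uniform(-eps, eps)

def fd_gradient(x, h):
    """Forward-difference (linear-interpolation) gradient estimate."""
    g = np.empty(n)
    f0 = f_noisy(x)
    for i in range(n):
        e = np.zeros(n)
        e[i] = h
        g[i] = (f_noisy(x + e) - f0) / h
    return g

x = np.array([1.0, -1.0, 0.5])
h = np.sqrt(eps)                       # balances the two error terms
g_est = fd_gradient(x, h)
g_true = 2.0 * A @ x

L = 2.0 * np.max(np.diag(A))           # gradient Lipschitz constant here
bound = 0.5 * L * np.sqrt(n) * h + 2.0 * np.sqrt(n) * eps / h
err = np.linalg.norm(g_est - g_true)
print(f"actual error = {err:.4f}, classical bound = {bound:.4f}")
```

Running the sketch shows the classical bound holding but exceeding the realized error by a noticeable margin, which is the conservatism the approximate bounds proposed in the paper are meant to reduce.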

Alejandro G. Marchetti, Dominique Bonvin

Subject: Computing and Computer Technology

Alejandro G. Marchetti, Dominique Bonvin. Interpolation-Based Gradient-Error Bounds for Use in Derivative-Free Optimization of Noisy Functions [EB/OL]. (2025-07-25) [2025-08-10]. https://arxiv.org/abs/2507.19661.
