Gradient-based Sample Selection for Faster Bayesian Optimization
Bayesian optimization (BO) is an effective technique for black-box optimization. However, its applicability is typically limited to moderate-budget problems due to the cubic complexity in computing the Gaussian process (GP) surrogate model. In large-budget scenarios, directly employing the standard GP model faces significant challenges in computational time and resource requirements. In this paper, we propose a novel approach, gradient-based sample selection Bayesian Optimization (GSSBO), to enhance the computational efficiency of BO. The GP model is constructed on a selected set of samples instead of the whole dataset. These samples are selected by leveraging gradient information to maintain diversity and representation. We provide a theoretical analysis of the gradient-based sample selection strategy and obtain explicit sublinear regret bounds for our proposed framework. Extensive experiments on synthetic and real-world tasks demonstrate that our approach significantly reduces the computational cost of GP fitting in BO while maintaining optimization performance comparable to baseline methods.
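The core idea — fitting the GP on a gradient-selected subset rather than the full history to avoid the cubic cost — can be illustrated with a minimal sketch. Note this is not the paper's actual GSSBO algorithm (which is not detailed on this page); the nearest-neighbour finite-difference score used below is a hypothetical stand-in for the gradient-based selection criterion.

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf_kernel(A, B, lengthscale=0.5):
    # squared-exponential kernel
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale ** 2)

def gp_predict(X_train, y_train, X_test, noise=1e-3):
    # standard GP posterior mean; the O(n^3) solve here is the cost
    # that shrinks when n is the subset size instead of the full budget
    K = rbf_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    return rbf_kernel(X_test, X_train) @ np.linalg.solve(K, y_train)

def gradient_scores(X, y):
    # crude finite-difference gradient-magnitude proxy (hypothetical
    # stand-in for the paper's gradient-based selection criterion)
    scores = np.empty(len(X))
    for i in range(len(X)):
        d = np.linalg.norm(X - X[i], axis=1)
        d[i] = np.inf
        j = d.argmin()  # nearest neighbour
        scores[i] = abs(y[i] - y[j]) / d[j]
    return scores

def select_samples(X, y, k):
    # keep the k points with the largest local-gradient proxy
    return np.argsort(gradient_scores(X, y))[-k:]

# toy 1-D objective with 200 observed samples
X = rng.uniform(-2, 2, size=(200, 1))
y = np.sin(3 * X[:, 0]) + 0.01 * rng.standard_normal(200)

# fit the surrogate on 40 selected samples instead of all 200
idx = select_samples(X, y, k=40)
X_test = np.linspace(-2, 2, 50)[:, None]
mu = gp_predict(X[idx], y[idx], X_test)
```

With the subset the linear solve involves a 40x40 system rather than 200x200, a 125-fold reduction in the cubic solve cost, while high-gradient regions of the objective remain represented in the surrogate.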
Qiyu Wei, Haowei Wang, Zirui Cao, Songhao Wang, Richard Allmendinger, Mauricio A. Álvarez
Computing Technology, Computer Technology
Qiyu Wei, Haowei Wang, Zirui Cao, Songhao Wang, Richard Allmendinger, Mauricio A. Álvarez. Gradient-based Sample Selection for Faster Bayesian Optimization [EB/OL]. (2025-04-10) [2025-04-26]. https://arxiv.org/abs/2504.07742.