Optimization via Strategic Law of Large Numbers
This paper proposes a unified framework for the global optimization of a continuous function over a bounded rectangular domain. Specifically, we show that: (1) under the optimal strategy for a two-armed decision model, the sample mean converges to a global optimizer under the Strategic Law of Large Numbers, and (2) a sign-based strategy built upon the solution of a parabolic PDE is asymptotically optimal. Motivated by this result, we propose a class of Strategic Monte Carlo Optimization (SMCO) algorithms, which use a simple strategy that makes coordinate-wise two-armed decisions based on the signs of the partial gradients of the function being optimized (without needing to solve PDEs). While this simple strategy is not generally optimal, we show that it is sufficient for our SMCO algorithm to converge to local optimizer(s) from a single starting point, and to global optimizers under a growing set of starting points. Numerical studies demonstrate the suitability of our SMCO algorithms for global optimization and illustrate the promise of our theoretical framework and practical approach. For a wide range of test functions with challenging optimization landscapes (including ReLU neural networks with square and hinge loss), our SMCO algorithms converge to the global maximum accurately and robustly, using only a small set of starting points (at most 100 for dimensions up to 1000) and a small maximum number of iterations (200). In fact, our algorithms outperform many state-of-the-art global optimizers, as well as local algorithms augmented with the same set of starting points as ours.
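The abstract describes the core SMCO iteration only at a high level: per coordinate, a two-armed decision selects between the two halves of the domain according to the sign of the partial gradient at the running sample mean, and the sample mean itself converges to the optimizer. The following is a hypothetical one-dimensional sketch of that mechanism, reconstructed from the abstract alone; the function name `smco_sketch` and all implementation details (uniform draws from the selected half, equal-weight running mean) are illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np

def smco_sketch(f_grad, lb, ub, n_iter=200, seed=0):
    """Illustrative sign-based SMCO-style iteration (assumed reconstruction
    from the abstract, not the paper's algorithm).

    At each step, a coordinate-wise two-armed decision draws a point from
    the upper or lower half of the domain depending on the sign of the
    partial gradient at the running sample mean; the mean is then updated.
    """
    rng = np.random.default_rng(seed)
    mean = (lb + ub) / 2.0                 # running sample mean, start at center
    mid = (lb + ub) / 2.0
    for n in range(1, n_iter + 1):
        g = f_grad(mean)                   # partial gradients at current mean
        # two-armed decision per coordinate: upper half if the partial
        # derivative is positive, otherwise lower half
        lo = np.where(g > 0, mid, lb)
        hi = np.where(g > 0, ub, mid)
        x = rng.uniform(lo, hi)
        mean += (x - mean) / (n + 1)       # update running sample mean
    return mean
```

On a smooth one-dimensional example such as maximizing f(x) = -(x - 0.7)^2 on [0, 1], the running mean hovers near the maximizer 0.7: whenever the mean falls below it, draws come from the upper half and pull the mean up, and vice versa.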
Xiaohong Chen, Zengjing Chen, Wayne Yuan Gao, Xiaodong Yan, Guodong Zhang
Computing technology; computer technology
Xiaohong Chen, Zengjing Chen, Wayne Yuan Gao, Xiaodong Yan, Guodong Zhang. Optimization via Strategic Law of Large Numbers [EB/OL]. (2025-07-18) [2025-08-23]. https://arxiv.org/abs/2412.05604.