This paper proposes a unified framework for the global optimization of a continuous function over a bounded rectangular domain. Specifically, we show that: (1) under the optimal strategy for a two-armed decision model, the sample mean converges to a global optimizer by the Strategic Law of Large Numbers, and (2) a sign-based strategy built upon the solution of a parabolic PDE is asymptotically optimal. Motivated by these results, we propose a class of {\bf S}trategic {\bf M}onte {\bf C}arlo {\bf O}ptimization (SMCO) algorithms, which use a simple strategy that makes coordinate-wise two-armed decisions based on the signs of the partial derivatives of the function being optimized (without the need to solve PDEs). While this simple strategy is not generally optimal, we show that it suffices for our SMCO algorithm to converge to local optimizer(s) from a single starting point, and to global optimizers with a growing set of starting points. Numerical studies demonstrate the suitability of our SMCO algorithms for global optimization and illustrate the promise of our theoretical framework and practical approach. For a wide range of test functions with challenging optimization landscapes (including ReLU neural networks with square and hinge losses), our SMCO algorithms converge to the global maximum accurately and robustly, using only a small set of starting points (at most 100 for dimensions up to 1000) and a small maximum number of iterations (200). In fact, our algorithms outperform many state-of-the-art global optimizers, as well as local algorithms augmented with the same set of starting points as ours.
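To make the coordinate-wise, sign-based decision rule concrete, the following is a minimal Python sketch. The finite-difference gradient, the $1/(t+1)$ step toward the chosen arm, and the function name \texttt{smco\_sketch} are illustrative assumptions; this is not the paper's actual SMCO algorithm, nor a strategy covered by its theoretical guarantees.
\begin{verbatim}
import numpy as np

def smco_sketch(f, lo, hi, n_starts=100, n_iters=200, h=1e-6, seed=0):
    """Illustrative coordinate-wise, sign-based maximization of f over the
    box [lo, hi]; a sketch of the two-armed decision idea, not the paper's
    exact SMCO algorithm."""
    rng = np.random.default_rng(seed)
    d = lo.shape[0]
    best_x, best_val = None, -np.inf
    for _ in range(n_starts):
        x = rng.uniform(lo, hi)              # random starting point in the box
        for t in range(1, n_iters + 1):
            for j in range(d):
                e = np.zeros(d)
                e[j] = h
                grad_j = (f(x + e) - f(x - e)) / (2.0 * h)  # sign of partial derivative
                arm = hi[j] if grad_j >= 0 else lo[j]       # two-armed decision per coordinate
                x[j] += (arm - x[j]) / (t + 1)              # shrinking move toward the chosen arm
            x = np.clip(x, lo, hi)
        val = f(x)
        if val > best_val:
            best_x, best_val = x.copy(), val
    return best_x, best_val

# Example: maximize a multimodal function on [-2, 2]^2
f = lambda x: np.cos(3 * np.pi * x).sum() - np.sum(x ** 2)
x_star, val = smco_sketch(f, lo=np.array([-2.0, -2.0]), hi=np.array([2.0, 2.0]))
\end{verbatim}
Each coordinate is pulled toward the upper or lower end of the search box, whichever arm the sign of the corresponding partial derivative favors, and the best value found across starting points is returned.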