Global optimization of expensive, derivative-free black-box functions demands extreme sample efficiency. Classical methods such as Bayesian Optimization (BO) can be effective, but they often require careful parameter tuning for each application domain. At the same time, Large Language Models (LLMs) have shown broad capabilities, yet state-of-the-art models remain limited in solving continuous black-box optimization tasks. We introduce GPTOpt, an LLM-based optimization method that equips LLMs with continuous black-box optimization capabilities. By fine-tuning large language models on extensive synthetic datasets derived from diverse BO parameterizations, GPTOpt leverages LLM pre-training to generalize across optimization tasks. On a variety of black-box optimization benchmarks, GPTOpt surpasses traditional optimizers, highlighting the capacity of LLMs for advanced numerical reasoning and introducing a flexible framework for global optimization without parameter tuning.