Bayesian optimisation is an adaptive sampling strategy that constructs a Gaussian process surrogate to efficiently search for the global minimum of a black-box computational model. Gaussian processes have limited applicability in engineering design problems, which typically involve many design variables but possess a low intrinsic dimensionality. Their scalability can be significantly improved by identifying a low-dimensional latent space whose variables serve as inputs to the Gaussian process. In this paper, we introduce a multi-view learning strategy that considers both the input design variables and the output data representing the objective or constraint functions to identify a low-dimensional latent subspace. Adopting a fully probabilistic viewpoint, we use probabilistic partial least squares (PPLS) to learn an orthogonal mapping from the design variables to the latent variables, using training data consisting of inputs and outputs of the black-box computational model. The latent variables and the posterior probability densities of the PPLS and Gaussian process models are determined sequentially and iteratively, with retraining at each adaptive sampling iteration. We compare the proposed probabilistic partial least squares Bayesian optimisation (PPLS-BO) strategy with its deterministic counterpart, partial least squares Bayesian optimisation (PLS-BO), and with classical Bayesian optimisation, demonstrating significantly improved convergence to the global minimum.
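To make the pipeline concrete, the sketch below illustrates the deterministic PLS-BO variant on a toy problem: a first partial-least-squares direction is re-estimated from the accumulated data at every iteration, the Gaussian process is fitted on the resulting one-dimensional latent variable, and the next sample is chosen by expected improvement over a random candidate set. This is a minimal illustration, not the paper's PPLS-BO method: the objective, the single latent dimension, the candidate-set acquisition, and all function names are assumptions, and the probabilistic (PPLS) treatment of the mapping is replaced by its deterministic counterpart.

```python
import numpy as np
from math import erf

rng = np.random.default_rng(0)

# Toy problem (assumption): 10 design variables, 1-D intrinsic structure.
d = 10
w_true = np.zeros(d)
w_true[0] = 1.0  # hidden low-dimensional direction, unknown to the optimiser

def objective(x):
    t = x @ w_true
    return np.sin(3.0 * t) + 0.1 * t**2

def pls_direction(X, y):
    # First PLS weight vector: dominant direction of cov(X, y).
    w = X.T @ (y - y.mean())
    return w / np.linalg.norm(w)

def gp_posterior(Z, y, Zs, ell=0.5, sf=1.0, jitter=1e-4):
    # GP regression with a squared-exponential kernel on the latent inputs.
    def k(a, b):
        return sf * np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell**2)
    K = k(Z, Z) + jitter * np.eye(len(Z))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    Ks = k(Zs, Z)
    mu = Ks @ alpha
    v = np.linalg.solve(L, Ks.T)
    var = np.maximum(sf - np.sum(v**2, axis=0), 1e-12)
    return mu, var

def expected_improvement(mu, var, best):
    # EI for minimisation: E[max(best - f, 0)] under the GP posterior.
    s = np.sqrt(var)
    z = (best - mu) / s
    Phi = 0.5 * (1.0 + np.vectorize(erf)(z / np.sqrt(2.0)))
    phi = np.exp(-0.5 * z**2) / np.sqrt(2.0 * np.pi)
    return (best - mu) * Phi + s * phi

# Adaptive sampling loop: the latent mapping and the GP are both
# retrained at every iteration, as the abstract describes.
X = rng.uniform(-1.0, 1.0, (8, d))
y = objective(X)
for _ in range(15):
    w = pls_direction(X, y)
    Z = X @ w
    cand = rng.uniform(-1.0, 1.0, (256, d))
    mu, var = gp_posterior(Z, y - y.mean(), cand @ w)
    ei = expected_improvement(mu + y.mean(), var, y.min())
    x_next = cand[np.argmax(ei)]
    X = np.vstack([X, x_next])
    y = np.append(y, objective(x_next))

print(f"best value found: {y.min():.4f}")
```

The design choice worth noting is that the mapping is learned from both views of the data (inputs X and outputs y), so the latent coordinate aligns with the direction along which the objective actually varies; a purely input-based reduction such as PCA would ignore y and could discard that direction.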