Gaussian Process Latent Variable Models (GPLVMs) have become increasingly popular for unsupervised tasks such as dimensionality reduction and missing-data recovery, owing to their flexibility and non-linear nature. An importance-weighted version of the Bayesian GPLVM has been proposed to obtain a tighter variational bound. However, that approach is largely limited to simple data structures, because constructing an effective proposal distribution becomes challenging in high-dimensional spaces or on complex datasets. In this work, we propose an Annealed Importance Sampling (AIS) approach to address these issues. By transforming the posterior into a sequence of intermediate distributions via annealing, we combine the strengths of Sequential Monte Carlo samplers and variational inference (VI), exploring a wider range of posterior distributions while gradually approaching the target distribution. We further derive an efficient algorithm by reparameterizing all variables in the evidence lower bound (ELBO). Experiments on both toy and image datasets show that our method outperforms state-of-the-art methods, achieving tighter variational bounds, higher log-likelihoods, and more robust convergence.
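To make the AIS idea concrete, the following is a minimal, self-contained sketch of annealed importance sampling in one dimension. It is not the paper's GPLVM algorithm: the proposal `q`, target `p`, geometric annealing schedule, and single Metropolis transition per temperature are all illustrative assumptions. The intermediate densities interpolate as π_β(x) ∝ q(x)^(1−β) p(x)^β, and the accumulated importance weights estimate the log ratio of normalizing constants.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative 1-D example: anneal from a broad Gaussian proposal q
# to an unnormalized target p (a narrow, shifted Gaussian scaled by 4).
def log_q(x):  # proposal N(0, 3^2), unnormalized
    return -0.5 * (x / 3.0) ** 2

def log_p(x):  # target N(2, 0.5^2) times 4, unnormalized
    return -0.5 * ((x - 2.0) / 0.5) ** 2 + np.log(4.0)

def log_pi(x, beta):  # geometric bridge between q and p
    return (1.0 - beta) * log_q(x) + beta * log_p(x)

betas = np.linspace(0.0, 1.0, 50)   # annealing schedule beta_0=0 .. beta_T=1
n = 2000
x = rng.normal(0.0, 3.0, size=n)    # initial particles drawn from q
logw = np.zeros(n)                  # log importance weights

for b0, b1 in zip(betas[:-1], betas[1:]):
    # accumulate the incremental weight for moving from pi_{b0} to pi_{b1}
    logw += (b1 - b0) * (log_p(x) - log_q(x))
    # one Metropolis step leaving the intermediate density pi_{b1} invariant
    prop = x + rng.normal(0.0, 0.5, size=n)
    accept = np.log(rng.uniform(size=n)) < log_pi(prop, b1) - log_pi(x, b1)
    x = np.where(accept, prop, x)

# log of the estimated ratio of normalizers Z_p / Z_q
log_Z_est = np.logaddexp.reduce(logw) - np.log(n)
```

Here the true log ratio is log(Z_p/Z_q) = log((4 · 0.5√(2π)) / (3√(2π))) = log(2/3), so `log_Z_est` should land near −0.405; with a single-temperature importance sampler the same proposal would cover the narrow target far less efficiently, which is the gap annealing closes.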