The manifold hypothesis posits that high-dimensional data typically resides on low-dimensional subspaces. In this paper, we adopt the manifold hypothesis to investigate graph-based semi-supervised learning methods. In particular, we examine Laplace learning in the Wasserstein space, extending the classical notion of graph-based semi-supervised learning algorithms from finite-dimensional Euclidean spaces to an infinite-dimensional setting. To achieve this, we prove variational convergence of a discrete graph p-Dirichlet energy to its continuum counterpart. In addition, we characterize the Laplace-Beltrami operator on a submanifold of the Wasserstein space. Finally, we validate the proposed theoretical framework through numerical experiments on benchmark datasets, demonstrating consistent classification performance in high-dimensional settings.
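To make the classical setting concrete, the following is a minimal sketch of graph-based Laplace learning (harmonic label propagation) on a toy Euclidean point cloud. It is illustrative only: the function name, the Gaussian kNN weights, and all parameters are assumptions for this sketch, not the construction used in the paper, which works with graphs built over the Wasserstein space.

```python
import numpy as np

def laplace_learning(X, labeled_idx, labels, k=5):
    """Propagate labels by solving a discrete Dirichlet problem on a kNN graph."""
    n = len(X)
    # Pairwise distances and a symmetric k-nearest-neighbor weight matrix.
    d = np.linalg.norm(X[:, None] - X[None, :], axis=2)
    W = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(d[i])[1:k + 1]          # skip self at position 0
        W[i, nbrs] = np.exp(-d[i, nbrs] ** 2)     # Gaussian edge weights (assumed kernel)
    W = np.maximum(W, W.T)                        # symmetrize
    L = np.diag(W.sum(1)) - W                     # unnormalized graph Laplacian
    # Solve for u harmonic on unlabeled nodes with one-hot boundary data
    # on labeled nodes, one column per class.
    classes = np.unique(labels)
    u = np.zeros((n, len(classes)))
    U = np.setdiff1d(np.arange(n), labeled_idx)   # unlabeled indices
    for c, cls in enumerate(classes):
        g = (labels == cls).astype(float)
        u[labeled_idx, c] = g
        rhs = -L[np.ix_(U, labeled_idx)] @ g
        u[U, c] = np.linalg.solve(L[np.ix_(U, U)], rhs)
    return classes[u.argmax(1)]

# Two well-separated clusters, one labeled point each.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(3, 0.3, (20, 2))])
y = laplace_learning(X, np.array([0, 20]), np.array([0, 1]))
```

The Dirichlet-energy minimizer is computed by restricting the Laplacian to the unlabeled block and solving a linear system; this block is invertible whenever every connected component of the graph contains at least one labeled node.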