Score-based Generative Models (SGMs) aim to sample from a target distribution by learning score functions from samples perturbed by Gaussian noise. Existing convergence bounds for SGMs in the Wasserstein-2 ($W_2$) distance rely on stringent assumptions about the data distribution. In this work, we present a novel framework for analyzing $W_2$-convergence of SGMs that significantly relaxes traditional assumptions such as log-concavity and score regularity. Leveraging the regularization properties of the Ornstein--Uhlenbeck (OU) process, we show that weak log-concavity of the data distribution evolves into log-concavity over time. This transition is quantified rigorously through a PDE-based analysis of the Hamilton--Jacobi--Bellman equation governing the log-density of the forward process. Moreover, we establish that the drift of the time-reversed OU process alternates between contractive and non-contractive regimes, mirroring the evolution of concavity. Our approach circumvents the need for stringent regularity conditions on the score function and its estimators, relying instead on milder, more practical assumptions. We demonstrate the wide applicability of this framework through explicit computations on Gaussian mixture models, illustrating its versatility and potential for broader classes of data distributions.
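For concreteness, the following display records the standard OU forward and reverse dynamics underlying this setting; the normalization (unit drift coefficient, diffusion $\sqrt{2}$) and the symbols $X_t$, $Y_t$, $p_t$ are our illustrative conventions and may differ from those adopted in the body of the paper. The forward process is
\[
dX_t \;=\; -X_t\,dt + \sqrt{2}\,dB_t, \qquad X_0 \sim p_0 := p_{\mathrm{data}}, \qquad t \in [0,T],
\]
with marginal densities $p_t$ on $\mathbb{R}^d$. Writing the Fokker--Planck equation for $p_t$ and dividing by $p_t$ yields the Hamilton--Jacobi--Bellman-type equation for the log-density,
\[
\partial_t \log p_t \;=\; \Delta \log p_t + \lvert \nabla \log p_t \rvert^2 + x \cdot \nabla \log p_t + d,
\]
whose regularizing effect on $\nabla^2 \log p_t$ is what drives the transition from weak log-concavity to log-concavity. The time reversal, run for $t \in [0,T]$,
\[
dY_t \;=\; \bigl(Y_t + 2\,\nabla \log p_{T-t}(Y_t)\bigr)\,dt + \sqrt{2}\,d\bar{B}_t, \qquad Y_0 \sim p_T,
\]
has drift Jacobian $I + 2\,\nabla^2 \log p_{T-t}$, which is contractive exactly when $\nabla^2 \log p_{T-t} \prec -\tfrac{1}{2} I$; this is the sense in which the reverse dynamics alternate between contractive and non-contractive regimes as the Hessian of the log-density evolves.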