Sampling from unnormalized target distributions is a fundamental yet challenging task in machine learning and statistics. Existing sampling algorithms typically require many iterative steps to produce high-quality samples, leading to high computational costs. We introduce one-step diffusion samplers, which learn a step-conditioned ODE so that a single large step reproduces the trajectory of many small ones via a state-space consistency loss. We further show that standard ELBO estimates in diffusion samplers degrade in the few-step regime because common discrete integrators yield mismatched forward/backward transition kernels. Motivated by this analysis, we derive a deterministic-flow (DF) importance weight for ELBO estimation that requires no backward kernel. To calibrate the DF weight, we introduce a volume-consistency regularization that aligns the accumulated volume change along the flow across step resolutions. The resulting sampler therefore achieves both sampling and stable evidence estimation in only one or a few steps. Across challenging synthetic and Bayesian benchmarks, it attains competitive sample quality with orders of magnitude fewer network evaluations while maintaining robust ELBO estimates.
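To make the state-space consistency objective concrete, the following is a minimal PyTorch sketch, not the paper's implementation. It assumes a hypothetical step-conditioned velocity network `v_theta(x, t, dt)` and an Euler-style update; the exact parameterization and target construction in the paper may differ.

```python
import torch

def euler_step(v_theta, x, t, dt):
    # Step-conditioned ODE update: x_{t+dt} = x + dt * v_theta(x, t, dt).
    # Conditioning on dt is what lets one large step imitate many small ones.
    return x + dt * v_theta(x, t, dt)

def state_space_consistency_loss(v_theta, x, t, dt):
    # One coarse step of size 2*dt should land where two consecutive
    # fine steps of size dt land; the fine-grained target is detached
    # so the coarse step is pulled toward the finer trajectory.
    coarse = euler_step(v_theta, x, t, 2.0 * dt)
    with torch.no_grad():
        mid = euler_step(v_theta, x, t, dt)
        fine = euler_step(v_theta, mid, t + dt, dt)
    return ((coarse - fine) ** 2).mean()
```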
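The DF importance weight and the volume-consistency regularizer can be sketched in the same hypothetical setup. For a deterministic flow, the sampler's density follows from the change of variables, so the log-weight needs only the target log-density, the base log-density, and the accumulated log-volume change along the flow; the snippet below estimates the per-step divergence with a Hutchinson trace estimator and penalizes disagreement between coarse and fine step resolutions. All names and the Euler discretization are assumptions for illustration.

```python
import torch

def step_log_volume(v_theta, x, t, dt):
    # Hutchinson estimate of dt * div v_theta(x, t, dt): the log-volume
    # change of one Euler step (instantaneous change of variables).
    x = x.detach().requires_grad_(True)
    v = v_theta(x, t, dt)
    eps = torch.randn_like(x)
    vjp = torch.autograd.grad((v * eps).sum(), x, create_graph=True)[0]
    return dt * (vjp * eps).flatten(1).sum(dim=1)  # batch dim first

def df_log_weight(log_target_unnorm, log_base, x0, x1, accum_log_vol):
    # Deterministic-flow weight, no backward kernel:
    # log q(x1) = log p0(x0) - accumulated log-volume change, hence
    # log w = log pi~(x1) - log p0(x0) + accum_log_vol.
    return log_target_unnorm(x1) - log_base(x0) + accum_log_vol

def volume_consistency_loss(v_theta, x, t, dt):
    # Align the log-volume change of one coarse step with the sum over
    # the two fine steps it replaces (fine-resolution target detached).
    coarse = step_log_volume(v_theta, x, t, 2.0 * dt)
    mid = x + dt * v_theta(x, t, dt)
    fine = step_log_volume(v_theta, x, t, dt) \
         + step_log_volume(v_theta, mid, t + dt, dt)
    return ((coarse - fine.detach()) ** 2).mean()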