Kolmogorov-Arnold Networks have emerged as interpretable alternatives to traditional multi-layer perceptrons. However, standard implementations lack the principled uncertainty quantification capabilities essential for many scientific applications. We present a framework integrating sparse variational Gaussian process inference with the Kolmogorov-Arnold topology, enabling scalable Bayesian inference with computational complexity quasi-linear in sample size. Through analytic moment matching, we propagate uncertainty through deep additive structures while maintaining interpretability. Three example studies demonstrate the framework's ability to distinguish aleatoric from epistemic uncertainty: calibration of heteroscedastic measurement noise in fluid flow reconstruction, quantification of prediction-confidence degradation in multi-step forecasting of advection-diffusion dynamics, and out-of-distribution detection in convolutional autoencoders. These results suggest that Sparse Variational Gaussian Process Kolmogorov-Arnold Networks (SVGP KANs) are a promising architecture for uncertainty-aware learning in scientific machine learning.