Bayesian methods are particularly effective for addressing inverse problems because they naturally quantify the uncertainties inherent in the inference process. However, applying them with costly forward models is challenging, especially for non-differentiable models, where the absence of gradient information for the likelihood model drives up computational cost. To tackle this issue, we develop a novel Bayesian inference approach based on black box variational inference that uses importance sampling to reuse existing simulation-model calls in the gradient estimation of the variational objective, without relying on forward-model gradients. The novelty lies in a new batch-sequential sampling procedure, which requests new model evaluations only when the currently available evaluations fail to yield a suitable approximation of the objective gradient. The resulting approach reduces computational cost by updating the variational parameters without new model evaluations whenever possible, while adaptively increasing the number of model calls per iteration when needed. Combined with its black box nature, this makes the approach suitable for inverse problems involving demanding physics-based models that lack gradients. We demonstrate the efficiency gains of the proposed method compared to its baseline version, sequential Monte Carlo, and Markov chain Monte Carlo in diverse benchmarks, ranging from density matching to the Bayesian calibration of a nonlinear electro-chemo-mechanical model for solid-state batteries.
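The core mechanism described above, reusing cached simulator calls through self-normalized importance weights in a score-function (gradient-free) estimate of the variational objective gradient, and triggering new evaluations only when the reused samples degenerate, can be illustrated with a minimal sketch. All names here (`forward_model`, `ess_frac`, the cache layout) and the one-dimensional Gaussian variational family are illustrative assumptions, not the authors' implementation; the effective-sample-size refresh criterion is one plausible choice for the "suitable approximation" test.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_gauss(x, mu, sigma):
    return -0.5 * ((x - mu) / sigma) ** 2 - np.log(sigma) - 0.5 * np.log(2 * np.pi)

def forward_model(theta):
    # Stand-in for an expensive, non-differentiable simulator.
    return theta ** 2

def log_posterior_unnorm(theta, y_obs, noise=0.5):
    # Unnormalized log posterior: Gaussian likelihood around the model
    # output plus a broad Gaussian prior (both illustrative choices).
    return log_gauss(y_obs, forward_model(theta), noise) + log_gauss(theta, 0.0, 10.0)

def elbo_grad(mu, sigma, cache, y_obs, ess_frac=0.5, batch=8):
    """Score-function ELBO gradient w.r.t. (mu, sigma), reusing cached
    (theta, log_post, log_q_at_draw_time) triples via importance weights."""
    while True:
        if cache:
            thetas = np.array([c[0] for c in cache])
            log_post = np.array([c[1] for c in cache])
            log_q_old = np.array([c[2] for c in cache])
            log_q_new = log_gauss(thetas, mu, sigma)
            # Self-normalized importance weights from the densities the
            # samples were drawn under to the *current* variational density.
            delta = log_q_new - log_q_old
            w = np.exp(delta - delta.max())
            w /= w.sum()
            # Effective sample size decides if the reused samples suffice.
            if 1.0 / np.sum(w ** 2) >= ess_frac * batch:
                break
        # Cached samples missing or degenerate: pay for new simulator calls.
        for th in rng.normal(mu, sigma, size=batch):
            cache.append((th, log_posterior_unnorm(th, y_obs), log_gauss(th, mu, sigma)))

    # REINFORCE-style estimator: grad ELBO = E_q[grad log q * (log p - log q)],
    # with a weighted baseline subtracted for variance reduction.
    score_mu = (thetas - mu) / sigma ** 2
    score_sigma = ((thetas - mu) ** 2 - sigma ** 2) / sigma ** 3
    f = log_post - log_q_new
    f = f - np.sum(w * f)
    return np.sum(w * score_mu * f), np.sum(w * score_sigma * f)

# Usage: crude stochastic ascent; most iterations reuse the cache and
# cost no new simulator calls.
cache, mu, sigma, y = [], 1.0, 1.0, 2.0
for _ in range(200):
    g_mu, g_sigma = elbo_grad(mu, sigma, cache, y)
    mu += 0.01 * g_mu
    sigma = max(1e-3, sigma + 0.01 * g_sigma)
print(f"q(theta) ~ N({mu:.3f}, {sigma:.3f}), simulator calls: {len(cache)}")
```

Because small parameter updates barely shift the variational density, the effective sample size decays slowly, so many consecutive gradient steps are served entirely from the cache; the simulator-call count grows only when the density moves far from where the cached samples were drawn.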