Existing multilevel quasi-Monte Carlo (MLQMC) methods often rely on multiple independent randomizations of a low-discrepancy (LD) sequence to estimate statistical errors on each level. While this approach is standard, it can be less efficient than simply increasing the number of points from a single LD sequence. However, a single LD sequence does not permit statistical error estimation in the existing framework. We propose to recast the MLQMC problem in a Bayesian cubature framework, which uses a single LD sequence and quantifies numerical error through the posterior variance of a Gaussian process (GP) model. When paired with certain LD sequences, GP regression and hyperparameter optimization can be carried out at only $\mathcal{O}(n \log n)$ cost, where $n$ is the number of samples. Building on the adaptive sample allocation used in traditional MLQMC, where the number of samples is doubled on the level with the greatest expected benefit, we introduce a new Bayesian utility function that balances the computational cost of doubling against the anticipated reduction in posterior uncertainty. We also propose a new digitally-shift-invariant (DSI) kernel of adaptive smoothness, which combines multiple higher-order DSI kernels through a weighted sum over smoothness parameters, for use with fast digital net GPs. A series of numerical experiments illustrates the performance of our fast Bayesian MLQMC method and its error estimates for both single-level problems and multilevel problems with a fixed number of levels. The Bayesian error estimates obtained using digital nets are found to be reliable, although in some cases mildly conservative.
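As a point of reference, the Bayesian cubature quantities mentioned above take the following generic form (a standard formulation, not necessarily the exact construction of this work); here $f \sim \mathrm{GP}(0, k)$ is the GP model of the integrand, $I = \int f(\boldsymbol{x})\,\mathrm{d}\boldsymbol{x}$ is the target integral, $\boldsymbol{y} = (f(\boldsymbol{x}_1), \ldots, f(\boldsymbol{x}_n))^\top$ collects the samples at the LD points, $K_{ij} = k(\boldsymbol{x}_i, \boldsymbol{x}_j)$ is the Gram matrix, $c_i = \int k(\boldsymbol{x}, \boldsymbol{x}_i)\,\mathrm{d}\boldsymbol{x}$, and $c_0 = \iint k(\boldsymbol{x}, \boldsymbol{x}')\,\mathrm{d}\boldsymbol{x}\,\mathrm{d}\boldsymbol{x}'$:
\begin{align*}
  \mathbb{E}[I \mid \boldsymbol{y}] &= \boldsymbol{c}^\top K^{-1} \boldsymbol{y},
  &
  \mathrm{Var}[I \mid \boldsymbol{y}] &= c_0 - \boldsymbol{c}^\top K^{-1} \boldsymbol{c}.
\end{align*}
The posterior variance plays the role of the numerical error estimate. When the kernel is shift-invariant (for lattice points) or digitally shift-invariant (for digital nets) and the nodes form a matching LD point set, the Gram matrix $K$ is typically diagonalizable by a fast transform, which is the structure exploited to reach the $\mathcal{O}(n \log n)$ cost for regression and hyperparameter optimization.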