Time-Varying Bayesian Optimization (TVBO) is the go-to framework for optimizing a time-varying black-box objective function that may be noisy and expensive to evaluate, yet its excellent empirical performance remains poorly understood theoretically. Is it possible for the instantaneous regret of a TVBO algorithm to vanish asymptotically, and if so, under what conditions? We answer this question by providing upper bounds and algorithm-independent lower bounds on the cumulative regret of TVBO algorithms. In doing so, we provide important insights into the TVBO framework and derive sufficient conditions for a TVBO algorithm to have the no-regret property. To the best of our knowledge, our analysis is the first to cover all major classes of stationary kernel functions used in practice.
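For reference, a standard formalization of the regret notions above reads as follows; the symbols $\mathcal{X}$, $f_t$, $x_t$, $r_t$, and $R_T$ are the conventional TVBO notation and are assumed here rather than taken from the abstract:

\[
r_t = \max_{x \in \mathcal{X}} f_t(x) - f_t(x_t) \quad \text{(instantaneous regret at round } t\text{)},
\qquad
R_T = \sum_{t=1}^{T} r_t \quad \text{(cumulative regret)}.
\]

Under this convention, an algorithm is no-regret when $R_T / T \to 0$ as $T \to \infty$, i.e. when the instantaneous regret vanishes on average.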