An inherent challenge in computing fully-explicit generalization bounds for transformers is obtaining covering number estimates for the given transformer class $T$. Crude estimates rely on a uniform upper bound on the local-Lipschitz constants of the transformers in $T$, while finer estimates require an analysis of their higher-order partial derivatives. Unfortunately, such precise higher-order derivative estimates for (realistic) transformer models are not currently available in the literature, as they are combinatorially delicate owing to the intricate compositional structure of transformer blocks. This paper fills that gap by precisely estimating the derivatives, of every order, of the transformer model. We consider realistic transformers with multiple (non-linearized) attention heads per block and layer normalization. We obtain fully-explicit estimates of all constants in terms of the number of attention heads, the depth and width of each transformer block, and the number of normalization layers. Further, we explicitly analyze the impact of several standard activation function choices (e.g., SWISH and GeLU). As an application, we obtain explicit pathwise generalization bounds, valid at a fixed future time horizon, for transformers trained on a single trajectory of an exponentially ergodic Markov process. We conclude that real-world transformers can learn from $N$ (non-i.i.d.) samples of a single Markov process trajectory at a rate of $O(\operatorname{polylog}(N)/\sqrt{N})$.
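For orientation only, a bound of the advertised type can be written schematically as follows; the empirical risk $\widehat{\mathcal{R}}_N$ along the trajectory, the risk $\mathcal{R}$ at the fixed horizon, and the constant $C(T)$ are illustrative placeholders introduced here, not the paper's exact notation or statement:
$$
\sup_{f \in T}\, \bigl| \mathcal{R}(f) - \widehat{\mathcal{R}}_N(f) \bigr| \;\lesssim\; C(T)\,\frac{\operatorname{polylog}(N)}{\sqrt{N}} \quad \text{with high probability,}
$$
where $C(T)$ is fully explicit in the number of attention heads, the depth and width of each transformer block, and the number of normalization layers, as described above.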