Using categorical properties of probabilistic morphisms, we prove that sequential Bayesian inversions in Bayesian supervised learning models for conditionally independent (but possibly not identically distributed) data, proposed by L\^e in \cite{Le2025}, coincide with batch Bayesian inversions. Based on this result, we provide a recursive formula for posterior predictive distributions in Bayesian supervised learning. We illustrate our results with Gaussian process regression. For Polish spaces $\mathcal{Y}$ and arbitrary sets $\mathcal{X}$, we define probability measures on $\mathcal{P} (\mathcal{Y})^{\mathcal X}$ using a projective system generated by $\mathcal{Y}$ and $\mathcal{X}$. This generalizes a result by Orbanz \cite{Orbanz2011} for the case where $\mathcal{X}$ consists of a single point. We revisit MacEachern's Dependent Dirichlet Processes (DDP) taking values in the space $\mathcal{P} (\mathcal{Y})$ of all probability measures on a measurable subset $\mathcal{Y}$ of $\mathbf{R}^n$, considered by Barrientos-Jara-Quintana \cite{BJQ2012}. We indicate how to compute posterior distributions and posterior predictive distributions of Bayesian supervised learning models with DDP priors.