This paper addresses the current lack of a unified formal framework in machine learning theory, as well as the absence of robust theoretical foundations for interpretability and ethical safety assurance. We first construct a formal information model, employing sets of well-formed formulas (WFFs) to explicitly define the ontological states and carrier mappings of the core components of machine learning. By introducing learnable and processable predicates, together with learning and processing functions, we analyze the logical inference and constraint rules underlying causal chains in models, thereby establishing the Machine Learning Theory Meta-Framework (MLT-MF). Building on this framework, we propose universal definitions of model interpretability and ethical safety, and rigorously prove and validate four key theorems: the equivalence between model interpretability and information existence, the constructive formulation of ethical safety assurance, and two types of total variation distance (TVD) upper bounds. This work overcomes the limitations of previous fragmented approaches, providing a unified theoretical foundation from an information science perspective for systematically addressing the critical challenges currently facing machine learning.
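For reference, the TVD upper bounds mentioned above concern the total variation distance between probability distributions, which for discrete distributions is half the L1 distance between their probability vectors. A minimal sketch (the function name `tvd` is illustrative, not from the paper):

```python
def tvd(p, q):
    """Total variation distance between two discrete distributions:
    TVD(P, Q) = (1/2) * sum_i |P(i) - Q(i)|."""
    if len(p) != len(q):
        raise ValueError("distributions must share the same support")
    return 0.5 * sum(abs(pi - qi) for pi, qi in zip(p, q))

# Identical distributions have TVD 0; disjoint ones have TVD 1.
print(tvd([0.5, 0.5], [0.9, 0.1]))
```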