We present a novel approach to neural representation learning that incorporates algebraic constraints inspired by Bhargava cubes from number theory. Traditional deep learning methods learn representations in unstructured latent spaces that lack interpretability and mathematical consistency. Our framework maps input data to a constrained 3-dimensional latent space in which embeddings are regularized to satisfy learned quadratic relationships derived from Bhargava's combinatorial structures. The architecture employs a differentiable auxiliary loss function that operates independently of the classification objective and guides the model toward mathematically structured representations. We evaluate on MNIST, achieving 99.46% accuracy while producing interpretable 3D embeddings that cluster naturally by digit class and satisfy the learned quadratic constraints. Unlike existing manifold learning approaches that require explicit geometric supervision, our method imposes weak algebraic priors through differentiable constraints, ensuring compatibility with standard optimization. This represents the first application of number-theoretic constructs to neural representation learning, establishing a foundation for incorporating structured mathematical priors into neural networks.
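The following is a minimal sketch (PyTorch) of how such a differentiable auxiliary quadratic-constraint loss could be attached to 3D embeddings. The abstract does not specify the parameterization, so the learned quadratic form Q(z) = z^T A z + b^T z + c, the class name `QuadraticConstraintLoss`, and the weighting coefficient `lambda_q` are assumptions for illustration, not the paper's actual implementation.

```python
# Hypothetical sketch: auxiliary quadratic-constraint loss on 3D embeddings.
# The parameterization Q(z) = z^T A z + b^T z + c and all names below are assumed.
import torch
import torch.nn as nn

class QuadraticConstraintLoss(nn.Module):
    """Penalize 3D embeddings for deviating from a learned quadratic surface Q(z) = 0."""
    def __init__(self, dim: int = 3):
        super().__init__()
        self.A = nn.Parameter(torch.randn(dim, dim) * 0.01)  # learned quadratic coefficients
        self.b = nn.Parameter(torch.zeros(dim))               # learned linear coefficients
        self.c = nn.Parameter(torch.zeros(1))                  # learned constant term

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        A_sym = 0.5 * (self.A + self.A.T)                      # keep the quadratic form symmetric
        quad = torch.einsum("bi,ij,bj->b", z, A_sym, z)        # z^T A z per sample
        residual = quad + z @ self.b + self.c                  # Q(z) per sample
        return residual.pow(2).mean()                          # drive Q(z) toward zero

# Assumed usage: add the auxiliary term to a standard classification objective.
encoder = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 128), nn.ReLU(), nn.Linear(128, 3))
classifier = nn.Linear(3, 10)
constraint = QuadraticConstraintLoss(dim=3)
ce = nn.CrossEntropyLoss()
lambda_q = 0.1                                                 # assumed weighting coefficient

x = torch.randn(32, 1, 28, 28)                                 # dummy MNIST-sized batch
y = torch.randint(0, 10, (32,))
z = encoder(x)                                                 # 3D embeddings
loss = ce(classifier(z), y) + lambda_q * constraint(z)
loss.backward()
```

Because the constraint is expressed as a differentiable penalty rather than a hard projection, it composes with any standard optimizer, which is consistent with the compatibility claim in the abstract.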