Kernel-based learning methods such as Kernel Logistic Regression (KLR) can dramatically increase the storage capacity of Hopfield networks, but the principles governing their performance and stability remain largely uncharacterized. This paper presents a comprehensive quantitative analysis of the attractor landscape in KLR-trained networks to establish a solid foundation for their design and application. Through extensive, statistically validated simulations, we address critical questions of generality, scalability, and robustness. Our comparative analysis reveals that KLR and Kernel Ridge Regression (KRR) exhibit similarly high storage capacities and clean attractor landscapes, suggesting that these are general properties of kernel regression methods, with KRR being computationally much faster. We uncover a non-trivial, size-dependent scaling law for the kernel width $\gamma$: optimal capacity requires $\gamma$ to be scaled with network size $N$ such that $\gamma \times N$ increases with $N$. This implies that larger networks require more localized kernels (each pattern's influence confined to a smaller region of state space) to manage inter-pattern interference. Under this optimized scaling, we provide definitive evidence that the storage capacity scales linearly with network size ($P \propto N$). Furthermore, our sensitivity analysis shows that performance is remarkably robust to the choice of the regularization parameter $\lambda$. Collectively, these findings provide a clear set of empirical principles for designing high-capacity, robust associative memories and clarify the mechanisms by which kernel methods overcome the classical limitations of Hopfield-type models.
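To make the setup concrete, the sketch below stores $P$ bipolar patterns in an $N$-unit network by training one kernel logistic regression readout per neuron and recalls by iterating the learned update map. This is a minimal sketch under assumptions the abstract does not fix: an RBF kernel, synchronous recall dynamics, and an approximation of KLR that fits scikit-learn's `LogisticRegression` on the Gram matrix (penalizing the dual coefficients directly rather than the RKHS norm). All parameter values here are illustrative, not the paper's.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(0)
N, P = 100, 20              # network size and number of stored patterns (illustrative)
gamma = 5.0 / N             # RBF width; per the scaling law, gamma * N should grow
                            # with N, so treat this constant as a placeholder
lam = 1e-3                  # regularization strength lambda (C = 1/lambda in sklearn)

X = rng.choice([-1.0, 1.0], size=(P, N))   # random bipolar patterns to store
K = rbf_kernel(X, X, gamma=gamma)          # P x P Gram matrix over stored patterns

# One logistic readout per neuron, fit in the span of the kernel features.
# Note: L2-penalizing the dual coefficients is an approximation of true KLR,
# which penalizes the RKHS norm alpha^T K alpha.
readouts = []
for i in range(N):
    y = (X[:, i] > 0).astype(int)
    if y.min() == y.max():                 # bit i is constant across all patterns
        readouts.append(("const", X[0, i]))
    else:
        clf = LogisticRegression(C=1.0 / lam, max_iter=1000)
        clf.fit(K, y)
        readouts.append(("clf", clf))

def recall(s, steps=10):
    """Synchronously iterate the learned update map from state s."""
    for _ in range(steps):
        k = rbf_kernel(s[None, :], X, gamma=gamma)      # 1 x P kernel features
        s = np.array([c if kind == "const" else 2.0 * c.predict(k)[0] - 1.0
                      for kind, c in readouts])
    return s

# Corrupt a stored pattern and check how much of it is recovered.
probe = X[0].copy()
probe[rng.choice(N, size=10, replace=False)] *= -1.0
print("recalled bits:", np.mean(recall(probe) == X[0]))
```

Sweeping $P$ upward at fixed $N$ in such a sketch, and repeating across $N$ while letting $\gamma \times N$ grow, is the kind of experiment summarized by the capacity and scaling claims above.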