Kernel-based learning methods such as Kernel Logistic Regression (KLR) can substantially increase the storage capacity of Hopfield networks, but the principles governing their performance and stability remain largely uncharacterized. This paper presents a comprehensive quantitative analysis of the attractor landscape in KLR-trained networks to establish a solid foundation for their design and application. Through extensive, statistically validated simulations, we address critical questions of generality, scalability, and robustness. Our comparative analysis shows that KLR and Kernel Ridge Regression (KRR) exhibit similarly high storage capacities and clean attractor landscapes under typical operating conditions, suggesting that this behavior is a general property of kernel regression methods, although KRR is computationally much faster. We identify a non-trivial, scale-dependent law for the kernel width $\gamma$, demonstrating that optimal capacity requires $\gamma$ to be scaled such that $\gamma N$ increases with network size $N$. This finding implies that larger networks require more localized kernels, in which each pattern's influence is more spatially confined, to mitigate inter-pattern interference. Under this optimized scaling, we provide clear evidence that storage capacity scales linearly with network size~($P \propto N$). Furthermore, our sensitivity analysis shows that performance is remarkably robust with respect to the choice of the regularization parameter $\lambda$. Collectively, these findings provide a concise set of empirical principles for designing high-capacity and robust associative memories and clarify the mechanisms that enable kernel methods to overcome the classical limitations of Hopfield-type models.
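To make the setting concrete, the sketch below illustrates one way a kernel-regression-trained associative memory of this kind can be set up and probed. It is not the authors' implementation: for brevity it uses the closed-form KRR variant (which the abstract reports as behaving similarly to KLR), stores bipolar patterns as approximate fixed points of a learned state-update map, and all names, sizes, and parameter values (e.g. `fit_krr_memory`, `N`, `P`, `gamma`, `lam`) are illustrative assumptions rather than the paper's settings.

```python
import numpy as np

def rbf_kernel(X, Y, gamma):
    """Gaussian (RBF) kernel matrix exp(-gamma * ||x - y||^2) between rows of X and Y."""
    sq = np.sum(X**2, axis=1)[:, None] + np.sum(Y**2, axis=1)[None, :] - 2.0 * X @ Y.T
    return np.exp(-gamma * sq)

def fit_krr_memory(patterns, gamma, lam):
    """Kernel ridge 'readout': learn to map each stored pattern back onto itself.

    patterns: (P, N) array of +/-1 bipolar patterns.
    Returns dual coefficients alpha of shape (P, N).
    """
    n_stored = patterns.shape[0]
    K = rbf_kernel(patterns, patterns, gamma)
    # Closed-form KRR solution; lam is the regularization parameter (lambda in the text).
    return np.linalg.solve(K + lam * np.eye(n_stored), patterns)

def recall(state, patterns, alpha, gamma, n_steps=20):
    """Iterate the learned map from a noisy probe so it can settle on a stored pattern."""
    s = state.astype(float).copy()
    for _ in range(n_steps):
        k = rbf_kernel(s[None, :], patterns, gamma)   # (1, P) similarities to stored patterns
        s = np.sign(k @ alpha).ravel()                # synchronous update of all N units
    return s

# Toy usage; sizes and hyperparameters are placeholders, not the paper's values.
rng = np.random.default_rng(0)
N, P = 100, 30
patterns = rng.choice([-1.0, 1.0], size=(P, N))
gamma, lam = 0.5 / N, 1e-3
alpha = fit_krr_memory(patterns, gamma, lam)

probe = patterns[0].copy()
flip = rng.choice(N, size=10, replace=False)          # corrupt 10 of the 100 bits
probe[flip] *= -1
print(np.mean(recall(probe, patterns, alpha, gamma) == patterns[0]))  # fraction of bits recovered
```

In this parameterization, a larger `gamma` means a more localized kernel, so the abstract's finding that $\gamma N$ should grow with $N$ corresponds to each stored pattern exerting a more spatially confined influence as the network scales.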