Artificial intelligence in dynamic, real-world environments requires the capacity for continual learning. However, standard deep learning suffers from a fundamental issue: loss of plasticity, in which networks gradually lose their ability to learn from new data. Here we show that quantum learning models naturally overcome this limitation, preserving plasticity over long timescales. We demonstrate this advantage systematically across a broad spectrum of tasks spanning multiple learning paradigms, including supervised and reinforcement learning, and diverse data modalities, from classical high-dimensional images to quantum-native datasets. Whereas classical models exhibit performance degradation correlated with unbounded weight and gradient growth, quantum neural networks maintain consistent learning capabilities regardless of the data or task. We identify the origin of this advantage in the intrinsic physical constraints of quantum models: unlike classical networks, in which unbounded weight growth leads to loss-landscape ruggedness or saturation, the unitary constraints of quantum circuits confine the optimization to a compact manifold. Our results suggest that the utility of quantum computing in machine learning extends beyond potential speedups, offering a robust pathway toward adaptive artificial intelligence and lifelong learners.
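The compactness argument can be illustrated with a minimal toy model, not taken from the paper: a single-qubit circuit with one rotation parameter. Because the rotation is unitary, the model's output is bounded and periodic in the parameter, so arbitrarily large updates cannot push the model off a compact manifold; a classical linear weight, by contrast, produces unbounded outputs as its norm grows. The function names here (`quantum_model`, `classical_model`) are illustrative and hypothetical.

```python
import numpy as np

def ry(theta):
    # Single-qubit rotation RY(theta): a unitary matrix, hence norm-preserving.
    c, s = np.cos(theta / 2.0), np.sin(theta / 2.0)
    return np.array([[c, -s], [s, c]])

def quantum_model(theta):
    # Expectation <psi|Z|psi> for |psi> = RY(theta)|0>.
    # Equals cos(theta): bounded in [-1, 1] and 2*pi-periodic,
    # so the effective parameter space is the compact manifold S^1.
    psi = ry(theta) @ np.array([1.0, 0.0])
    return psi[0] ** 2 - psi[1] ** 2

def classical_model(w, x=1.0):
    # Unconstrained linear model: output grows without bound with |w|,
    # the mechanism linked to plasticity loss in the classical case.
    return w * x

# Arbitrarily large parameter values leave the quantum output bounded...
for theta in (0.5, 0.5 + 2 * np.pi, 1000.0):
    assert abs(quantum_model(theta)) <= 1.0

# ...while the classical output tracks the weight norm unboundedly.
print([classical_model(w) for w in (0.5, 10.0, 1000.0)])
```

The toy model makes the contrast concrete: no sequence of gradient steps can move `quantum_model` outside `[-1, 1]`, whereas `classical_model` has no such intrinsic bound.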