The Quantum Approximate Optimization Algorithm (QAOA) is a leading approach for solving combinatorial optimization problems on near-term quantum processors. However, finding good variational parameters remains a significant challenge due to the non-convex energy landscape, often resulting in slow convergence and poor solution quality. In this work, we propose a quantum meta-learning framework that trains advanced quantum sequence models to generate effective parameter initialization policies. We investigate four classical and quantum sequence models, including the Quantum Kernel-based Long Short-Term Memory (QK-LSTM), as learned optimizers in a "learning to learn" paradigm. Our numerical experiments on the Max-Cut problem demonstrate that the QK-LSTM optimizer achieves superior performance, obtaining the highest approximation ratios and the fastest convergence across all tested problem sizes (n = 10 to 13). Crucially, the QK-LSTM achieves perfect parameter transferability by synthesizing a single, fixed set of near-optimal parameters, sustaining its acceleration of convergence even when generalizing to larger problems. This capability stems from the compactness and expressive power of the quantum kernel architecture. With only 43 trainable parameters, the QK-LSTM substantially outperforms the classical LSTM (56 parameters) and the other quantum sequence models, establishing a robust pathway toward highly efficient parameter initialization for variational quantum algorithms in the NISQ era.
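To make the setup concrete, the sketch below implements a minimal version of this "learning to learn" loop, assuming PennyLane for the QAOA circuit and PyTorch for the sequence model. A plain classical LSTM stands in for the QK-LSTM, whose quantum kernel cell is not reproduced here; the 4-node ring graph, depth p = 1, hidden size, learning rate, and unroll horizon are illustrative assumptions rather than values from the paper.

```python
# Minimal "learning to learn" sketch: an LSTM observes the QAOA energy and
# proposes the next set of angles (gamma, beta); it is trained end-to-end to
# minimize the energies along the unrolled trajectory. All sizes are toy values.
import torch
import pennylane as qml

n, p = 4, 1                                       # qubits and QAOA depth (toy)
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]          # ring-graph Max-Cut instance
dev = qml.device("default.qubit", wires=n)

# Minimizing <sum Z_i Z_j> maximizes the cut, since cut = sum (1 - Z_i Z_j) / 2.
cost_h = qml.Hamiltonian(
    [1.0] * len(edges),
    [qml.PauliZ(i) @ qml.PauliZ(j) for i, j in edges],
)

@qml.qnode(dev, interface="torch")
def qaoa_energy(params):
    gammas, betas = params[:p], params[p:]
    for w in range(n):                            # prepare |+>^n
        qml.Hadamard(wires=w)
    for layer in range(p):
        for i, j in edges:                        # cost unitary exp(-i gamma C)
            qml.CNOT(wires=[i, j])
            qml.RZ(2 * gammas[layer], wires=j)
            qml.CNOT(wires=[i, j])
        for w in range(n):                        # mixer unitary exp(-i beta B)
            qml.RX(2 * betas[layer], wires=w)
    return qml.expval(cost_h)

# Classical LSTM as a stand-in learned optimizer (the paper's QK-LSTM replaces
# this cell with a quantum-kernel-based one).
lstm = torch.nn.LSTM(input_size=1, hidden_size=8)
head = torch.nn.Linear(8, 2 * p)                  # emits (gamma, beta)
opt = torch.optim.Adam(list(lstm.parameters()) + list(head.parameters()), lr=0.05)

for meta_step in range(30):                       # outer meta-training loop
    hidden, energy, loss = None, torch.zeros(1), torch.zeros(1)
    for t in range(5):                            # unrolled inner horizon
        out, hidden = lstm(energy.detach().view(1, 1, 1), hidden)
        params = head(out.view(-1))               # proposed QAOA angles
        energy = qaoa_energy(params).reshape(1)
        loss = loss + energy                      # summed trajectory energy
    opt.zero_grad()
    loss.backward()
    opt.step()

print("final energy:", float(energy))             # -4.0 is optimal for this ring
```

In the actual framework, the angles emitted by the trained sequence model serve as the parameter initialization, which, per the abstract, transfers as a single fixed set to larger problem instances.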