We present a simple and scalable implementation of next-generation reservoir computing (NGRC) for modeling dynamical systems from time-series data. The method uses a pseudorandom nonlinear projection of time-delay embedded inputs, allowing the feature-space dimension to be chosen independently of the observation size and offering a flexible alternative to polynomial-based NGRC projections. We demonstrate the approach on benchmark tasks, including attractor reconstruction and bifurcation diagram estimation, using partial and noisy measurements. We further show that small amounts of measurement noise during training act as an effective regularizer, improving long-term autonomous stability compared to standard regression alone. Across all tests, the models remain stable over long rollouts and generalize beyond the training data. The framework offers explicit control of the system state during prediction, and these properties make NGRC a natural candidate for applications such as surrogate modeling and digital twins.
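The pipeline described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' implementation: the delay length `k`, feature count `D`, `tanh` nonlinearity, ridge parameter `lam`, and the noisy-sine training signal are all assumptions chosen for demonstration. It shows the three ingredients the abstract names: a time-delay embedding, a pseudorandom nonlinear projection whose dimension `D` is chosen independently of the observation size, and a linear readout trained by ridge regression, followed by an autonomous rollout.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy scalar observations: a sine with small measurement noise, standing in
# for the partial, noisy measurements discussed above. The training noise
# itself acts as a mild regularizer for the learned map.
T = 2000
t = np.linspace(0, 40 * np.pi, T)
u = np.sin(t) + 0.05 * rng.standard_normal(T)

# Time-delay embedding: each row stacks k consecutive past observations.
k = 5  # assumed embedding length
X = np.stack([u[i : T - k + i] for i in range(k)], axis=1)  # shape (T-k, k)
y = u[k:]  # one-step-ahead targets

# Pseudorandom nonlinear projection to D features; D is free of the
# observation dimension, unlike a fixed polynomial feature basis.
D = 300  # assumed feature count
W = rng.standard_normal((k, D)) / np.sqrt(k)  # fixed random weights
b = rng.uniform(-np.pi, np.pi, size=D)        # fixed random biases
Phi = np.tanh(X @ W + b)                      # feature matrix, shape (T-k, D)

# Linear readout via ridge regression (closed form).
lam = 1e-6  # assumed ridge strength
w_out = np.linalg.solve(Phi.T @ Phi + lam * np.eye(D), Phi.T @ y)

train_rmse = np.sqrt(np.mean((Phi @ w_out - y) ** 2))

# Autonomous rollout: feed each prediction back into the delay window.
# The window is the explicit system state, which can be set or reset at will.
window = list(u[-k:])
preds = []
for _ in range(100):
    y_next = float(np.tanh(np.array(window) @ W + b) @ w_out)
    preds.append(y_next)
    window = window[1:] + [y_next]
```

Because the readout is the only trained component, fitting reduces to a single linear solve, and the delay window doubles as an explicitly controllable state during prediction.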