Focusing on implicit neural representations, we present a novel in situ training protocol that employs limited-memory buffers of full and sketched data samples, where the sketched data are leveraged to prevent catastrophic forgetting. We present the theoretical motivation for using sketching as a regularizer via a simple result informed by the Johnson-Lindenstrauss lemma. While our methods may be of wider interest in the field of continual learning, we specifically target in situ neural compression using implicit neural representation-based hypernetworks. We evaluate our method on a variety of complex simulation data in two and three dimensions, over long time horizons, and across unstructured grids and non-Cartesian geometries. On these tasks, we show strong reconstruction performance at high compression rates. Most importantly, we demonstrate that sketching enables the presented in situ scheme to approximately match the performance of the equivalent offline method.
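As a minimal, hypothetical illustration of the kind of Johnson-Lindenstrauss-style sketching the abstract alludes to (not the paper's implementation), the snippet below compresses a flattened data sample with a scaled Gaussian random projection; the function name, buffer usage, and dimensions are illustrative assumptions.

```python
import numpy as np

def jl_sketch(x, k, seed=None):
    """Project a flattened sample x (length d) to k dimensions with a
    scaled Gaussian random matrix. By the Johnson-Lindenstrauss lemma,
    pairwise distances among n stored samples are approximately preserved
    with high probability when k grows like log(n) / eps^2."""
    rng = np.random.default_rng(seed)
    d = x.shape[0]
    S = rng.standard_normal((k, d)) / np.sqrt(k)  # sketch operator
    return S @ x

# Hypothetical usage: store sketched past samples in a small replay buffer
# alongside a few full samples, and replay the sketches as a regularizer.
sample = np.random.rand(4096)          # flattened simulation field (illustrative)
sketched = jl_sketch(sample, k=256)    # ~16x smaller footprint in the buffer
```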