We introduce a machine learning framework for modeling the spatio-temporal dynamics of mass-constrained complex systems with hidden states, whose behavior can, in principle, be described by PDEs but for which explicit models are unavailable. The method extends the Equation-Free approach, enabling the data-driven reconstruction of reduced-order models (ROMs) without needing to identify the governing equations. Using manifold learning, we obtain a latent-space representation of the system evolution from data via delayed coordinates, in accordance with the Whitney and Takens embedding theorems. Linear (Proper Orthogonal Decomposition, POD) and nonlinear (Diffusion Maps, DMs) methods are employed to extract low-dimensional embeddings that capture the essential dynamics. Predictive ROMs are then learned within this latent space, and their evolution is lifted back to the original high-dimensional space by solving a pre-image problem. We show that both the POD and k-nearest neighbor (k-NN) lifting operators preserve mass, a key physical constraint in applications such as computational fluid dynamics and crowd dynamics. Our framework effectively reconstructs the solution operator of the underlying PDE without discovering the PDE itself, by leveraging a manifold-informed objective map that bridges multiple scales. For our illustrations, we use synthetic spatio-temporal data from the Hughes model, which couples a continuity PDE with an Eikonal equation describing optimal path selection in crowds. Results show that DM-based nonlinear embeddings outperform POD in reconstruction accuracy, producing more parsimonious and stable ROMs that remain accurate and integrable over long time horizons.
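To make the pipeline concrete, the following is a minimal, self-contained sketch (not the authors' code) of the three steps named above: delay-embedding the snapshots, building a Diffusion Maps latent space, and lifting a latent point back to physical space with a k-NN pre-image. All function names (`delay_embed`, `diffusion_maps`, `knn_lift`), the toy data, and the parameter choices are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def delay_embed(snapshots, d):
    """Stack d consecutive snapshots into delayed-coordinate vectors (Takens/Whitney)."""
    T, _ = snapshots.shape
    return np.hstack([snapshots[i:T - d + 1 + i] for i in range(d)])

def diffusion_maps(X, n_coords=2, eps=None):
    """Basic Diffusion Maps embedding with a Gaussian kernel of bandwidth eps."""
    sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    if eps is None:
        eps = np.median(sq_dists)          # common bandwidth heuristic
    K = np.exp(-sq_dists / eps)
    P = K / K.sum(axis=1, keepdims=True)   # row-stochastic Markov matrix
    evals, evecs = np.linalg.eig(P)
    order = np.argsort(-evals.real)
    # Skip the trivial constant eigenvector; scale coordinates by their eigenvalues.
    return (evecs.real[:, order] * evals.real[order])[:, 1:n_coords + 1]

def knn_lift(z_new, Z_train, X_train, k=5):
    """k-NN pre-image: a convex combination of the nearest training snapshots."""
    dist = np.linalg.norm(Z_train - z_new, axis=1)
    idx = np.argsort(dist)[:k]
    w = 1.0 / (dist[idx] + 1e-12)
    w /= w.sum()                           # weights sum to 1, so total mass is preserved
    return w @ X_train[idx]

# Toy usage: a traveling, unit-mass "density" profile on a 1-D grid.
x = np.linspace(0.0, 1.0, 100)
snapshots = np.array([np.exp(-200.0 * (x - 0.2 - 0.005 * t) ** 2) for t in range(120)])
snapshots /= snapshots.sum(axis=1, keepdims=True)   # enforce unit mass per snapshot

H = delay_embed(snapshots, d=3)       # delayed-coordinate vectors
Z = diffusion_maps(H, n_coords=2)     # low-dimensional latent embedding
X_aligned = snapshots[2:]             # snapshots aligned with the delay windows
x_rec = knn_lift(Z[50], Z, X_aligned, k=5)           # lift one latent point back
print("mass of lifted snapshot:", x_rec.sum())       # ~1.0 by construction
```

In an actual ROM workflow, a predictive model (e.g., a regression or neural surrogate) would be trained on the latent trajectory `Z`, integrated forward in latent space, and each predicted latent state lifted back via the pre-image step; the convex-combination lifting shown here is one simple way the mass constraint can be respected.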