Geospatial sensor data is essential for modern defense and security, offering indispensable 3D information for situational awareness. This data, gathered from sources like lidar sensors and optical cameras, allows for the creation of detailed models of operational environments. In this paper, we provide a comparative analysis of traditional representation methods, such as point clouds, voxel grids, and triangle meshes, alongside modern neural and implicit techniques like Neural Radiance Fields (NeRFs) and 3D Gaussian Splatting (3DGS). Our evaluation reveals a fundamental trade-off: traditional models offer robust geometric accuracy ideal for functional tasks like line-of-sight analysis and physics simulations, while modern methods excel at producing high-fidelity, photorealistic visuals but often lack geometric reliability. Based on these findings, we conclude that a hybrid approach is the most promising path forward. We propose a system architecture that combines a traditional mesh scaffold for geometric integrity with a neural representation like 3DGS for visual detail, managed within a hierarchical scene structure to ensure scalability and performance.
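To make the geometric side of the trade-off concrete, the following is a minimal sketch (our own illustration, not code from the paper) of the kind of exact query a triangle-mesh representation supports directly: a line-of-sight test via Möller–Trumbore ray-triangle intersection. The function names and the toy "wall" scene are illustrative assumptions; photometric representations such as NeRFs or 3DGS offer no comparably direct, exact surface query.

```python
import numpy as np

def ray_triangle_hit(origin, direction, v0, v1, v2, eps=1e-9):
    """Möller–Trumbore: distance t along the ray to the triangle, or None."""
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(direction, e2)
    det = np.dot(e1, p)
    if abs(det) < eps:                 # ray parallel to the triangle plane
        return None
    inv_det = 1.0 / det
    s = origin - v0
    u = np.dot(s, p) * inv_det         # first barycentric coordinate
    if u < 0.0 or u > 1.0:
        return None
    q = np.cross(s, e1)
    v = np.dot(direction, q) * inv_det # second barycentric coordinate
    if v < 0.0 or u + v > 1.0:
        return None
    t = np.dot(e2, q) * inv_det
    return t if t > eps else None

def line_of_sight(a, b, triangles):
    """True if the segment from observer a to target b is unobstructed."""
    d = b - a
    dist = np.linalg.norm(d)
    d = d / dist
    for (v0, v1, v2) in triangles:
        t = ray_triangle_hit(a, d, v0, v1, v2)
        if t is not None and t < dist:  # hit lies between a and b
            return False
    return True

# Toy scene: one wall triangle in the plane x = 0.5 blocks the view
# from an observer at the origin to a target at (1, 0, 0).
wall = [(np.array([0.5, -1.0, -1.0]),
         np.array([0.5,  1.0, -1.0]),
         np.array([0.5,  0.0,  2.0]))]
print(line_of_sight(np.array([0.0, 0.0, 0.0]),
                    np.array([1.0, 0.0, 0.0]), wall))  # → False
```

In the hybrid architecture the abstract proposes, queries like this would run against the mesh scaffold, while the co-registered 3DGS layer is consulted only for rendering.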