Neural signed distance functions (SDFs) have become a vital representation for encoding 3D shapes or scenes with neural networks. An SDF is an implicit function that can be queried for a signed distance at any 3D coordinate, from which a surface can be recovered. Although implicit functions work well for a single shape or scene, they struggle to represent multiple SDFs with high-fidelity geometry details, because the latent space encodes only limited information per SDF and geometry details are lost. To overcome these obstacles, we introduce a method that represents multiple SDFs in a common space, aiming to recover more high-fidelity geometry details with more compact latent representations. Our key idea is to take full advantage of both generalization-based and overfitting-based learning strategies, which allows us to preserve high-fidelity geometry details with compact latent codes. Within this framework, we also introduce a novel strategy for sampling training queries, which improves training efficiency and eliminates artifacts caused by the influence of other SDFs. We report numerical and visual evaluations on widely used benchmarks to validate our designs and to show advantages over the latest methods in terms of representation ability and compactness.
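For context on how a latent-conditioned neural SDF is queried, the sketch below shows a minimal DeepSDF-style setup in PyTorch: a per-shape latent code is concatenated with a 3D query point and mapped to a signed distance, whose zero level set defines the surface. This is an illustrative assumption for exposition, not the architecture proposed in the paper; the `SDFNetwork` name, layer sizes, and `latent_dim` are placeholders.

```python
# Minimal sketch of querying a latent-conditioned neural SDF (DeepSDF-style).
# The architecture, sizes, and names are illustrative assumptions,
# not the paper's actual design.
import torch
import torch.nn as nn


class SDFNetwork(nn.Module):
    """MLP that maps (latent code, 3D point) -> signed distance."""

    def __init__(self, latent_dim: int = 128, hidden_dim: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim + 3, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),  # scalar signed distance
        )

    def forward(self, latent: torch.Tensor, points: torch.Tensor) -> torch.Tensor:
        # latent: (N, latent_dim) per-shape code, broadcast to each query point
        # points: (N, 3) query coordinates
        return self.net(torch.cat([latent, points], dim=-1)).squeeze(-1)


if __name__ == "__main__":
    model = SDFNetwork()
    code = torch.randn(1, 128).expand(1024, -1)   # one shape's latent code
    queries = torch.rand(1024, 3) * 2.0 - 1.0     # query points in [-1, 1]^3
    sdf_values = model(code, queries)             # signed distances at queries
    # The surface is the zero level set {x : SDF(x) = 0}, typically extracted
    # with marching cubes over a dense grid of such queries.
    print(sdf_values.shape)  # torch.Size([1024])
```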