Scanning real-life scenes with modern registration devices typically yields incomplete point cloud representations, mostly due to limitations of the scanning process and 3D occlusions. Completing such partial representations therefore remains a fundamental challenge for many computer vision applications. Most existing approaches aim to solve this problem by learning to reconstruct individual 3D objects in a synthetic, uncluttered environment, which is far from real-life scenarios. In this work, we reformulate the problem of point cloud completion as an object hallucination task. To this end, we introduce a novel autoencoder-based architecture, called HyperPocket, that disentangles latent representations and, as a result, enables the generation of multiple variants of the completed 3D point clouds. We split point cloud processing into two disjoint data streams and leverage a hypernetwork paradigm to fill the spaces, dubbed pockets, left by the missing object parts. Consequently, the generated point clouds are not only smooth but also plausible and geometrically consistent with the scene. Our method offers performance competitive with other state-of-the-art models and enables a~plethora of novel applications.
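To make the two-stream, hypernetwork-driven design concrete, the following is a minimal conceptual sketch in PyTorch; it is not the authors' implementation, and all module names, layer sizes, and the simple PointNet-style encoders are illustrative assumptions. It only shows the general pattern: two latent codes (observed part and hallucinated part) are concatenated, and a hypernetwork emits the weights of a small target MLP that maps samples from a 3D prior to points filling the missing region.

\begin{verbatim}
# Conceptual sketch (not the authors' code): two encoders produce latents
# whose concatenation drives a hypernetwork that generates the weights of
# a small target MLP mapping prior samples to the completed point cloud.
# All sizes and names are illustrative.
import torch
import torch.nn as nn


class PointNetEncoder(nn.Module):
    """Order-invariant encoder: per-point MLP followed by max pooling."""
    def __init__(self, latent_dim=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, 128), nn.ReLU(),
            nn.Linear(128, latent_dim),
        )

    def forward(self, pts):                       # pts: (B, N, 3)
        return self.mlp(pts).max(dim=1).values    # (B, latent_dim)


class HyperNetwork(nn.Module):
    """Maps the joint latent code to flat weights of the target MLP."""
    def __init__(self, latent_dim, target_sizes):
        super().__init__()
        self.target_sizes = target_sizes          # [(in, out), ...]
        n_params = sum(i * o + o for i, o in target_sizes)
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, n_params),
        )

    def forward(self, z):                         # z: (B, latent_dim)
        return self.net(z)                        # (B, n_params)


def run_target_network(flat_w, sizes, x):
    """Apply the generated MLP to prior samples x: (B, M, 3) -> (B, M, 3)."""
    offset, h = 0, x
    for idx, (i, o) in enumerate(sizes):
        w = flat_w[:, offset:offset + i * o].view(-1, i, o); offset += i * o
        b = flat_w[:, offset:offset + o].unsqueeze(1);       offset += o
        h = torch.bmm(h, w) + b
        if idx < len(sizes) - 1:
            h = torch.relu(h)
    return h


# Toy forward pass: latent of the observed scene part plus a latent for
# the missing part (at test time the latter could be sampled from a prior
# to hallucinate multiple plausible completions).
sizes = [(3, 64), (64, 64), (64, 3)]
enc_existing, enc_missing = PointNetEncoder(), PointNetEncoder()
hyper = HyperNetwork(latent_dim=256, target_sizes=sizes)

existing = torch.rand(2, 1024, 3)                 # observed (partial) cloud
missing = torch.rand(2, 512, 3)                   # missing part (training)
z = torch.cat([enc_existing(existing), enc_missing(missing)], dim=-1)
prior = torch.randn(2, 2048, 3)                   # samples fed to target net
completion = run_target_network(hyper(z), sizes, prior)
print(completion.shape)                           # torch.Size([2, 2048, 3])
\end{verbatim}

Swapping the missing-part latent for samples from a simple prior distribution is what would allow generating multiple variants of the completed cloud, in line with the disentanglement described above; the actual architecture and training objective are detailed in the method section.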