3D visual grounding aims to ground a natural language description of a 3D scene, usually represented as 3D point clouds, to the target object region. Point clouds are sparse, noisy, and carry limited semantic information compared with 2D images. These inherent limitations make the 3D visual grounding problem more challenging. In this study, we propose 2D Semantics Assisted Training (SAT), which utilizes 2D image semantics in the training stage to ease point-cloud-language joint representation learning and assist 3D visual grounding. The main idea is to learn auxiliary alignments between rich, clean 2D object representations and the corresponding objects or mentioned entities in 3D scenes. SAT takes 2D object semantics, i.e., object label, image feature, and 2D geometric feature, as extra input during training but does not require such inputs during inference. By effectively utilizing 2D semantics in training, our approach boosts the accuracy on the Nr3D dataset from 37.7% to 49.2%, significantly surpassing the non-SAT baseline with the identical network architecture and inference inputs. Our approach outperforms the state of the art by large margins on multiple 3D visual grounding datasets, i.e., +10.4% absolute accuracy on Nr3D, +9.9% on Sr3D, and +5.6% on ScanRef.
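The following is a minimal, hypothetical PyTorch sketch of the training scheme described above, not the authors' implementation: 2D semantic features enter only through an auxiliary alignment loss that is active during training, so inference needs nothing beyond the 3D object features and the language query. All module names, feature dimensions, and the fusion/scoring details are illustrative assumptions.

```python
# Minimal sketch of 2D-semantics-assisted training (hypothetical, not the SAT code).
# The 2D branch (sem_encoder) is used only for an auxiliary alignment loss in
# training; the grounding path consumes 3D object features and language alone.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SATSketch(nn.Module):
    def __init__(self, d_3d=768, d_2d=2048, d_joint=768):
        super().__init__()
        self.obj_encoder = nn.Linear(d_3d, d_joint)   # stand-in for a point-cloud object encoder
        self.sem_encoder = nn.Linear(d_2d, d_joint)   # projects 2D label/image/geometric features
        self.grounding_head = nn.Linear(d_joint, 1)   # per-object matching score with the query

    def forward(self, obj_feats, lang_feat, sem_feats=None):
        # obj_feats: (B, N, d_3d) 3D object features; lang_feat: (B, d_joint) query embedding
        obj_emb = self.obj_encoder(obj_feats)
        scores = self.grounding_head(obj_emb * lang_feat.unsqueeze(1)).squeeze(-1)  # (B, N)
        aux_loss = obj_emb.new_zeros(())
        if sem_feats is not None and self.training:
            # Auxiliary alignment: pull each 3D object embedding toward its
            # corresponding 2D semantic embedding (training-time only).
            sem_emb = self.sem_encoder(sem_feats)      # sem_feats: (B, N, d_2d)
            aux_loss = 1.0 - F.cosine_similarity(obj_emb, sem_emb, dim=-1).mean()
        return scores, aux_loss
```

At inference time the model is simply called without `sem_feats`, matching the setting above in which 2D inputs are not required at test time.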