Image-only and pseudo-LiDAR representations are commonly used for monocular 3D object detection. However, methods based on them suffer from either failing to capture the spatial relationships among neighboring image pixels or struggling to handle the noisy nature of the monocular pseudo-LiDAR point cloud. To overcome these issues, in this paper we propose a novel object-centric voxel representation tailored for monocular 3D object detection. Specifically, voxels are built on each object proposal, and their sizes are adaptively determined by the 3D spatial distribution of the points, allowing the noisy point cloud to be organized effectively within a voxel grid. We show that this representation locates objects in 3D space accurately. Furthermore, prior works tend to estimate orientation from deep features extracted from the entire image or from a noisy point cloud. By contrast, we argue that the local RoI information from the object image patch alone, with a proper resizing scheme, is a better input, as it provides complete semantic cues while excluding irrelevant interference. In addition, we decompose the confidence mechanism in monocular 3D object detection by considering the relationship between 3D objects and their associated 2D boxes. Evaluated on KITTI, our method outperforms state-of-the-art methods by a large margin. The code will be made publicly available soon.
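To make the object-centric voxel idea concrete, below is a minimal sketch of how a per-proposal voxel grid with adaptively sized voxels might be built from a proposal's pseudo-LiDAR points. The function name, the fixed grid resolution, the percentile-based bounds, and the binary occupancy feature are all illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def object_centric_voxelize(points, grid_size=(12, 12, 12)):
    """Hypothetical sketch of an object-centric, adaptively sized voxel grid.

    points:    (N, 3) pseudo-LiDAR points falling inside one object proposal.
    grid_size: fixed number of voxels per axis; the metric size of each voxel
               adapts to the 3D spatial distribution of the points.
    """
    # Assumed robust bounds: percentiles instead of min/max so that outliers
    # from the noisy pseudo-LiDAR depth do not blow up the grid extent.
    lo = np.percentile(points, 2, axis=0)
    hi = np.percentile(points, 98, axis=0)
    voxel_size = (hi - lo) / np.asarray(grid_size, dtype=np.float32)

    # Assign each point to a voxel index inside the object-centric grid.
    idx = np.floor((points - lo) / np.maximum(voxel_size, 1e-6)).astype(int)
    inside = np.all((idx >= 0) & (idx < np.asarray(grid_size)), axis=1)
    idx = idx[inside]

    # Simple occupancy feature; a real model would aggregate richer
    # per-voxel features before feeding them to a detection head.
    occupancy = np.zeros(grid_size, dtype=np.float32)
    occupancy[idx[:, 0], idx[:, 1], idx[:, 2]] = 1.0
    return occupancy, lo, voxel_size
```

Because the grid bounds and voxel sizes are derived per proposal, the same fixed grid resolution covers nearby and distant objects alike, which is one way the noisy point cloud can be organized effectively within a voxel grid.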