Large vision-language models (VLMs) show strong multimodal understanding but still struggle with 3D spatial reasoning, such as distance estimation, size comparison, and cross-view consistency. Existing 3D-aware methods either depend on auxiliary 3D information or augment RGB-only VLMs with geometry encoders through shallow feature fusion. We propose SpaceMind, a multimodal large language model explicitly designed for spatial reasoning from RGB inputs alone. The model adopts a dual-encoder architecture, integrating VGGT as a spatial understanding encoder and InternViT as a 2D visual encoder. The key idea is to treat the camera representation as an active guiding modality rather than passive metadata. Specifically, SpaceMind introduces a lightweight Camera-Guided Modality Fusion module before the language model to replace shallow fusion. The module applies camera-conditioned biasing to spatial tokens, assigns them query-independent weights reflecting their geometric importance, and uses the camera embedding to gate the fused representation. Empirically, SpaceMind establishes new state-of-the-art results on VSI-Bench, SQA3D, and SPBench, surpassing both open and proprietary systems by large margins on VSI-Bench and SPBench. These results demonstrate that camera-guided modality fusion is an effective and practical inductive bias for equipping VLMs with genuinely spatially grounded intelligence. We will release code and model checkpoints to support future research.
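As a concrete illustration, the sketch below shows one way the three fusion operations described above (camera-conditioned biasing, query-independent importance weighting, and camera-gated fusion) could be wired together in PyTorch. It is a minimal, hypothetical rendering under our own assumptions: the class name CameraGuidedFusion, the sigmoid gating and MLP scorer, and all dimensions are illustrative choices, not the released SpaceMind implementation.

```python
# Hypothetical sketch of a camera-guided modality fusion step (all names,
# shapes, and design choices are assumptions, not the SpaceMind release).
import torch
import torch.nn as nn


class CameraGuidedFusion(nn.Module):
    """Fuses 2D visual tokens and spatial tokens under camera guidance."""

    def __init__(self, dim: int, cam_dim: int):
        super().__init__()
        # Camera-conditioned bias added to every spatial token.
        self.cam_bias = nn.Linear(cam_dim, dim)
        # Query-independent importance score per spatial token.
        self.importance = nn.Sequential(
            nn.Linear(dim, dim // 4), nn.GELU(), nn.Linear(dim // 4, 1)
        )
        # Camera-conditioned gate over the fused representation.
        self.cam_gate = nn.Linear(cam_dim, dim)
        self.proj = nn.Linear(dim, dim)

    def forward(self, visual_tokens, spatial_tokens, cam_embed):
        # visual_tokens:  (B, Nv, dim)  from the 2D encoder (e.g. InternViT)
        # spatial_tokens: (B, Ns, dim)  from the spatial encoder (e.g. VGGT)
        # cam_embed:      (B, cam_dim)  camera representation
        # 1) Camera-conditioned biasing of spatial tokens.
        spatial = spatial_tokens + self.cam_bias(cam_embed).unsqueeze(1)
        # 2) Query-independent weights reflecting geometric importance.
        weights = torch.sigmoid(self.importance(spatial))           # (B, Ns, 1)
        spatial = spatial * weights
        # 3) Concatenate both token streams and gate with the camera embedding.
        fused = torch.cat([visual_tokens, spatial], dim=1)           # (B, Nv+Ns, dim)
        gate = torch.sigmoid(self.cam_gate(cam_embed)).unsqueeze(1)  # (B, 1, dim)
        return self.proj(fused) * gate


if __name__ == "__main__":
    # Toy shapes only; real token counts depend on the encoders used.
    fusion = CameraGuidedFusion(dim=1024, cam_dim=256)
    v = torch.randn(2, 196, 1024)
    s = torch.randn(2, 196, 1024)
    c = torch.randn(2, 256)
    print(fusion(v, s, c).shape)  # torch.Size([2, 392, 1024])
```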