Vision-Language-Action (VLA) models have demonstrated remarkable performance on embodied tasks and shown promising potential for real-world applications. However, current VLAs still struggle to produce consistent and precise target-oriented actions, often generating redundant or unstable motions along their trajectories, which limits their applicability in time-sensitive scenarios. In this work, we attribute these redundant actions to the spatially uniform perception field of existing VLAs, which causes them to be distracted by target-irrelevant objects, especially in complex environments. To address this issue, we propose PosA-VLA, an efficient framework that anchors visual attention via pose-conditioned supervision, consistently guiding the model's perception toward task-relevant regions. The pose-conditioned anchor attention mechanism enables the model to better align instruction semantics with actionable visual cues, thereby improving the precision and efficiency of action generation. Moreover, our framework adopts a lightweight architecture and requires no auxiliary perception modules (e.g., segmentation or grounding networks), ensuring efficient inference. Extensive experiments verify that our method executes embodied tasks with precise and time-efficient behavior across diverse robotic manipulation benchmarks and generalizes robustly to a variety of challenging environments.
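To make the idea of pose-conditioned anchor attention concrete, the following is a minimal sketch, not the paper's implementation: it assumes a single anchor point obtained by projecting the target pose into normalized image coordinates, and adds a Gaussian spatial prior (hypothetical names pose_anchor_bias, anchored_attention; parameters grid_hw, sigma are illustrative) as a bias on attention logits over image patch tokens.

import torch
import torch.nn.functional as F

def pose_anchor_bias(anchor_xy, grid_hw=(16, 16), sigma=2.0):
    """Build a log-space attention bias from a normalized anchor point.

    anchor_xy: (B, 2) target location in [0, 1] image coordinates,
               assumed to come from pose-conditioned supervision.
    Returns: (B, H*W) bias to add to attention logits over patch tokens.
    """
    H, W = grid_hw
    ys = torch.linspace(0.0, 1.0, H)
    xs = torch.linspace(0.0, 1.0, W)
    gy, gx = torch.meshgrid(ys, xs, indexing="ij")           # (H, W) each
    grid = torch.stack([gx, gy], dim=-1).view(1, H * W, 2)   # patch centers
    d2 = ((grid - anchor_xy[:, None, :]) ** 2).sum(-1)       # squared distance
    # Gaussian prior in log space; sigma is measured in patch units.
    return -d2 / (2.0 * (sigma / max(H, W)) ** 2)

def anchored_attention(q, k, v, bias):
    """Scaled dot-product attention with an additive spatial bias.

    q: (B, 1, D) instruction/action query; k, v: (B, H*W, D) patch tokens.
    """
    logits = q @ k.transpose(-2, -1) / k.shape[-1] ** 0.5    # (B, 1, H*W)
    attn = F.softmax(logits + bias[:, None, :], dim=-1)      # anchored weights
    return attn @ v                                          # (B, 1, D)

In this sketch the prior simply sharpens attention around the anchored region rather than masking everything else out, which is one plausible way to keep perception task-focused without an auxiliary segmentation or grounding network.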