In recent years, Vision-Language-Action (VLA) models in embodied intelligence have developed rapidly. However, existing adversarial attack methods require costly end-to-end training and often generate conspicuous perturbation patches. To address these limitations, we propose ADVLA, a framework that applies adversarial perturbations directly to the features projected from the visual encoder into the textual feature space. ADVLA efficiently disrupts downstream action predictions under low-amplitude constraints, while attention guidance keeps the perturbations both focused and sparse. We introduce three strategies that enhance sensitivity, enforce sparsity, and concentrate perturbations. Experiments demonstrate that under an $L_{\infty}=4/255$ constraint, ADVLA combined with Top-K masking modifies fewer than 10% of the patches while achieving an attack success rate of nearly 100%. The perturbations are concentrated on critical regions and remain almost imperceptible in the overall image, and a single iteration step takes only about 0.06 seconds, significantly outperforming conventional patch-based attacks. In summary, ADVLA effectively weakens the downstream action predictions of VLA models under low-amplitude, locally sparse conditions, avoids the high training costs and conspicuous perturbations of traditional patch attacks, and demonstrates unique effectiveness and practical value for attacking VLA feature spaces.
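The combination of an $L_{\infty}$ budget with Top-K patch masking described above can be sketched generically. The snippet below is a minimal, hypothetical illustration of a sign-gradient step restricted to the top-k most salient image patches; the function names, the pixel-space formulation, and the saliency measure (per-patch gradient norm standing in for attention guidance) are assumptions for illustration, not the actual ADVLA implementation.

```python
import numpy as np

def topk_patch_mask(grad, patch=16, k_frac=0.10):
    """Binary pixel mask keeping only the top-k_frac most salient patches.

    grad: (H, W, C) gradient of the attack loss w.r.t. the image.
    Saliency here is the per-patch L2 norm of the gradient (a stand-in
    for the attention guidance used by the method).
    """
    H, W, _ = grad.shape
    gh, gw = H // patch, W // patch
    blocks = grad.reshape(gh, patch, gw, patch, -1)
    sal = np.sqrt((blocks ** 2).sum(axis=(1, 3, 4)))  # (gh, gw) saliency map
    k = max(1, int(k_frac * gh * gw))
    thresh = np.partition(sal.ravel(), -k)[-k]
    mask_small = (sal >= thresh).astype(grad.dtype)
    # Upsample the patch-level mask to pixel resolution.
    return np.kron(mask_small, np.ones((patch, patch)))[..., None]

def pgd_step(img, grad, delta, eps=4 / 255, alpha=1 / 255,
             patch=16, k_frac=0.10):
    """One masked sign-gradient (PGD-style) step under an L_inf budget."""
    mask = topk_patch_mask(grad, patch, k_frac)
    delta = delta + alpha * np.sign(grad) * mask
    delta = np.clip(delta, -eps, eps)             # enforce L_inf <= eps
    return np.clip(img + delta, 0.0, 1.0) - img   # keep pixels in [0, 1]
```

With a 224x224 image and 16x16 patches, `k_frac=0.10` selects 19 of 196 patches, so each step touches under 10% of the patches while every pixel change stays within the 4/255 budget.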