Large Vision-Language Models (LVLMs) often suffer from object hallucination, generating text inconsistent with their visual inputs, which can critically undermine their reliability. Existing inference-time interventions to mitigate this issue present a challenging trade-off: while methods that steer internal states or adjust output logits can be effective, they often incur substantial computational overhead, typically requiring extra forward passes. This efficiency bottleneck can limit their practicality for real-world, latency-sensitive deployments. In this work, we address this trade-off with Residual-Update Directed DEcoding Regulation (RUDDER), a low-overhead framework that steers LVLMs towards visually grounded generation. RUDDER is built on two key innovations: (1) the Contextual Activation Residual Direction (CARD) vector, a per-sample visual-evidence vector extracted from the residual update of a self-attention layer during a single, standard forward pass; and (2) a Bayesian-inspired adaptive gate that performs token-wise injection, applying a corrective signal whose strength is conditioned on the model's deviation from the visual context. Extensive experiments on key hallucination benchmarks, including POPE and CHAIR, indicate that RUDDER achieves performance comparable to state-of-the-art methods while introducing negligible computational latency, validating RUDDER as a pragmatic and effective approach for improving LVLMs' reliability without a significant compromise on efficiency.
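The two mechanisms above can be illustrated with a minimal numpy sketch. This is not the paper's implementation: the function names (`card_vector`, `gated_inject`), the choice of mean-pooling over visual-token positions, and the simple `1 - cosine` gate standing in for the Bayesian-inspired gate are all illustrative assumptions.

```python
import numpy as np

def card_vector(resid_update, visual_token_mask):
    """Hypothetical sketch of the CARD vector: average the self-attention
    residual updates over visual-token positions (captured during one
    standard forward pass) and normalize to get a per-sample direction."""
    v = resid_update[visual_token_mask].mean(axis=0)
    return v / (np.linalg.norm(v) + 1e-8)

def gated_inject(hidden, card, base_strength=0.1):
    """Token-wise injection with an adaptive gate (simplified proxy for the
    Bayesian-inspired gate): the corrective signal grows as a token's hidden
    state deviates from the visual direction, i.e. as cosine alignment drops."""
    h_norm = hidden / (np.linalg.norm(hidden, axis=-1, keepdims=True) + 1e-8)
    cos = h_norm @ card                 # per-token alignment with visual evidence
    gate = base_strength * (1.0 - cos)  # stronger correction when misaligned
    return hidden + gate[:, None] * card
```

A token already aligned with the visual direction receives (near-)zero correction, so well-grounded generation is left untouched while deviating tokens are nudged back, which is the behavior the abstract describes.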