While Vision-Language Models (VLMs) are increasingly used to generate reward signals for training embodied agents to follow instructions, our research reveals that agents guided by VLM rewards often underperform compared to those employing only intrinsic (exploration-driven) rewards, contradicting expectations set by recent work. We hypothesize that false positive rewards -- instances where unintended trajectories are incorrectly rewarded -- are more detrimental than false negatives. Our analysis confirms this hypothesis, revealing that the widely used cosine similarity metric is prone to false positive reward estimates. To address this, we introduce BiMI (Binary Mutual Information), a novel reward function designed to mitigate noise. BiMI significantly enhances learning efficiency across diverse and challenging embodied navigation environments. Our findings offer a nuanced understanding of how different types of reward noise impact agent learning and highlight the importance of addressing multimodal reward signal noise when training embodied agents.
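For context, below is a minimal sketch of the cosine-similarity VLM reward that the abstract identifies as the widely used (and false-positive-prone) baseline. It assumes a CLIP-style encoder via Hugging Face Transformers; the specific model, reward scaling, and the BiMI formulation itself are not given in the abstract.

```python
# Sketch of a cosine-similarity VLM reward, assuming a CLIP-style encoder.
# The model choice and lack of scaling/thresholding are illustrative assumptions,
# not the paper's exact setup.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def vlm_reward(observation: Image.Image, instruction: str) -> float:
    """Reward = cosine similarity between the agent's current visual
    observation and the language instruction in the VLM's joint space."""
    inputs = processor(text=[instruction], images=observation,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        img_emb = model.get_image_features(pixel_values=inputs["pixel_values"])
        txt_emb = model.get_text_features(input_ids=inputs["input_ids"],
                                          attention_mask=inputs["attention_mask"])
    # Normalize so the dot product equals cosine similarity.
    img_emb = img_emb / img_emb.norm(dim=-1, keepdim=True)
    txt_emb = txt_emb / txt_emb.norm(dim=-1, keepdim=True)
    return (img_emb @ txt_emb.T).item()
```

Because this similarity is dense and continuous, observations only loosely related to the instruction can still receive substantial reward, which is the false-positive failure mode the abstract highlights.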


