Multimodal large language models (MLLMs) have demonstrated impressive capabilities in visual reasoning and text generation. While previous studies have explored the application of MLLMs to detecting out-of-context (OOC) misinformation, our empirical analysis reveals two persistent challenges in this paradigm. Evaluating the representative GPT-4o model on both direct reasoning and evidence-augmented reasoning, we find that MLLMs struggle to capture deeper relationships, specifically cases in which the image and text are not directly connected but are associated through underlying semantic links. Moreover, noise in the retrieved evidence further impairs detection accuracy. To address these challenges, we propose CMIE, a novel OOC misinformation detection framework that incorporates a Coexistence Relationship Generation (CRG) strategy and an Association Scoring (AS) mechanism. CMIE identifies underlying coexistence relationships between images and text, and selectively leverages relevant evidence to enhance misinformation detection. Experimental results demonstrate that our approach outperforms existing methods.