Fake news detection methods based on writing style have achieved remarkable progress. However, as adversaries increasingly imitate the style of authentic news, the effectiveness of such approaches is gradually diminishing. Recent research has explored incorporating large language models (LLMs) to enhance fake news detection. Yet, despite their transformative potential, LLMs remain largely underexploited for fake news detection, with their real-world adoption hampered by shallow exploration of their capabilities, ambiguous usability, and prohibitive inference costs. In this paper, we propose a novel fake news detection framework, dubbed FactGuard, which leverages LLMs to extract event-centric content, thereby reducing the impact of writing style on detection performance. Furthermore, our approach introduces a dynamic usability mechanism that identifies contradictions and ambiguous cases in factual reasoning, adaptively incorporating LLM advice to improve decision reliability. To ensure efficiency and practical deployability, we employ knowledge distillation to derive FactGuard-D, enabling the framework to operate effectively in cold-start and resource-constrained scenarios. Comprehensive experiments on two benchmark datasets demonstrate that our approach consistently outperforms existing methods in both robustness and accuracy, effectively addressing the challenges of style sensitivity and LLM usability in fake news detection.