Logical reasoning is a core challenge in natural language understanding and a fundamental capability of artificial intelligence, underpinning scientific discovery, mathematical theorem proving, and complex decision-making. Despite the remarkable progress of large language models (LLMs), most current approaches still rely on forward reasoning paradigms, generating step-by-step rationales from premises to conclusions. However, such methods often suffer from redundant inference paths, hallucinated steps, and semantic drift, resulting in inefficient and unreliable reasoning. In this paper, we propose Hypothesis-driven Backward Logical Reasoning (HBLR), a novel framework whose core idea is to integrate confidence-aware symbolic translation with hypothesis-driven backward reasoning. In the translation phase, only high-confidence spans are converted into a logical form such as first-order logic (FOL), while uncertain content remains in natural language; a translation reflection module ensures semantic fidelity by evaluating the symbolic output and reverting lossy translations to text when necessary. In the reasoning phase, HBLR simulates human deductive thinking by assuming the conclusion is true and recursively verifying the premises that would support it; a reasoning reflection module then identifies and corrects flawed inference steps, enhancing logical coherence. Extensive experiments on five reasoning benchmarks demonstrate that HBLR consistently outperforms strong baselines in both accuracy and efficiency.
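For intuition, the sketch below illustrates the general pattern of hypothesis-driven backward chaining that the reasoning phase follows: the conclusion is assumed true and its supporting premises are verified recursively. This is a minimal toy over ground atoms; the names (`Rule`, `backward_prove`), the exact-match premise check, and the depth guard are illustrative assumptions, not the paper's implementation, which operates over a hybrid of FOL and natural-language content with reflection modules.

```python
from dataclasses import dataclass

@dataclass
class Rule:
    head: str         # the conclusion this rule can derive
    body: list[str]   # premises that must all hold for the head to hold

def backward_prove(goal: str, facts: set[str], rules: list[Rule],
                   depth: int = 0, max_depth: int = 10) -> bool:
    """Assume `goal` is true, then recursively verify its premises."""
    if depth > max_depth:   # guard against cyclic rule bases
        return False
    if goal in facts:       # base case: the hypothesis reduces to a known fact
        return True
    # Expand every rule whose head matches the current hypothesis.
    for rule in rules:
        if rule.head == goal and all(
            backward_prove(p, facts, rules, depth + 1, max_depth)
            for p in rule.body
        ):
            return True
    return False

# Toy knowledge base: Human(socrates) yields Mortal(socrates), which yields Dies(socrates).
facts = {"Human(socrates)"}
rules = [
    Rule(head="Mortal(socrates)", body=["Human(socrates)"]),
    Rule(head="Dies(socrates)", body=["Mortal(socrates)"]),
]
print(backward_prove("Dies(socrates)", facts, rules))  # True
```

In HBLR itself, the exact-match base case would presumably be replaced by model-based verification over the mixed symbolic and textual representation, with the reasoning reflection module re-checking each expansion for flawed steps.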