Large Language Models (LLMs) excel at generating natural language answers, yet their outputs often remain unverifiable and difficult to trace. Knowledge Graphs (KGs) offer a complementary strength: they represent entities and their relationships in structured form, providing a foundation for more reliable reasoning. We propose a novel framework that integrates LLM reasoning with KGs by linking each step of the reasoning process to graph-structured data. This grounding turns intermediate ``thoughts'' into interpretable traces that remain consistent with external knowledge. Our approach incorporates multiple reasoning strategies, namely Chain-of-Thought (CoT), Tree-of-Thought (ToT), and Graph-of-Thought (GoT), and is evaluated on GRBench, a benchmark for domain-specific graph reasoning. Our experiments show state-of-the-art (SOTA) performance, with at least a 26.5\% improvement over CoT baselines. Beyond accuracy, we analyze how reasoning-step depth, branching structure, and model size influence reasoning quality, offering insights into the conditions under which grounded reasoning is most effective. Together, these contributions show that grounding LLMs in structured knowledge yields both higher accuracy and greater interpretability on complex reasoning tasks.
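To make the grounding idea concrete, the sketch below illustrates, in Python, one way a reasoning trace can be checked step by step against a knowledge graph: an intermediate ``thought'' is accepted only if its supporting triple exists in the graph. This is a minimal illustration under stated assumptions, not the paper's implementation; the toy graph, the example triples, and the function names are all hypothetical.

```python
# Minimal sketch of KG-grounded step-by-step reasoning (illustrative only;
# the graph contents and names below are assumptions, not the paper's code).

# Toy KG: adjacency map from head entity to (relation, tail entity) pairs.
KG = {
    "aspirin": [("treats", "headache"), ("interacts_with", "warfarin")],
    "warfarin": [("is_a", "anticoagulant")],
}

def grounded(head: str, relation: str, tail: str) -> bool:
    """Check whether a proposed triple exists in the graph."""
    return (relation, tail) in KG.get(head, [])

def run_chain(steps):
    """Chain-of-Thought style loop: accept a step only if its supporting
    triple is grounded in the KG; otherwise stop and report the ungrounded
    step instead of continuing from an unverified claim."""
    trace = []
    for claim, triple in steps:
        if not grounded(*triple):
            return trace, f"ungrounded step rejected: {claim}"
        trace.append((claim, triple))  # interpretable, verifiable step
    return trace, "all steps grounded"

# A hypothetical reasoning chain an LLM might propose.
chain = [
    ("Aspirin treats headache.", ("aspirin", "treats", "headache")),
    ("Aspirin interacts with warfarin.", ("aspirin", "interacts_with", "warfarin")),
    ("Warfarin is an anticoagulant.", ("warfarin", "is_a", "anticoagulant")),
]

trace, status = run_chain(chain)
print(status)  # -> all steps grounded
for claim, triple in trace:
    print(claim, triple)  # each thought linked to its KG evidence
```

Tree-of-Thought and Graph-of-Thought variants would apply the same per-step grounding check to multiple candidate branches rather than to a single linear chain.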