Recent advances in large language models (LLMs) and dense retrievers have driven significant progress in retrieval-augmented generation (RAG). However, existing approaches face critical challenges in complex reasoning-oriented multi-hop retrieval tasks: 1) Ineffective reasoning-oriented planning: prior methods struggle to generate robust multi-step plans for complex queries, as rule-based decomposers perform poorly on out-of-template questions. 2) Suboptimal reasoning-driven retrieval: related methods rely on limited query reformulation, leading to iterative retrieval loops that often fail to locate golden documents. 3) Insufficient reasoning-guided filtering: prevailing methods lack the fine-grained reasoning needed to filter salient information from noisy retrieval results, hindering effective utilization of retrieved knowledge. Fundamentally, these limitations all stem from the weak coupling between retrieval and reasoning in current RAG architectures. We introduce the Orchestrated Planner-Executor Reasoning Architecture (OPERA), a novel reasoning-driven retrieval framework. OPERA's Goal Planning Module (GPM) decomposes questions into sub-goals, which are executed by a Reason-Execute Module (REM) with specialized components for precise reasoning and effective retrieval. To train OPERA, we propose Multi-Agents Progressive Group Relative Policy Optimization (MAPGRPO), a novel variant of GRPO. Experiments on complex multi-hop benchmarks show OPERA's superior performance, validating both the MAPGRPO training method and OPERA's architectural design.
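To make the planner-executor split concrete, the following is a minimal, self-contained sketch of the orchestration loop the abstract describes: a planner decomposes the question into sub-goals, and an executor retrieves and filters evidence per sub-goal, conditioning later hops on earlier evidence. All names (`plan_subgoals`, `reason_and_retrieve`, the hard-coded decomposition, and the toy keyword retriever) are illustrative assumptions, not OPERA's actual API or implementation.

```python
# Conceptual sketch of a planner-executor RAG loop in the spirit of OPERA's
# GPM/REM split. Everything here is a stand-in: a real system would use
# trained LLM modules and a dense retriever, not keyword overlap over a list.

from dataclasses import dataclass, field

CORPUS = [
    "Marie Curie was born in Warsaw.",
    "Warsaw is the capital of Poland.",
]


@dataclass
class SubGoal:
    query: str                                   # reasoning-oriented sub-question
    evidence: list[str] = field(default_factory=list)


def plan_subgoals(question: str) -> list[SubGoal]:
    # Stand-in for the Goal Planning Module (GPM): a trained planner would
    # decompose the question; here a two-hop decomposition is hard-coded.
    return [SubGoal("Where was Marie Curie born?"),
            SubGoal("What country is that city the capital of?")]


def reason_and_retrieve(goal: SubGoal, context: list[str]) -> list[str]:
    # Stand-in for the Reason-Execute Module (REM): reformulate the sub-goal
    # using earlier evidence, retrieve, and keep salient passages.
    # Toy version: keyword overlap between (query + prior evidence) and docs.
    terms = set(goal.query.lower().split()) | {
        w.lower() for passage in context for w in passage.split()
    }
    return [doc for doc in CORPUS
            if terms & set(doc.lower().rstrip(".").split())]


def answer(question: str) -> str:
    context: list[str] = []
    for goal in plan_subgoals(question):
        goal.evidence = reason_and_retrieve(goal, context)
        context.extend(goal.evidence)            # later hops see earlier evidence
    # Toy "synthesis": deduplicated evidence; a real system would generate
    # the final answer with an LLM conditioned on the gathered evidence.
    return " ".join(dict.fromkeys(context))


print(answer("In which country was Marie Curie born?"))
```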
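For context on the training objective, MAPGRPO is described as a variant of GRPO, whose core mechanism normalizes rewards within a group of sampled responses rather than using a learned value function. The following is a minimal sketch of the standard group-relative advantage from GRPO; the progressive, per-agent training schedule that MAPGRPO adds is the paper's contribution and is not reconstructed here.

```latex
% Standard GRPO group-relative advantage for a group of G sampled responses
% with scalar rewards r_1, ..., r_G (MAPGRPO is described as a variant of this):
\[
  \hat{A}_i \;=\; \frac{r_i - \operatorname{mean}\bigl(\{r_1,\dots,r_G\}\bigr)}
                       {\operatorname{std}\bigl(\{r_1,\dots,r_G\}\bigr)},
  \qquad i = 1,\dots,G.
\]
```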