Large language models (LLMs) tackle complex tasks by generating long chains of thought or "reasoning traces" that act as latent variables in the generation of an output given a query. A model's ability to generate such traces can be optimized with reinforcement learning (RL) to improve the traces' utility in predicting an answer. This optimization comes at a high computational cost, especially for narrative-related tasks that involve retrieving and processing many tokens. To address this, we propose LiteReason, a latent reasoning method that can be interleaved with standard token sampling and easily combined with RL techniques. LiteReason employs a lightweight Reasoning Projector module, trained to produce continuous latent tokens that help the model 'skip' reasoning steps. During RL, the policy model decides when to activate the projector, switching between latent and discrete reasoning as needed. Experimental results on plot hole detection and book chapter generation show that our method outperforms latent reasoning baselines and comes close to matching non-latent RL training, while reducing final reasoning length by 77-92%. Overall, LiteReason guides RL training to a more efficient part of the performance-computation tradeoff curve.
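As a rough illustration of the interleaving described above, the sketch below shows how a small projector over the model's last hidden state could emit a continuous latent token that is fed back in place of a sampled discrete token. The module name `ReasoningProjector`, the bottleneck MLP architecture, and the HuggingFace-style `inputs_embeds` interface are illustrative assumptions, not the paper's actual implementation; the switch `use_latent` stands in for the RL-learned policy decision.

```python
import torch
import torch.nn as nn


class ReasoningProjector(nn.Module):
    """Hypothetical lightweight projector: maps the model's last hidden state
    to a continuous latent token that is fed back as the next input embedding,
    letting the model 'skip' explicit reasoning tokens. Architecture assumed."""

    def __init__(self, hidden_size: int, proj_size: int = 256):
        super().__init__()
        # Small bottleneck MLP; LiteReason's actual projector may differ.
        self.net = nn.Sequential(
            nn.Linear(hidden_size, proj_size),
            nn.GELU(),
            nn.Linear(proj_size, hidden_size),
        )

    def forward(self, last_hidden: torch.Tensor) -> torch.Tensor:
        # last_hidden: (batch, hidden_size) -> continuous latent token embedding
        return self.net(last_hidden)


def interleaved_step(model, projector, input_embeds, use_latent: bool):
    """One decoding step that either emits a discrete token (standard sampling,
    greedy here for simplicity) or a continuous latent token from the projector.
    `model` is assumed to be a HuggingFace-style causal LM that accepts
    `inputs_embeds` and returns logits and hidden states."""
    out = model(inputs_embeds=input_embeds, output_hidden_states=True)
    last_hidden = out.hidden_states[-1][:, -1, :]            # (batch, hidden)
    if use_latent:
        next_token = None
        next_embed = projector(last_hidden).unsqueeze(1)      # latent "token"
    else:
        next_token = out.logits[:, -1, :].argmax(dim=-1)      # (batch,)
        next_embed = model.get_input_embeddings()(next_token).unsqueeze(1)
    return next_token, torch.cat([input_embeds, next_embed], dim=1)
```

In this sketch the latent path bypasses the vocabulary entirely, so a single continuous step can stand in for a span of discrete reasoning tokens; the RL policy would decide per step whether `use_latent` is true.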