Chain-of-thought (CoT) prompting combined with few-shot in-context learning (ICL) has unlocked significant reasoning capabilities in large language models (LLMs). However, ICL with CoT examples is ineffective on novel tasks when pre-training knowledge is insufficient. We study this problem in a controlled setting using the CoT-ICL Lab framework and propose meta-training techniques for learning novel abstract reasoning tasks in-context. Although CoT examples facilitate reasoning, we observe that including them excessively during meta-training degrades performance when CoT supervision is limited. To mitigate this behavior, we propose CoT-Recipe, a formal approach for modulating the mix of CoT and non-CoT examples in meta-training sequences. We demonstrate that careful modulation via CoT-Recipe can increase the accuracy of transformers on novel tasks by up to 300%, even when no CoT examples are available in-context. We confirm the broader effectiveness of these techniques by applying them to pretrained LLMs (the Qwen2.5 series) on symbolic reasoning tasks, observing accuracy gains of up to 130%.
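For illustration, a minimal sketch of how a CoT-Recipe-style mix of CoT and non-CoT examples might be modulated when assembling a meta-training sequence; the `Example` structure, the `render` helper, and the single `cot_probability` knob are assumptions made for this sketch, not the paper's actual implementation.

```python
import random
from dataclasses import dataclass


@dataclass
class Example:
    """One in-context example: a question, optional chain-of-thought, and answer."""
    question: str
    chain_of_thought: str  # intermediate reasoning steps (CoT supervision)
    answer: str


def render(example: Example, include_cot: bool) -> str:
    """Serialize an example with or without its CoT supervision."""
    if include_cot:
        return f"Q: {example.question}\nCoT: {example.chain_of_thought}\nA: {example.answer}"
    return f"Q: {example.question}\nA: {example.answer}"


def build_meta_training_sequence(
    examples: list[Example], cot_probability: float, rng: random.Random
) -> str:
    """Build one meta-training sequence, including each example's CoT with
    probability `cot_probability` -- the knob a recipe would schedule to
    control the CoT / non-CoT mix across training sequences."""
    return "\n\n".join(
        render(ex, rng.random() < cot_probability) for ex in examples
    )
```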