While scaling laws for large language models (LLMs) during pre-training have been extensively studied, their behavior under reinforcement learning (RL) post-training remains largely unexplored. This paper presents a systematic empirical investigation of scaling behaviors in RL-based post-training, with a particular focus on mathematical reasoning. Based on experiments across the full Qwen2.5 dense model series (0.5B to 72B), we characterize how model scale, data volume, and computational budget interact to shape performance. Our analysis yields four key findings: (1) Larger models consistently exhibit superior learning efficiency in terms of both compute and data. (2) The relationship between test loss, compute, and data can be modeled by a predictive power law that is robust across both base and instruction-tuned models. (3) Although larger models learn more efficiently, the analytical learning-efficiency term k(N) in the power law reveals a latent saturation trend as model size continues to increase. (4) In data-constrained regimes, repeated reuse of high-quality data proves highly effective, as final performance is governed primarily by the total number of optimization steps rather than by the uniqueness of samples. Collectively, these results provide a principled foundation and practical guidelines for efficiently scaling the reasoning capabilities of LLMs through RL post-training.
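As a concrete illustration of how such a loss-compute-data relationship can be fitted in practice, the sketch below fits a saturating power law of test loss against RL compute with a model-size-dependent efficiency coefficient k(N). The functional form, parameter names, and toy data are assumptions for illustration only and are not the paper's fitted parameterization.

```python
# Minimal sketch (illustrative only): fitting a saturating power law
# L(C, N) = L_inf + a * (k(N) * C) ** (-b), with k(N) = k0 * N ** alpha,
# to (compute, model size, test loss) triples. This functional form and the
# toy data are assumptions, not the paper's exact fitted parameterization.
import numpy as np
from scipy.optimize import curve_fit

def loss_power_law(X, L_inf, a, b, k0, alpha):
    """Test loss as a function of RL compute C and parameter count N."""
    C, N = X             # compute in units of 1e19 FLOPs, size in billions of params
    k = k0 * N ** alpha  # size-dependent learning-efficiency term k(N)
    return L_inf + a * (k * C) ** (-b)

# Toy measurements standing in for real RL post-training runs.
C = np.array([1.0, 5.0, 10.0, 1.0, 5.0, 10.0])      # compute (1e19 FLOPs)
N = np.array([0.5, 0.5, 0.5, 72.0, 72.0, 72.0])     # model size (billions of params)
L = np.array([1.20, 1.10, 1.05, 0.90, 0.82, 0.78])  # observed test loss

popt, _ = curve_fit(
    loss_power_law, (C, N), L,
    p0=[0.5, 0.5, 0.5, 1.0, 0.3],
    bounds=([0.0, 0.0, 0.0, 1e-3, 0.0], [2.0, 5.0, 2.0, 10.0, 1.0]),
    maxfev=20000,
)
print(dict(zip(["L_inf", "a", "b", "k0", "alpha"], np.round(popt, 3))))
```

Under this kind of parameterization, a fitted exponent alpha well below 1 would correspond to the saturation trend in k(N) described in finding (3); the actual form and fitted values should be taken from the paper itself.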